DETAILED DESCRIPTION

Overview

Electronic devices, such as integrated avionics systems, are typically utilized by one or more members of a flight crew (e.g., the pilot and/or the co-pilot) to navigate an aircraft. Integrated avionics systems may employ primary flight display(s) (PFDs) and multifunction display(s) (MFDs) to furnish primary flight control, navigational, and other information to the flight crew of the aircraft. Additionally, the integrated avionics systems may also employ an avionics control and display unit (CDU) that is configured to provide control functionality to the PFD and/or the MFD and to convey navigation information representing an area the aircraft is traversing. While integrated avionics systems may provide functionality for flight crew and/or autopilot navigation of the aircraft, these systems lack the ability to land the aircraft without pilot and/or co-pilot intervention (e.g., in an emergency situation). Accordingly, autoland systems and processes for landing an aircraft without pilot intervention are described. In implementations, the autoland system includes a memory operable to store one or more modules and at least one processor coupled to the memory. The processor is operable to execute the one or more modules to identify a plurality of potential destinations for an aircraft; calculate a merit for each potential destination identified; select a destination based upon the merit; and create a route from a current position of the aircraft to an approach fix associated with the destination that accounts for the terrain characteristic(s) and/or obstacle characteristic(s). The processor can also cause the aircraft to traverse the route; determine a final approach segment associated with the route; identify terrain characteristic(s) and/or obstacle characteristic(s) associated with the final approach segment; and determine an adjusted final approach segment accounting for the terrain characteristic(s) and/or obstacle characteristic(s). The processor can also cause the aircraft to land at the destination without requiring pilot intervention. In another implementation, the autoland system includes a memory operable to store one or more modules, and at least one processor coupled to the memory and operably coupled to at least one of an engine of the aircraft, a braking system of the aircraft, or a control column of the aircraft. The processor is operable to execute the one or more modules to cause the processor to identify potential destinations for an aircraft. The processor can also calculate a merit for each destination identified; select a destination based upon the merit; receive terrain data and/or obstacle data, the data including terrain characteristic(s) and/or obstacle characteristic(s); and create a route from a current position of the aircraft to an approach fix associated with the destination, the route accounting for the terrain characteristic(s) and/or obstacle characteristic(s). The processor can also cause the aircraft to traverse the route; determine a final approach segment associated with the route; identify terrain characteristic(s) and/or obstacle characteristic(s) associated with the final approach segment; and determine an adjusted final approach segment accounting for the terrain characteristic(s) and/or obstacle characteristic(s). The processor can also cause the aircraft to land at the destination without requiring pilot intervention.
In one or more implementations, a process for autolanding an aircraft includes identifying potential destinations for an aircraft. The process also includes calculating a merit for each destination identified; selecting a destination based upon the merit; receiving terrain data and/or obstacle data, the data including terrain characteristic(s) and/or obstacle characteristic(s); and creating a route from a current position of the aircraft to an approach fix associated with the destination, the route accounting for the terrain characteristic(s) and/or obstacle characteristic(s). The process also includes causing the aircraft to traverse the route; determining a final approach segment associated with the route; identifying terrain characteristic(s) and/or obstacle characteristic(s) associated with the final approach segment; and determining an adjusted final approach segment accounting for the terrain characteristic(s) and/or obstacle characteristic(s). The process also includes causing the aircraft to land at the destination without requiring pilot intervention.

Example Implementations

FIGS. 1A and 1B illustrate an example implementation of an integrated avionics system 100 within an aircraft. The integrated avionics system 100 may include one or more primary flight displays (PFDs) 102, one or more multifunction displays (MFD) 104, and one or more multi-product avionics control and display units (CDU) 106. For instance, in the implementation illustrated in FIG. 1A, the integrated avionics system 100 may be configured for use in an aircraft that is flown by two pilots (e.g., a pilot and a copilot). In this implementation, the integrated avionics system 100 may include a first PFD 102(1), a second PFD 102(2), an MFD 104, a first CDU 106(1), a second CDU 106(2), and a third CDU 106(3) that are mounted in the aircraft's instrument panel 108. As shown, the MFD 104 is mounted generally in the center of the instrument panel 108 so that it may be accessed by either pilot (e.g., by either the pilot or the copilot). The first PFD 102(1) and the first CDU 106(1) are mounted in the instrument panel 108 generally to the left of the MFD 104 for viewing and access by the pilot. Similarly, the second PFD 102(2) and the second CDU 106(2) are mounted in the instrument panel 108 generally to the right of the MFD 104 for viewing and access by the aircraft's copilot or other crew member or passenger. The third CDU 106(3) may be mounted between the first and second CDUs 106(1), 106(2). In implementations, the CDUs 106 may be positioned within the instrument panel 108 so that they may be readily viewed and/or accessed by the pilot flying the aircraft (which could be either the pilot or copilot). The PFDs 102 may be configured to display primary flight information, such as aircraft attitude, altitude, heading, vertical speed, and so forth. In implementations, the PFDs 102 may display primary flight information via a graphical representation of basic flight instruments such as an attitude indicator, an airspeed indicator, an altimeter, a heading indicator, a course deviation indicator, and so forth. The PFDs 102 may also display other information providing situational awareness to the pilot such as terrain information, ground proximity warning information, and so forth.
As shown in FIG. 1B, primary flight information may be generated by one or more flight sensor data sources including, for example, one or more attitude, heading, angular rate, and/or acceleration information sources such as attitude and heading reference systems (AHRS) 110, one or more air data information sources such as air data computers (ADCs) 112, and/or one or more angle of attack information sources. For instance, the AHRSs 110 may be configured to provide information such as attitude, rate of turn, slip, and skid, while the ADCs 112 may be configured to provide information including airspeed, altitude, vertical speed, and outside air temperature. Other configurations are possible. Integrated avionics units (IAUs) may aggregate the primary flight information from the AHRS 110 and ADC 112 and, in one example configuration, provide the information to the PFDs 102 via an avionics data bus 116. In other examples, the various IAUs may directly communicate with each other and other system components. The IAUs may also function as a combined communications and navigation radio. For example, the IAUs may include a two-way VHF communications transceiver, a VHF navigation receiver with glide slope, a global positioning system (GPS) receiver, and so forth. As shown, each integrated avionics unit may be paired with a primary flight display, which may function as a controlling unit for the integrated avionics unit. In implementations, the avionics data bus 116 may comprise a high speed data bus (HSDB), such as a data bus complying with the ARINC 429 data bus standard promulgated by the Airlines Electronic Engineering Committee (AEEC), a MIL-STD-1553 compliant data bus, and so forth. A radar altimeter may be associated with one or more of the IAUs, such as via the data bus 116 or a direct connection, to provide precise elevation information (e.g., height above ground) for autoland functionality. For example, in some configurations, the system 100 includes a radar altimeter to assist the autoland module 214 in various functions of the landing sequence, such as timing and maintaining the level-off and/or flare. The MFD 104 displays information describing operation of the aircraft, such as navigation routes, moving maps, engine gauges, weather radar, ground proximity warning system (GPWS) warnings, traffic collision avoidance system (TCAS) warnings, airport information, and so forth, that is received from a variety of aircraft systems via the avionics data bus 116. In implementations, the integrated avionics system 100 employs redundant sources of primary flight information to assure the availability of the information to the pilot, and to allow for cross-checking of the sources of the information. For example, the integrated avionics system 100 illustrated in FIGS. 1A through 2 employs two PFDs 102 that receive primary flight information from redundant AHRSs 110 and ADCs 112 via redundant IAUs. The integrated avionics system 100 is configured so that the first PFD 102(1) receives a first set of primary flight information aggregated by a first IAU from a first AHRS 110(1) and ADC 112(1). Similarly, the second PFD 102(2) receives a second set of primary flight information aggregated by a second IAU from a second AHRS 110(2) and ADC 112(2). Additionally, although a single avionics data bus 116 is illustrated in FIG. 1B, it is contemplated that redundant data buses may be employed for communication between the various components of the integrated avionics system 100.
In implementations, primary flight information provided by either the first AHRS 110(1) and ADC 112(1) or the second AHRS 110(2) and ADC 112(2) may be displayed on either PFD 102(1) or 102(2), or on the MFD 104, upon determining that the primary flight information received from either AHRS 110 and ADC 112 is in error or unavailable. Reversionary switches 118 may be selected by the pilot to configure the PFDs 102 or MFD 104 to display primary flight information from either the first AHRS 110(1) and ADC 112(1) or the second AHRS 110(2) and ADC 112(2). One or both of the PFDs 102 may also be configured to display information shown on the MFD 104 (e.g., engine gauges and navigational information), such as in the event of a failure of the MFD 104. The integrated avionics system 100 may employ cross-checking of the primary flight information (e.g., attitude information, altitude information, etc.) to determine whether the primary flight information to be furnished to either of the PFDs 102 is incorrect. In implementations, cross-checking may be accomplished through software-based automatic continual comparison of the primary flight information provided by the AHRS 110 and ADC 112. In this manner, a “miss-compare” condition can be explicitly and proactively annunciated to warn the pilot when attitude information displayed by either PFD 102 sufficiently disagrees. The CDUs 106 may furnish a general purpose pilot interface to control the aircraft's avionics. For example, the CDUs 106 allow the pilots to control various systems of the aircraft such as the aircraft's autopilot system, flight director (FD), flight management system (FMS), electronic stability and protection (ESP) system, autothrottle, navigation systems, communication systems, engines, and so on, via the avionics data bus 116. In implementations, the CDUs 106 may also be used for control of the integrated avionics system 100, including operation of the PFD 102 and MFD 104. In implementations, one or both of the CDUs 106 may include a display 120. The display 120 of the CDU 106 may be used for the display of information suitable for use by the pilot of the aircraft to control a variety of aircraft systems. Further, as discussed in greater detail herein below, the display 120 of the CDU may be configured to display a cursor control area to facilitate manipulation of indicia displayed by a display device of the avionics system (e.g., a PFD 102 or MFD 104) via touch input to the touch screen over the displayed cursor control area. The CDUs 106 may be operable to provide independent standby primary flight information to the pilot. The CDUs 106 may be configurable to operate in a reversionary mode to provide standby primary flight information to the pilot(s) of the aircraft. When operating in reversionary mode, the display 120 of the CDU 106 is used to display standby primary flight information. As shown in FIG. 1B, standby primary flight information, which may include information such as attitude, altitude, heading, vertical speed, and so forth, may be generated by a standby attitude and heading reference system (AHRS) 122 and a standby air data computer (ADC) 124. Data generated by the AHRS 122 and ADC 124 may be provided to one or more of the CDUs 106 via a standby avionics data bus 128. In implementations, one or more mode switches 130 may be selected by the pilot to cause any number of the CDUs 106 to operate in the reversionary mode to display standby primary flight information.
While operating in the reversionary mode, the CDUs 106 may be disconnected from the avionics data bus 116 so that the CDUs 106 operate independently of, and communicatively isolated from, the primary components of the integrated avionics system 100 (e.g., the PFDs 102, the MFD 104, the AHRS 110, the ADCs 112, and so forth). For example, the CDUs 106 may not communicate with the avionics data bus 116 while in the reversionary mode, or may be physically disconnected from the avionics data bus 116 (e.g., via the mode switch 130, and so on). FIG. 2 illustrates a system 200 in an example implementation, showing a representative CDU 106 of FIGS. 1A and 1B in greater detail. The CDU 106 is illustrated as including a processor 202, a memory 204, one or more avionics data bus interfaces 206, 208, and the display 120. The processor 202 provides processing functionality for the CDU 106 and may include any number of processors, micro-controllers, or other processing systems, and resident or external memory for storing data and other information accessed or generated by the CDU 106. The processor 202 may execute one or more software programs which implement techniques described herein. The processor 202 is not limited by the materials from which it is formed or the processing mechanisms employed therein, and as such, may be implemented via semiconductor(s) and/or transistors (e.g., electronic integrated circuits (ICs)), and so forth. The memory 204 is an example of computer-readable media that provides storage functionality to store various data associated with the operation of the CDU 106, such as the software programs and code segments mentioned above, or other data to instruct the processor 202 and other elements of the CDU 106 to perform the functionality described herein. Although a single memory 204 is shown, a wide variety of types and combinations of memory may be employed. The memory 204 may be integral with the processor 202, stand-alone memory, or a combination of both. The memory 204 may include, for example, removable and non-removable memory elements such as RAM, ROM, Flash (e.g., SD Card, mini-SD card, micro-SD Card), magnetic, optical, USB memory devices, and so forth. The avionics data bus interface 206 and the standby avionics data bus interface 208 furnish functionality to enable the CDU 106 to communicate with one or more avionics data buses such as the avionics data bus 116 and standby avionics data bus 128, respectively, illustrated in FIG. 1B. In various implementations, the avionics data bus interface 206 and standby avionics data bus interface 208 may include a variety of components, such as processors, memory, encoders, decoders, and so forth, and any associated software employed by these components (e.g., drivers, configuration software, etc.). The display 120 displays information to the pilot of the aircraft. In implementations, the display 120 may comprise an LCD (Liquid Crystal Display), a TFT (Thin Film Transistor) LCD display, an LEP (Light Emitting Polymer) or PLED (Polymer Light Emitting Diode) display, a cathode ray tube (CRT), and so forth, capable of displaying text and/or graphical information, such as a graphical user interface. The display 120 may be backlit via a backlight such that it can be viewed in the dark or other low-light environments. The display 120 may include a touch interface, such as a touch screen 210 that can detect a touch input within a specified area of the display 120 for entry of information and commands. In implementations, the touch screen 210 may employ a variety of technologies for detecting touch inputs.
For example, the touch screen 210 may employ infrared optical imaging technologies, resistive technologies, capacitive technologies, surface acoustic wave technologies, and so forth. In implementations, buttons, softkeys, keypads, knobs, and so forth may be used for entry of data and commands instead of or in addition to the touch screen 210. As shown in FIG. 2, the system 200 (i.e., a CDU 106) includes a power source 212, such as a back-up power source, that is configured to furnish power to at least partially power the system 200 in the event the aircraft loses primary power (e.g., primary power sources are no longer furnishing power to the PFDs 102, the MFD 104, the CDUs 106, and the instrument panel 108 of the aircraft). For example, the power source 212 is configured to at least substantially power the system 200 when the aircraft is not powered by the primary power source during operation of the aircraft. In an implementation, the power source 212 comprises a battery that is configured to provide power to the CDU 106 when a loss of primary power is detected. For example, the power source 212 may be configured to furnish power to the CDU 106 automatically once the primary power ceases, or at least substantially ceases, to power the CDU 106 and/or the aircraft. In another example, the power source 212 may be configured to power the CDU 106 upon the pilot/co-pilot manually causing the power source 212 to power the CDU 106. The back-up power source is configured to furnish power to a CDU 106 for a predetermined amount of time to allow the pilot/co-pilot to utilize the CDU 106 for a limited amount of time while the primary power is not available within the aircraft. As shown, the system 100 includes an autoland module 214, which is storable in the memory 204 and executable by the processor 202. The autoland module 214 is representative of functionality that provides automatic landing functionality for an aircraft. In one or more implementations, the autoland module 214 provides automatic landing functionality pertaining to airport/runway/approach selection; navigation to an approach while avoiding terrain, obstacles, and/or weather having undesirable characteristics; automatic aircraft speed control; flare processes (e.g., vertical and lateral); braking and/or ground steering; and/or engine shutdown. In one or more implementations, the autoland module 214 provides functionality to automatically engage (e.g., activate) an emergency autoland process (see FIG. 12). For example, the autoland module 214 continuously monitors whether a pilot has engaged (e.g., interfaced, actuated, interacted, etc.) with the CDUs 106 and/or a control wheel, and can automatically activate the emergency autoland process based upon one or more engagement characteristics. In an implementation, the autoland module 214 activates the autoland process when a pilot has not engaged with the CDUs 106 and/or the control wheel for a defined time period. The one or more engagement characteristics may comprise a pilot actuating one or more pieces of the avionics equipment, continued engagement of the aircraft's autopilot system after a specified event (e.g., emergency descent, predetermined length of time, etc.), continued activation of ESP such that the autopilot system has automatically engaged, and so forth.
In one or more implementations, the autoland module 214 can automatically activate the emergency autoland process based upon the engagement characteristics and/or one or more flight characteristics (e.g., altitude of the aircraft, cabin altitude, cabin pressure, airspeed of the aircraft, flight plan, Winds and Temperature Aloft Forecast, time of night, length of flight, terrain height, a navigational input of the aircraft, etc.). For example, the autoland module 214 can detect when the cabin depressurizes below a predetermined pressure threshold and can automatically activate emergency autoland processes. In some implementations, the flight characteristics can be furnished to the system 100 by a user (e.g., pilot). The flight characteristics can also be furnished to the autoland module 214 by other components internal to the system 100 (e.g., AHRS, ADCs, IDUs, other modules, etc.). In one or more implementations, the flight characteristics are stored within flight profile information 222, which is storable in the memory 204 of the CDU 106. In some implementations, the system 100 can issue an electronic communication notification based upon the engagement characteristics and/or the flight characteristics. For example, the autoland module 214 can cause the processor 202 to issue one or more notifications via the display 120. Notifications may be displayed in text (e.g., “Awake?”), displayed with images, haptic (e.g., vibration alerts), aural (e.g., beeps or spoken text), or communicated via another appropriate means to the user. In some implementations, the system 100 can receive feedback from the pilot(s) in response to the electronic communication notification. For instance, the autoland module 214 can cause the processor 202 to issue an electronic communication notification that can be dismissed by the pilot. In an example implementation, the processor can issue a text notification asking if the pilot is awake. If awake, the pilot can dismiss the alert. The autoland module 214 can then determine whether to engage emergency autoland processes based on the feedback received. For example, if the pilot dismisses the notification, the autoland module 214 can withhold activation of emergency autoland processes. In another example, if the pilot fails to dismiss the notification within a predetermined time period (e.g., approximately 0.5 seconds to 2 minutes), the autoland module 214 can engage emergency autolanding processes as described herein. In some implementations, the system 100 can issue escalating levels of electronic communication notifications. For example, the autoland module 214 can cause the processor 202 to issue a first notification (e.g., a text notification). If the pilot fails to dismiss the notification within a predetermined time period (e.g., approximately 0.5 seconds to approximately 2 minutes), the autoland module 214 can cause the processor 202 to issue one or more additional notifications (e.g., aural, haptic, etc.). If the pilot fails to dismiss the notification after a predetermined time period (e.g., approximately 0.5 seconds to approximately 2 minutes), the autoland module 214 can engage emergency autoland processes.
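By way of illustration only, the escalating-notification flow described above can be sketched as follows. This is a minimal hypothetical sketch, not the disclosure's implementation: the EngagementMonitor name, the callback interface, and the 30-second per-level timeout are assumptions (the text gives only a range of approximately 0.5 seconds to 2 minutes).

```python
import time

class EngagementMonitor:
    """Hypothetical sketch: escalate notifications, engage autoland on no response."""

    def __init__(self, notify, engage_autoland, level_timeout_s=30.0):
        self.notify = notify                    # callback(level, message)
        self.engage_autoland = engage_autoland  # callback()
        self.level_timeout_s = level_timeout_s  # assumed; text gives ~0.5 s to 2 min
        self.dismissed = False

    def dismiss(self):
        # Called when the pilot acknowledges (e.g., taps the touch screen).
        self.dismissed = True

    def run(self, poll_interval_s=0.5):
        # Escalate from text to aural to haptic; engage autoland if never dismissed.
        for level in ("text", "aural", "haptic"):
            self.dismissed = False
            self.notify(level, "Awake?")
            deadline = time.monotonic() + self.level_timeout_s
            while time.monotonic() < deadline:
                if self.dismissed:
                    return False   # pilot responded: withhold autoland
                time.sleep(poll_interval_s)
        self.engage_autoland()
        return True
```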
In one or more implementations, the user (e.g., pilot, crewmember) can activate the emergency autoland processes manually. For example, the pilot can manually activate the emergency autoland processes in an emergency situation (e.g., emergency descent, depressurization, pilot incapacitation, etc.). The system 100 can include a switch (e.g., guarded switch) or button configured for manually engaging the emergency autoland processes. Upon engagement of emergency autoland processes, the autoland module 214 can cause the CDU 106 to engage one or more systems (e.g., autopilot system, flight director, autothrottle, Electronic Stability and Protection (ESP), Emergency Descent Mode (EDM), etc.) for automatically landing the aircraft. For example, the CDU 106 can cause the autopilot system to guide the aircraft to the nearest airport, the highest ranked airport based on predetermined merit weighting, and/or along a calculated route. The autoland module 214 can cause the processor 202 to execute one or more processes to determine a destination and/or a route. In some embodiments, the processor 202 can execute an endurance process to determine the aircraft's endurance based on usable fuel onboard the aircraft (endurance = current fuel / current total fuel flow). In example implementations where the aircraft has manually selected tanks for fuel usage and the avionics does not know which tank is selected, the processor 202 may utilize the tank with the least fuel for the endurance calculation. In another implementation, if the aircraft has manually selected tanks and the avionics can identify which tank is selected, then the CDU 106 can utilize the selected tank for the endurance calculation. The autoland module 214 can use one or more destination selection processes to identify potential destinations for the aircraft. Potential destinations can include an airport location, terrain features (e.g., fields, landing fields, other open areas), bodies of water (e.g., lakes, seaports, etc.), and so forth. In some embodiments, airports within a range of travel of the aircraft are identified based upon a determined endurance of the aircraft. For example, the module 214 can identify airports within the range of travel of the aircraft. The autoland module 214 can cause the processor 202 to identify any airports within a preselected distance from the aircraft. In some implementations, the potential airports can be those within approximately 200 to 500 miles (depending on aircraft type). If there are no potential destinations within the range of travel of the aircraft, the processor 202 can identify potential destinations outside of the range of travel, and the autoland module 214 may select the best available potential destination outside of the range of travel. For example, the module 214 can select the closest potential destination (e.g., the closest airport), the last loaded origin, the last loaded destination, previously available destinations, and so forth. In some implementations, the autoland module 214 can cause the processor 202 to eliminate any airports that are not appropriate candidates for landing. For example, the processor 202 can eliminate airports that have one or more configurable adverse landing characteristics. Adverse landing characteristics can include, but are not necessarily limited to: airports that do not have at least one hard surface runway, airports that are heliports only, airports that do not have at least one acceptable approach (e.g., GPS approach to a runway with vertical guidance), and so forth.
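A minimal sketch of the endurance calculation and the candidate-airport filtering described above follows; the Airport fields, units, and the closest-airport fallback ordering are illustrative assumptions, not from the disclosure.

```python
from dataclasses import dataclass

@dataclass
class Airport:
    ident: str
    distance_nm: float
    has_hard_runway: bool
    heliport_only: bool
    has_vertical_guidance_approach: bool

def endurance_hours(current_fuel_gal, total_fuel_flow_gph,
                    selected_tank_known, tank_levels_gal=None):
    """endurance = current fuel / current total fuel flow (per the text).

    If tanks are manually selected and the selected tank is unknown, use the
    tank with the least fuel as the conservative basis for the calculation.
    """
    if tank_levels_gal and not selected_tank_known:
        current_fuel_gal = min(tank_levels_gal)
    return current_fuel_gal / total_fuel_flow_gph

def candidate_airports(airports, range_nm):
    """Keep airports in range; drop those with adverse landing characteristics."""
    ok = [a for a in airports
          if a.distance_nm <= range_nm
          and a.has_hard_runway
          and not a.heliport_only
          and a.has_vertical_guidance_approach]
    # If nothing is in range, fall back to the best available option, e.g. the
    # closest airport (the text also allows last loaded origin/destination).
    if not ok:
        ok = sorted(airports, key=lambda a: a.distance_nm)[:1]
    return ok
```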
In some implementations, the system 100 can also incorporate weather data (e.g., METAR, Terminal Doppler Weather Radar (TDWR), terminal aerodrome forecast (TAF), etc.) received from each airport (or from a nearby airport should weather data not be available) in selecting potential airports. For example, the autoland module 214 can receive METAR data from one or more of the components internal to the system 100 (e.g., AHRS, ADCs, IDUs, other modules, etc.). The autoland module 214 can cause the processor 202 to eliminate airports with unfavorable weather conditions. For example, the processor 202 can treat unfavorable weather conditions as an adverse landing characteristic and eliminate those airports from the potential airports. The autoland module 214 can then execute one or more merit processes to determine a merit for each potential destination. For example, the autoland module 214 can cause the processor 202 to calculate one or more merits for each airport runway based on a variety of runway attributes. Runway attributes can include, but are not necessarily limited to: final approach course alignment with the runway, runway characteristics (e.g., runway length, runway width, approach vertical angle (e.g., flight path angle), gradient, etc.), weather conditions (e.g., weather rating (e.g., instrument flight rules (IFR), visual flight rules (VFR), etc.), gust, precipitation level, precipitation type, etc.), attributes specific to the airport (e.g., airport with a tower, airports that anchor class B airspace, exclusively military airports, etc.), travel time to airport (e.g., estimated time enroute (ETE)), and so forth. The autoland module 214 can cause the processor 202 to calculate a merit value for each attribute. For example, the processor 202 can assign each attribute a merit value in the range of −1.0 to 1.0, with 1.0 representing an ideal runway. Negative merit values can be considered to be out of limits. In some implementations, the processor 202 can determine a final approach course alignment runway merit for a runway corresponding to each potential destination airport. For example, the processor 202 can calculate the degrees of misalignment of the aircraft with the runway. In some implementations, the processor 202 can eliminate runways that exceed a preselected maximum misalignment threshold (e.g., 25 degrees to 35 degrees of misalignment). The processor 202 can also determine an airport attribute runway merit for each potential destination airport. For example, the processor 202 can assign a high merit value to airports with towers, as the presence of a tower can indicate that the airport has emergency facilities. The processor 202 can assign low merit values to airports that anchor class B airspace and/or exclusively military airports. The processor 202 can also determine a travel time runway merit for a runway corresponding to each potential destination airport. In example implementations, the processor 202 can calculate time to runway using groundspeed along a selected path from the wind triangle, based on wind speed and/or wind direction. The processor 202 can also calculate a runway merit based on one or more runway characteristics. For example, the processor 202 can assign higher merit values corresponding to wider and/or longer runways. The processor 202 can also calculate a runway merit based on the weather conditions at each potential destination airport. For example, the processor 202 can assign low merit values to airports with low visibility, high wind speeds, and so forth.
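The per-attribute merits and the wind-triangle groundspeed can be sketched as below. The −1.0 to 1.0 merit range, the 25 to 35 degree misalignment limit, and the wind-triangle basis come from the text; the linear scoring curves and default thresholds are assumptions.

```python
import math

def alignment_merit(misalignment_deg, max_misalignment_deg=30.0):
    # 1.0 when perfectly aligned; negative (out of limits) past the maximum.
    return 1.0 - 2.0 * abs(misalignment_deg) / max_misalignment_deg

def groundspeed_kt(tas_kt, course_deg, wind_from_deg, wind_kt):
    # Groundspeed along the selected path from the wind triangle.
    a = math.radians(wind_from_deg - course_deg)
    crosswind = wind_kt * math.sin(a)   # component across the course
    headwind = wind_kt * math.cos(a)    # component along the course
    return math.sqrt(max(tas_kt ** 2 - crosswind ** 2, 0.0)) - headwind

def travel_time_merit(distance_nm, gs_kt, worst_ete_hr=2.0):
    # Shorter estimated time enroute (ETE) scores higher; assumed linear scale.
    ete_hr = distance_nm / max(gs_kt, 1.0)
    return 1.0 - 2.0 * ete_hr / worst_ete_hr
```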
The autoland module 214 can then determine the total merit for each airport. In implementations, the autoland module 214 can cause the processor 202 to apply a predetermined weighting factor (K) to each runway merit (M) and calculate a weighted runway merit (K*M). The sum of all weighting factors (ΣK) represents the maximum possible merit value. The sum of the weighted values (Σ(K*M)) for a runway represents the total merit for the runway. The processor 202 can also assign penalties to attributes that are out of limits (e.g., the merit is negative). For example, the processor can subtract a penalty equal to the maximum possible merit value (ΣK) from the overall merit of the runway (Σ(K*M)−ΣK). This ensures that the runway is only selected if there are no available runways where all attributes are within limits. The processor 202 can then determine which runway has the highest total merit (e.g., highest Σ(K*M)).
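The weighting scheme above transcribes almost directly into code: the total merit is the sum of the weighted values K*M, and any out-of-limits attribute subtracts the maximum possible merit ΣK as a penalty. Only the function name is assumed.

```python
def total_runway_merit(merits, weights):
    """merits: per-attribute merit M in [-1.0, 1.0]; weights: factor K per attribute."""
    max_possible = sum(weights)                          # sum of K
    total = sum(k * m for k, m in zip(weights, merits))  # sum of K*M
    if any(m < 0.0 for m in merits):
        # Out-of-limits penalty: sum(K*M) - sum(K), so this runway is chosen
        # only if no runway has all attributes within limits.
        total -= max_possible
    return total

# Example: three attributes, one out of limits.
# total_runway_merit([0.8, 0.5, -0.2], [3.0, 2.0, 1.0]) -> 3.2 - 6.0 = -2.8
```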
In some implementations, the autoland module 214 can incorporate route weather data in determining the total merit for each destination. The autoland module 214 can receive weather data (e.g., weather radar, XM, datalink weather, icing data) and/or forecast data (e.g., Winds and Temperatures Aloft Forecast data, turbulence data, windshear data, NEXRAD data, etc.) from one or more of the components internal to the system 100 (e.g., AHRS, ADCs, IDUs, other modules, etc.). Datalink weather may include satellite radio sources, FIS-B (ADS-B), Garmin Connext, and/or other datalinks. The module 214 can cause the processor 202 to analyze the weather data and/or forecast data for one or more weather intensity characteristics. Weather intensity characteristics can include, but are not necessarily limited to: precipitation level, precipitation type (e.g., rain, snow, sleet, etc.), atmospheric conditions (e.g., wind speed, wind direction, temperature, etc.), storm attributes (e.g., storm top elevation, reflectivity, vertically integrated water, probability of hail, probability of severe hail, maximum hail stone diameter size, speed and/or direction of storm movement, tornadic activity, etc.), weather conditions (e.g., weather severity, visibility, etc.), and so forth. The autoland module 214 can cause the processor 202 to compare the weather intensity characteristics to a predefined condition (e.g., a predefined severity and/or intensity threshold). For example, the autoland module 214 can cause the processor 202 to compare weather severity to predefined weather severity levels (e.g., low, medium, high, etc.) and identify weather severity areas. If the weather severity of a weather area exceeds one or more of the predefined severity levels, the autoland module 214 can cause the processor 202 to adjust the runway merit accordingly. For example, the processor 202 can create a buffer area around weather areas of predefined severity levels and downgrade runways that require passing through those areas. The processor 202 can downgrade (e.g., assess a penalty against) runways that require a route passing through a preselected radius (e.g., approximately five miles to approximately 15 miles) of a high severity weather area (e.g., areas depicted on a NEXRAD map as red areas). The processor 202 can also eliminate (e.g., assign a negative M to) runways that require a route passing through a preselected radius (e.g., approximately two miles to approximately four miles) of a high severity weather area (e.g., NEXRAD red areas). In some implementations, the processor 202 can increase the minimum distance (e.g., increase the preselected radius) from a high severity weather area (e.g., NEXRAD red areas) based on the size of the area. The processor 202 can also be configured to identify gradient changes in weather intensity characteristics. For example, the processor 202 can identify areas that change from a low severity area to a medium severity area within a specified distance (e.g., approximately one mile) and treat those areas as high severity areas. If a route cannot be determined through the weather and/or no routes can be determined due to weather (e.g., all routes contain weather that prohibits routing), the processor 202 can expand the tolerance for the predefined condition (e.g., expand the tolerance for weather severity) until a route can be determined. In some embodiments, the autoland module 214 can select a destination based on the total merit. For example, the autoland module 214 can cause the processor 202 to select the airport with the highest runway total merit as the destination airport. The processor 202 can determine an approach fix based on the runway with the highest total merit. The approach fix can include, but is not necessarily limited to: a final approach fix (FAF), an initial approach fix from a published approach (IAF), a point on a published approach, an arbitrary fix point that the system 100 selects to enable the aircraft to land on the selected runway (e.g., a visual approach fix, etc.), and so forth. If the processor 202 is unable to identify an optimal runway (e.g., a runway with positive total merit), the processor 202 can select the runway with the highest negative merit (i.e., the least negative merit). In another implementation, the user can manually select a destination airport and/or runway from the potential airports via the touch screen 210. In some embodiments, when no destinations are within range of the aircraft (e.g., based on determined aircraft endurance), the module 214 will assign the highest merit to the potential destination with the shortest ETE, ignoring all other merits.
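A short sketch of the selection step just described, assuming candidates are (runway, total merit, ETE) tuples; the tuple layout is illustrative.

```python
def select_destination(candidates, any_within_range=True):
    """candidates: list of (runway_id, total_merit, ete_hr) tuples."""
    if not candidates:
        return None
    if not any_within_range:
        # Ignore all other merits and take the shortest time enroute.
        return min(candidates, key=lambda c: c[2])
    # Highest total merit wins; if all merits are negative this still returns
    # the least negative one, matching the fallback described in the text.
    return max(candidates, key=lambda c: c[1])
```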
The autoland module 214 can create a route from the current position of the aircraft to the destination. For example, the autoland module 214 can cause the processor 202 to create a route from the current position of the aircraft to the approach fix. In some embodiments, the processor 202 can determine a lateral position for the approach fix that reflects the position of an existing published approach fix. The processor 202 can execute one or more aircraft route calculation processes to determine a route between the current aircraft position and the approach fix. In some implementations, the processor 202 can determine a direct route between the current position and the destination airport. For example, the processor 202 can create a direct route at the present altitude between the current position and the FAF. The processor 202 can then create a hold at the approach fix. For example, the processor 202 can create a standard (e.g., right turn) or non-standard (e.g., left turn) holding pattern at the FAF (e.g., based on which holding pattern is free of terrain and/or obstacle intrusion), at the FAF altitude, with minimum leg length, where the inbound course of the hold equals the outbound course from the FAF. In embodiments, the hold pattern can be based on one or more of the speed of the aircraft, the altitude of the aircraft, and/or the course of the aircraft. For example, a hold pattern can be traversed if one or more of the following conditions are met: 1) a speed of the aircraft is within a defined tolerance relative to the approach speed, 2) the course of the aircraft is within a defined tolerance relative to the FAF (e.g., within a defined tolerance of the FAF to an FAF+1), and 3) the altitude of the aircraft is within a defined tolerance relative to the altitude of the FAF. The processor 202 can also create a waypoint associated with the runway (e.g., at the start of the runway) to allow for navigation of the runway (e.g., alignment). In one or more implementations, the aircraft route calculation process can be performed to determine and/or analyze a route in view of predefined characteristics (e.g., distance, terrain characteristics, obstacle characteristics, weather characteristics, etc.). In some embodiments, the route can be created based on the approach fix and an FAF. For example, the route can comprise a FAF−1, where the FAF−1 is a distance back from the FAF along the missed approach point (MAP)-to-FAF course such that the path is under the glide path. In one or more implementations, the aircraft route calculation process can include creating the route to account for one or more terrain characteristics and/or obstacle characteristics. For example, the processor 202 can operate on the aircraft route calculation process to analyze cartographic data for terrain characteristics and/or obstacle characteristics. Terrain characteristics can include, but are not necessarily limited to: elevation, altitude, horizontal dimension of the land surface, surface characteristics (e.g., bodies of water, permanent ice and/or snow, etc.), and so forth. Obstacle characteristics can include, but are not necessarily limited to: buildings, power lines, other aircraft, and so forth. The autoland module 214 can cause the processor 202 to identify one or more terrain characteristics and/or obstacle characteristics, and calculate the course to avoid the terrain characteristics and/or obstacle characteristics. For example, the processor 202 can compare the terrain characteristics and/or obstacle characteristics with a predefined condition (e.g., a predefined altitude or elevation threshold). The processor 202 can then create a waypoint associated with the terrain characteristic and/or obstacle characteristic. In embodiments, the processor 202 can identify the elevation and/or altitude of a land region, and create a waypoint at a preselected altitude (e.g., 1000 ft.) above the highest terrain and/or obstacle. In embodiments, the processor 202 can operate on the aircraft route calculation process to analyze cartographic data dynamically. For example, altitude constraints always descend toward the approach fix altitude: the approach fix altitude is propagated back until it is less than the terrain or obstacle elevation plus a buffer. When it is less, the processor 202 propagates that higher elevation back until it reaches one higher still, and so forth, until the approach fix is reached.
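The back-propagation of descending altitude constraints can be sketched as below. The 1,000 ft buffer matches the preselected altitude mentioned above; the list-based route representation is an assumption.

```python
def altitude_constraints(terrain_elevations_ft, approach_fix_alt_ft,
                         buffer_ft=1000.0):
    """terrain_elevations_ft is ordered from aircraft position to approach fix.

    Returns one altitude constraint per segment, never ascending along the
    direction of flight (i.e., non-increasing toward the approach fix).
    """
    constraints = []
    floor = approach_fix_alt_ft
    for elev in reversed(terrain_elevations_ft):   # walk fix -> aircraft
        floor = max(floor, elev + buffer_ft)
        constraints.append(floor)
    return list(reversed(constraints))             # aircraft -> fix order

# Example: terrain rising mid-route forces a higher constraint early on.
# altitude_constraints([1200, 4800, 2500, 900], approach_fix_alt_ft=3000)
# -> [5800, 5800, 3500, 3000]
```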
In one or more implementations, the aircraft route calculation process can include re-creating the route to account for one or more weather intensity characteristics. For example, the processor 202 can operate on the aircraft route calculation process to analyze weather data (e.g., weather radar, XM, datalink weather, icing data) and/or forecast data (e.g., Winds and Temperatures Aloft Forecast data, turbulence data, windshear data, NEXRAD data, etc.) for weather intensity characteristics. Weather intensity characteristics can include, but are not necessarily limited to: precipitation level, precipitation type (e.g., rain, snow, sleet, etc.), atmospheric conditions (e.g., wind speed, wind direction, temperature, etc.), storm attributes (e.g., storm top elevation, reflectivity, vertically integrated water, probability of hail, probability of severe hail, maximum hail stone diameter size, speed and/or direction of storm movement, tornadic activity, etc.), weather conditions (e.g., weather severity, visibility, etc.), and so forth. The autoland module 214 can cause the processor 202 to compare the weather intensity characteristics to a predefined condition (e.g., a predefined severity and/or intensity threshold) and recalculate the route to avoid weather intensity characteristics that exceed the predefined condition. For example, the processor 202 can create a waypoint associated with the weather intensity characteristics that exceed the predefined condition. In exemplary implementations, the processor 202 can identify the severity of weather areas (e.g., low, medium, high, etc.) by comparing the storm severity to predefined weather severity thresholds, as described above. In some implementations, the processor 202 can utilize forecast data (e.g., Winds and Temperatures Aloft Forecast data at a predefined altitude (e.g., 18,000 feet), NEXRAD data, etc.) to predict the severity of weather areas based on one or more weather intensity characteristics (e.g., atmospheric conditions, storm attributes, etc.). The processor 202 can then create one or more waypoints to avoid the moderate and/or severe weather areas and/or predicted moderate and/or severe weather areas. As the aircraft passes a waypoint, the processor 202 can operate on the aircraft route calculation process to dynamically analyze weather data and/or forecasting data of the re-created course and create waypoints until no weather intensity characteristics exceeding the predefined condition remain on the route to the FAF. If a route cannot be determined through the weather and/or no routes can be determined due to weather (e.g., all routes contain weather that prohibits routing), the processor 202 can expand the tolerance for the predefined condition (e.g., expand the tolerance for weather severity) until the route can be re-created. In implementations, the autoland module 214 can request and/or receive the weather data and/or forecast data from one or more of the components internal to the system 100 (e.g., AHRS, ADCs, IDUs, other modules, etc.). In some implementations, the autoland module 214 can cause the processor to compile weather data and/or forecast data received from multiple data sources into one weather tracking grid. For example, the processor 202 can overlay weather tracking grid cells with forecast data by taking the original weather tracking grid cell and copying it to each cell in the direction given by the forecast data, for a number of cells determined by the distance each cell encompasses and the velocity of the forecast data for that area. As the system 100 dynamically monitors the weather data, the weather tracking grid is correspondingly updated.
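One way to picture the weather tracking grid overlay is the following sketch, which copies each cell's severity downwind along the forecast velocity for that area, with the shift in cells derived from cell size and forecast speed. The grid layout, units, and time horizon are all assumptions.

```python
def advect_weather_grid(grid, velocity, cell_size_nm, horizon_hr):
    """grid[r][c]: severity value; velocity[r][c]: (dr_kt, dc_kt) forecast motion."""
    rows, cols = len(grid), len(grid[0])
    out = [[0 for _ in range(cols)] for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            dr_kt, dc_kt = velocity[r][c]
            # Number of cells the weather moves over the forecast horizon.
            nr = r + round(dr_kt * horizon_hr / cell_size_nm)
            nc = c + round(dc_kt * horizon_hr / cell_size_nm)
            if 0 <= nr < rows and 0 <= nc < cols:
                out[nr][nc] = max(out[nr][nc], grid[r][c])
    return out
```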
Once a route or re-created route has been calculated, the autoland module 214 can cause the aircraft to traverse the route. For example, the autoland module 214 can cause the processor 202 to replace the flight plan's previous route with the newly calculated route to the approach fix and/or the re-created route to the approach fix. The published MAP to the runway endpoint can also be loaded to the flight plan. In embodiments, adjustments can be made to align the aircraft with the runway, provide adequate clearance for the aircraft, and/or determine if the runway is viable for routing. For example, the system 100 can determine a final approach segment for the aircraft, as shown in FIGS. 3A through 3F. The final approach segment comprises the last leg of the aircraft's approach to landing (e.g., when the aircraft is aligned with the runway for descent). The autoland module 214 can cause the processor 202 to execute one or more final approach segment determination processes to determine a final approach segment for the aircraft automatically. In implementations, the module 214 can determine the final approach segment based on runway alignment data. For example, the final approach segment can be determined based on one or more runway alignment characteristics including, but not necessarily limited to: approach fix (e.g., FAF), glide path intercept point (GPIP1), glide path angle (θ1), threshold crossing height (TCH1), MAP, and so forth. The placement of the MAP can be over the runway threshold, or may be artificially adjusted to the runway threshold. In one or more implementations, the module 214 can cause the processor to determine a path from the FAF to the GPIP1. The runway alignment characteristics can be furnished to the autoland module 214 by other components internal to the system 100 (e.g., FMS, AHRS, ADCs, IDUs, other modules, etc.). For example, the autoland module 214 can obtain the runway alignment characteristics from the published flight plan. In other embodiments, the flight characteristics can be furnished to the system 100 by a user (e.g., pilot). In some implementations, the final approach segment determination processes can include adjusting the final approach segment to account for one or more terrain characteristics and/or obstacle characteristics. For example, a clearance detection plane can be determined by offsetting the FAF altitude by a configurable amount (e.g., FAF clearance) and determining a second glide path angle (θ2) associated with the GPIP1 (e.g., as described with reference to FIG. 3B). The module 214 can then cause the processor 202 to evaluate terrain and/or obstacle data for intrusion against the detection plane (e.g., as described with reference to FIG. 3C). For example, the autoland module 214 can cause the processor 202 to identify one or more terrain characteristics and/or obstacle characteristics with an elevation and/or altitude that exceeds that of the detection plane. If an intruding terrain characteristic and/or obstacle characteristic is detected, the processor 202 determines a GPIP lateral offset associated with the terrain characteristic and/or obstacle characteristic. For example, the processor 202 can create a path from an uppermost point of the intruding terrain/obstacle to the runway at the same angle (θ2) as the detection plane (e.g., as described with reference to FIG. 3D). The lateral offset is created by positioning the detection plane angle (θ2) on the runway back from the intrusion. If there are multiple intruding terrain features and/or obstacles, the point that intrudes by the largest amount relative to the detection plane (e.g., the point that blocks the largest portion of the detection plane) is utilized in determining the lateral offset. However, it is to be understood that other factors can be utilized to select between multiple intruding terrain features and/or obstacles.
For example, the point with the highest elevation and/or altitude may be utilized in determining the lateral offset. In one or more embodiments, the system 100 can determine an adjusted final approach segment that accounts for the detected terrain and/or obstacle characteristics. In some embodiments, the module 214 causes the processor 202 to determine a vertical path adjustment for the approach fix. For example, the processor 202 can determine an offset glide path intercept point (GPIP2) based on the GPIP lateral offset by determining the point where the path from the intrusion intersects the runway at the detection plane angle (θ2) (e.g., as described with reference to FIG. 3E). The processor 202 then adjusts the final approach segment by adjusting the FAF altitude and/or the MAP altitude based on the GPIP2 to create a path at the original glide path angle (θ1). As illustrated in FIG. 3F, the adjusted final approach segment will have the same approach angle (θ1) as the published approach (e.g., the adjusted final approach segment will be parallel to the original published approach segment), but will utilize a shorter landing distance (e.g., landing distance 2) than the landing distance for the published approach (e.g., landing distance 1). In one or more embodiments, the system 100 can determine that the runway is not viable for landing based on the final approach segment and/or the adjusted final approach segment. For example, the module 214 can cause the processor 202 to determine that the runway is nonviable when the shortened landing distance (landing distance 2) is beneath a predetermined distance threshold. If the runway is determined to be nonviable, the runway should not be utilized for landing. In some embodiments, visual inspection of the path can be utilized to determine a final approach segment. In other embodiments, the module 214 can cause the processor 202 to select an alternative runway utilizing the techniques described herein.
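The geometry of FIGS. 3A through 3F can be sketched in two dimensions as follows: build the clearance detection plane, find the worst intrusion, slide the glide path intercept point back along the runway, and recompute the FAF altitude on the original glide path angle. This is a simplified along-track model under assumed conventions (runway elevation zero, distances measured from GPIP1 toward the FAF), not the disclosure's implementation.

```python
import math

def adjusted_final_approach(faf_dist_ft, faf_alt_ft, faf_clearance_ft,
                            theta1_deg, obstacles):
    """obstacles: (distance_from_GPIP1_ft, height_ft) pairs under the approach."""
    # Detection plane: passes through GPIP1 at angle theta2, where theta2 is
    # set by the FAF altitude offset by the configurable FAF clearance.
    theta2 = math.atan2(faf_alt_ft - faf_clearance_ft, faf_dist_ft)
    # Worst intrusion = largest height above the detection plane.
    excess, d_obs, h_obs = max(
        ((h - d * math.tan(theta2), d, h) for d, h in obstacles),
        default=(0.0, 0.0, 0.0))
    if excess <= 0:
        return faf_alt_ft, 0.0          # no intrusion: keep published segment
    # GPIP2: where a theta2 path from the obstacle top meets the runway; the
    # lateral offset moves GPIP2 down the runway, away from the FAF.
    gpip_offset = h_obs / math.tan(theta2) - d_obs
    # Re-anchor the original glide path angle theta1 at GPIP2.
    new_faf_alt = (faf_dist_ft + gpip_offset) * math.tan(math.radians(theta1_deg))
    return new_faf_alt, gpip_offset

# Example call: FAF 36,456 ft out at 1,900 ft, 200 ft clearance, 3.0 degree
# glide path, one 300 ft obstacle 2,000 ft from GPIP1:
# adjusted_final_approach(36456, 1900, 200, 3.0, [(2000, 300)])
```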
In one or more implementations, the autoland module 214 can cause the flight director, autopilot system, and/or navigation system to actuate one or more modes of operation. For example, the autoland module 214 can cause the autopilot system to actuate the vertical navigation mode (VNAV) and/or the lateral navigation mode (LNAV) to traverse the route from the current position of the aircraft to the waypoint(s) and/or the approach fix. The autoland module 214 can also cause the autopilot system to actuate a flight level change (FLC) mode and/or an altitude hold mode (ALT) to achieve and/or maintain desired airspeed and/or altitude while traversing the route. For example, the autoland module 214 can cause the autopilot system to set the altitude preselector to the altitude constraints determined by the aircraft route calculation process. For example, if the altitude preselector is above the current altitude, the autopilot system can actuate FLC mode while the aircraft climbs above the FAF altitude. In another implementation, if the altitude preselector is below the current altitude, the autopilot system can actuate an ALT mode, holding the aircraft at its present altitude. The autoland module 214 can also cause the flight director and/or navigation systems to traverse the route to the FAF. In one or more implementations, the autoland module 214 can automatically adjust the barometric pressure setting to maintain an accurate barometric pressure while the emergency autoland process is engaged. The autoland module 214 can cause the processor 202 to execute one or more barometric pressure processes to adjust the barometric pressure setting based on altitude automatically. In exemplary implementations, the processor 202 can determine the altitude of the aircraft utilizing the pressure altitude. If the altitude is above the transition altitude (e.g., 18,000 feet), the processor 202 can set the barometric correction to the standard pressure setting (e.g., 29.92 inHg). If the altitude is below the transition altitude, the processor 202 can set the barometric pressure to the navigation system altitude (e.g., GPS altitude). The processor 202 can also adjust the barometric pressure setting in preparation for approach at the approach fix, regardless of altitude. For example, the processor 202 can adjust the barometric pressure setting when the aircraft is within a predefined distance from the approach fix (e.g., 10 nautical miles) based on factors such as temperature, runway elevation, GPS altitude, and so forth.
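The automatic barometric-setting rule reduces to a small decision function. The 18,000 ft transition altitude, the 29.92 standard setting, and the 10 nm approach adjustment come from the text; the function signature is an assumption.

```python
STANDARD_PRESSURE_INHG = 29.92
TRANSITION_ALT_FT = 18_000

def baro_setting(pressure_alt_ft, gps_derived_baro_inhg, dist_to_fix_nm,
                 approach_baro_inhg=None):
    """Return the barometric correction the autoland logic would apply."""
    if dist_to_fix_nm <= 10.0 and approach_baro_inhg is not None:
        # Near the approach fix, use the approach-specific setting derived
        # from temperature, runway elevation, GPS altitude, and so forth.
        return approach_baro_inhg
    if pressure_alt_ft > TRANSITION_ALT_FT:
        return STANDARD_PRESSURE_INHG      # standard setting above transition
    return gps_derived_baro_inhg           # below transition: from GPS altitude
```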
In one or more implementations, the autoland module 214 can cause the CDU 106 to actuate one or more modes of operation to maintain the flight envelope of the aircraft. For example, the autoland module 214 can cause the CDU 106 to actuate an automatic level mode. The level mode can coordinate lateral (e.g., roll), vertical (e.g., pitch), and/or thrust instructions to make an automatic climb or descent to a predefined altitude at a predefined airspeed. If the resulting power setting is too high or too low to keep the aircraft within the normal flight envelope, the CDU 106 can cause the throttle ESP to automatically adjust power as required to maintain the normal flight envelope. Once the approach fix is reached, the autoland module 214 can execute one or more processes for landing the aircraft. For example, the autoland module 214 can cause the processor 202 to execute a suitable landing process for guiding the landing of the aircraft. The autoland module 214 can also cause the processor 202 to execute a suitable flare process to position the nose of the aircraft for touchdown. The autoland module 214 can cause the processor 202 to execute a suitable elevator process to actuate one or more flight control surfaces for landing. If one or more of the systems (e.g., autopilot system, flight director, autothrottle, ESP, FD, EDM, etc.) become disengaged, the autoland module 214 can cause the CDU 106 to attempt to re-engage the system. For example, if the autopilot system, autothrottle, and/or flight director become disengaged via abnormal disengagement, the CDU 106 can attempt to re-engage the system(s) approximately every one (1) second while emergency autoland processes are engaged. Upon re-engagement, the autoland module 214 can re-initiate the autopilot system, autothrottle, and/or flight director to traverse to the selected approach fix. As shown in FIG. 2, the autoland module 214 can engage one or more components and/or systems of the aircraft that are internal and/or external to the system 100 for autolanding the aircraft. For example, the autoland module 214 can cause the processor 202 to actuate one or more systems and/or modes of operation of the engine 216. For example, the processor 202 can actuate the autothrottle system to control power of the engine 216. The autothrottle system can maintain predetermined speed and/or thrust during different phases of flight (e.g., cruise, descent, hold, near destination, approach, landing flare, inside the approach fix, etc.). For example, the autothrottle system can control the power of the engine 216 to maintain a predetermined minimum speed inside the approach fix. Upon landing, the processor 202 can also cause the engine 216 to transition from an operational state to a non-operational state. For example, the processor 202 can actuate one or more fuel shutoff valves, digital controls, and/or ignition switches to stop the engine 216. In some implementations, the autoland module 214 can be configured to transition the engine 216 to a non-operational state only after the aircraft has been on the ground for a predetermined period of time. For example, the autoland module 214 can actuate a plurality of switches at different points after landing. The autoland module 214 can actuate a first switch after the aircraft has been on the ground for a predetermined time interval. The module 214 can actuate a second switch when the aircraft is decelerating and/or when the wheel speed and/or airspeed is above a predefined threshold speed. The autoland module 214 can actuate a third switch when the pressure in one or more of the brake lines exceeds a predetermined pressure threshold for a predetermined period of time. The use of a plurality of switches, shutoff valves, and/or digital controls can prevent inadvertent engine shutdowns, and can ensure that fuel is removed from the engine shortly after the aircraft is on the ground. In other implementations, the autoland module 214 may shut down the engines 216 only after the aircraft has come to a stop (e.g., aircraft with braking maintained by engine-driven pump(s)).
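The staged shutdown interlock described above might look like the following sketch. The three switch conditions mirror the text; the specific thresholds (5 seconds on ground, 20 kt wheel speed, 400 psi brake pressure held 2 seconds) are placeholder assumptions, since the text gives only qualitative conditions.

```python
def engine_shutdown_permitted(on_ground_s, decelerating, wheel_speed_kt,
                              brake_pressure_psi, brake_pressure_held_s):
    # Switch 1: aircraft on the ground for a predetermined time interval.
    switch1 = on_ground_s >= 5.0
    # Switch 2: decelerating and/or wheel speed above a predefined threshold.
    switch2 = decelerating or wheel_speed_kt > 20.0
    # Switch 3: brake line pressure above a threshold for a predetermined time.
    switch3 = brake_pressure_psi > 400.0 and brake_pressure_held_s >= 2.0
    # All three must agree before fuel shutoff valves / ignition switches are
    # actuated, which helps prevent inadvertent engine shutdowns.
    return switch1 and switch2 and switch3
```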
One or more of the endurance processes, the airport selection processes, the merit processes, and/or the aircraft route calculation processes can be utilized for navigating and/or landing the aircraft in a non-emergency autoland situation. In some embodiments, the system can operate on one or more of the processes to locate a suitable place to hold at the bottom of an emergency descent that is below the safe altitude for flight without oxygen and is clear of terrain. An approach to an airport can be commenced from that hold location. In some embodiments, the system can operate on one or more of the processes to select a suitable airport and/or runway, and/or develop a glide path to that airport and/or runway within gliding distance in the event of an engine failure. In some embodiments, the system can operate on one or more of the processes (e.g., the selection process) to navigate to a selected location (e.g., to locate fuel stops, lunch break locations, maintenance facilities, etc.) based on pilot-selectable weighting parameters. Pilot-selectable weighting parameters can include, but are not necessarily limited to: fuel price, an on-airport restaurant, availability of a crew car, etc. The route calculation processes can then be utilized to create a route to the selected location that avoids potential threats (e.g., terrain, obstacles, weather, traffic, etc.). The system 100 can also operate on one or more of the processes to create a route to a predetermined location that avoids potential threats (e.g., terrain, obstacles, weather, traffic, etc.). The route calculation processes can also be used to determine an optimized route based on predetermined factors such as time, fuel, aircraft endurance, and so forth. For example, the processor 202 can utilize weather data to generate a route with the most favorable winds or other weather conditions.

In some embodiments, the system 100 can operate on the route calculation processes to ensure clearance of the aircraft and/or create a route for the aircraft. For example, the route calculation processes can be utilized to ensure terrain clearance when instrument approaches are created. The route calculation processes can also be utilized to determine a route through mountainous terrain based on a predetermined altitude cap. In some embodiments, the route calculation processes can be utilized to create curved approaches and/or close-in approaches to avoid preselected areas (e.g., noise-sensitive areas, high-security areas, wildlife areas, etc.). In embodiments, the system 100 can operate on the route calculation processes to re-create a route for the aircraft. For example, the processor 202 can automatically re-create a predetermined route of the aircraft (e.g., computed flight plan, track vector, etc.) to avoid potential threats (e.g., terrain, obstacles, weather, traffic, etc.). The route calculation processes can also be used to create suggested route modifications. For example, the processor 202 can suggest a re-created route to avoid potential threats (e.g., terrain, obstacles, weather, traffic, etc.). The system 100 can notify the user of the suggested re-created route, which can be accepted or dismissed by the user.

In some embodiments, the system 100 can operate on one or more of the processes to remotely activate and/or navigate an aircraft. The module 214 can be activated from a remote location (e.g., a support center) for autopiloting and/or autolanding the aircraft. For example, the module 214 can be remotely activated to return an unmanned aircraft to a base location.
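For the pilot-selectable weighting described above, one plausible realization is a weighted sum over normalized attributes of each candidate stop. The attribute names, the weight values, and the normalization to [0, 1] are assumptions for illustration only.

```python
# Illustrative sketch of scoring candidate stops with pilot-selectable weights.
def score_stop(stop: dict, weights: dict) -> float:
    """Weighted sum over normalized attributes in [0, 1]; higher is better."""
    return sum(weights[k] * stop.get(k, 0.0) for k in weights)

weights = {"fuel_price": 0.5, "restaurant": 0.3, "crew_car": 0.2}  # pilot-chosen
stops = [
    {"name": "stop A", "fuel_price": 0.9, "restaurant": 1.0, "crew_car": 0.0},
    {"name": "stop B", "fuel_price": 0.6, "restaurant": 0.0, "crew_car": 1.0},
]
best = max(stops, key=lambda s: score_stop(s, weights))
print(best["name"])  # -> stop A
```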
Remote activation can also be utilized to control erratic and/or unresponsive aircraft that are unable to engage the module 214 automatically. In some embodiments, the system 100 can operate on one or more of the processes to suggest autopilot modes based on a current flight plan and/or flight characteristics (e.g., altitude of the aircraft, cabin altitude, cabin pressure, airspeed of the aircraft, flight plan, Winds and Temperatures Aloft Forecast, time of night, length of flight, terrain height, a navigational input of the aircraft, etc.). The module 214 can cause the processor 202 to activate the most suitable autopilot mode based on the flight plan and/or flight characteristics. For example, the module 214 can activate FLC mode to climb or rejoin a descent path that is below the current altitude.

In some instances, the autoland module 214 is configured to cause the generation of one or more displays at a display screen, such as the display 120 of the CDU 106. FIGS. 4A through 6B, 8, and 11A-11B illustrate example display screens 302, 402, 502 of the display 120 of the CDU 106, the PFD 102, and/or the MFD 104. As described above, the autoland module 214 is configured to cause the display of information related to routing the aircraft to the FAF, which is described in greater detail herein. As shown in FIGS. 4A and 4B, the display screen 302 may display one or more textual notification banners configured to provide notifications to the user. For example, a first notification banner 304 may be configured to convey whether or not the autoland module 214 is active. A second notification banner 306 may be configured to convey whether or not a user action is required. The display screen 302 may also display one or more softkeys. For example, the display screen 302 may display a softkey 308 for activating a microphone for radio transmission. The display screen 302 may also display text and/or graphic user instructions for operating the microphone (e.g., volume control, push and hold to talk, etc.), as illustrated in FIG. 4B.

As shown in FIGS. 5A and 5B, the display screen 402 may display navigation information, which may be retrieved via the integrated avionics system components and represents information describing operation of the aircraft (e.g., navigation routes, moving maps, engine gauges, weather radar, ground proximity warning system (GPWS) warnings, traffic collision avoidance system (TCAS) warnings, airport information, and so forth). In implementations, the navigation information can be displayed as one or more maps. In one or more implementations, the navigation information can include a first map 404 (e.g., a map graphic) that is configured to convey the route (e.g., flight plan) of the aircraft to the FAF. For example, the first map 404 may display a topographical representation of the route the aircraft may traverse to reach the FAF. The first map 404 may be configured to continually update at predetermined time intervals such that the graphical representation reflects the aircraft's location relative to the FAF. In some implementations, the first map 404 can be configured to convey landmarks (e.g., state lines, roads/highways, cities, etc.) located on the route. The first map 404 can also display the route of the aircraft relative to weather radar data WRD1. The navigation information can also include a second map 406 (e.g., a map graphic) that is configured to convey a map region pertinent to navigation of the aircraft. For example, the second map 406 may display graphical representations of an area that the aircraft is traversing.
The second map 406 may be configured to continually update at predetermined time intervals such that the graphical representations reflect the area being traversed with respect to movement of the aircraft (i.e., a moving map). In some embodiments, the maps 404, 406 can be displayed on different display panes 408, 410, as illustrated in FIG. 5A. In other embodiments, one or more of the maps 404, 406 can be displayed on a display insert panel, as illustrated in FIG. 5B. In one or more implementations, the display screen 402 may display one or more graphic and/or text indicators configured to convey information describing the route and/or operation of the aircraft. Indicators can include, but are not necessarily limited to: an airspeed tape 412, an altimeter 414, a horizontal situation indicator 416, and so forth. The display screen 402 can also display textual notification banners configured to provide notifications to the user. For example, a first notification banner 418 may be configured to convey whether or not the autoland module 214 is active. A second notification banner 420 may be configured to convey whether or not a user action is required.

As shown in FIGS. 6A through 6B, the display screen 502 may display graphics and/or text that represent information describing the operation of the aircraft. For example, the text may include status information SI1 504, which may be retrieved via the integrated avionics system components and represents information describing operation of the aircraft (e.g., navigation routes, moving maps, engine gauges, weather radar, ground proximity warning system (GPWS) warnings, traffic collision avoidance system (TCAS) warnings, airport information, and so forth). In one or more implementations, the status information SI1 504 can include text configured to convey dynamic information about the route of the aircraft, the FAF (e.g., name of airport, location of airport, runway number), and/or the status of the aircraft (e.g., speed, altitude, distance to runway, time to landing, etc.), as illustrated in FIGS. 6A and 6B. FIG. 7 illustrates example status information that can be conveyed at one or more displays in accordance with the present disclosure.

The display screen 502 can also include text and/or graphics representing dynamic instructions 506 for the user. For example, the text and/or graphical instructions 506 can assist the user in communicating with air traffic control (e.g., as described with reference to FIGS. 6A and 6B), exiting the aircraft upon landing (e.g., as described with reference to FIG. 8), fuel management (e.g., as described with reference to FIG. 9), and so forth. The display screen 502 can also display one or more textual notification banners configured to provide notifications to the user. For example, a first notification banner 508 may be configured to convey whether or not the autoland module 214 is active. A second notification banner 510 may be configured to convey whether or not a user action is required. A third notification banner 512 can be configured to convey the next action that the aircraft may take. Example next actions (e.g., instructional information) that can be conveyed by the third textual banner 512 are illustrated in FIG. 10. The notifications may be accompanied by haptic (e.g., vibration alerts) notifications or aural (e.g., beeps or spoken text) notifications, or may be communicated via another appropriate means to the user. In implementations, one or more of the notification banners 508, 510, 512 can be configured to correspond with the dynamic instructions 506.
For example, upon autoland module 214 activation, an instruction to the user may indicate that emergency autoland is active and that air traffic control has been notified of the emergency, while a corresponding second notification banner can be configured to convey that no user action is required (as described with reference to FIGS. 6A and 6B). In one or more implementations, the dynamic instructions 506 can represent fuel management instructions (e.g., as described with reference to FIG. 9). Example fuel management instructions 802 can include, but are not necessarily limited to: a one-time instruction to set the fuel selector to auto mode, periodic instructions to switch fuel tanks, and so forth. The fuel management instructions 802 may be accompanied by the second notification banner 510 configured to convey that fuel management is required. The display screen 502 can also display graphics and/or text configured to convey the status of the fuel management instruction (e.g., pending, satisfactorily completed, etc.) to the user.

As shown in FIGS. 11A and 11B, the display screen 502 may display graphical and/or text alerts 1002 configured to convey failure and/or disengagement of the autoland module 214. FIG. 11A includes an example screen shot of the display screen 502 indicating emergency failure of the autoland module 214. FIG. 11B includes an example screen shot of the display screen 502 indicating normal disengagement of the autoland module 214. The alert 1002 may be accompanied by text and/or graphics configured to convey an instruction to the user (e.g., instructions for re-engaging the autoland module 214), as illustrated in FIG. 11A.

In exemplary implementations, the display screens 302, 402, 502 comprise exemplary display screens of the CDU 106, the PFD 102, and the MFD 104, respectively. However, it is contemplated that any of the display screens 302, 402, 502, and/or the text and/or graphics generated thereon, may be generated at any of the CDU 106, the PFD 102, and/or the MFD 104. The display screens 302, 402, 502 may comprise a single display pane (as described with reference to FIGS. 4A and 4B), a plurality of display panes 408, 410 (e.g., as described with reference to FIG. 5A), and/or one or more display insert panels (e.g., as described with reference to FIG. 5B).

In one or more implementations, the system 100 can be configured to issue one or more aural communications to the user and/or the air traffic controller. In one or more implementations, the autoland module 214 can configure an audio system of the aircraft to a predefined configuration. For example, the autoland module 214 can actuate and/or disable one or more audio system components (e.g., audio sources, radio sources, transponder, speakers, intercom, etc.) to allow automated (e.g., text-to-speech) communication with the user(s) and/or air traffic control. In implementations, the autoland module 214 can cause the audio system to issue one or more automated aural communications to provide status updates to the user. FIG. 12 illustrates example status updates in accordance with one or more implementations of the present disclosure. The autoland module 214 can also configure the audio system for automated and/or user-initiated communication with air traffic control. For example, the autoland module 214 can select a radio for communication over an emergency frequency.
The autoland module 214 can be configured to cause display (e.g., via one or more of the display screens of the CDU 106, the PFD 102, and the MFD 104, as described above) of the appropriate air traffic control frequency to the user, allowing the user to manually contact and communicate with air traffic control. In another implementation, if no action is taken by the user, the autoland module 214 can cause the processor 202 to automatically tune the radio and broadcast on the universal emergency frequency and/or the local traffic frequency for the FAF. The autoland module 214 may also be configured to cause display of instructions to the user for disabling the automatic broadcasting to allow for manual communication. The autoland module 214 can also disengage one or more audio controls (e.g., bezels, softkeys, audio panel reversion switches, etc.) to enable automated communication. Upon landing, the autoland module 214 can control the radio to broadcast on one or more appropriate frequencies (tower, approach, center, emergency, etc.) that the aircraft has landed, that the aircraft is on the runway, that the runway is closed, combinations thereof, and the like.

In one or more implementations, the autoland module 214 can actuate the transponder to alert air traffic control that the aircraft is experiencing an emergency. For example, the autoland module 214 can adjust the transponder code from a standard code (e.g., 1200) to an emergency code (e.g., 7700, a code specific to autoland use, Automatic Dependent Surveillance Broadcast (ADS-B) subfields populated with emergency priority status, etc.). The transponder can remain on the emergency code for a predetermined time interval (e.g., 15 seconds). During the predetermined time interval, the user can manually change the code. If the user does not manually change the code within the predetermined time interval, the autoland module 214 can cause the transponder to adjust back to the previously entered code. If the transponder was previously set to the standard code (e.g., 1200), then the autoland module 214 can cause the transponder to adjust to the lost-communication code (e.g., 7600) following the predetermined time interval, unless the user manually selects a code. In implementations, the transmission of the emergency code can be manually disabled by a user prior to engaging the autoland module 214. The user can then manually select the transponder codes as desired.

In some embodiments, the autoland module 214 can make a satellite connection to allow for communication during an emergency situation. For example, the module 214 can make a satellite connection with a support center that can communicate with the aircraft cabin. In some situations, the autoland module 214 can be configured to automatically activate the satellite connection based upon the engagement of the emergency autoland module 214 and/or detection of an emergency event (e.g., cabin depressurization, loss of altitude, etc.).

Example Processes

FIG. 13 depicts an example process 1200 for autolanding an aircraft in an emergency situation utilizing an integrated avionics system, such as the integrated avionics system 100 described above. As shown in FIG. 13, a plurality of potential destinations for an aircraft are identified (Block 1202). In some embodiments, a plurality of potential destination airports are identified (Block 1204). However, potential destinations can also include an airport location, terrain features (e.g., fields, landing fields, other open areas), bodies of water (e.g., lakes, seaports, etc.), and so forth.
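Returning briefly to the transponder sequencing described above, the timing rules can be sketched as a small state function. The codes and the 15-second window come from the text; the function shape and parameter names are assumptions.

```python
EMERGENCY_CODE = "7700"   # emergency code from the text
LOST_COMM_CODE = "7600"   # lost-communication code from the text
STANDARD_CODE = "1200"    # standard code from the text
OVERRIDE_WINDOW_S = 15    # predetermined interval from the text

def transponder_code(previous_code, seconds_since_engage, pilot_code=None):
    """Return the code the transponder should squawk at a given moment."""
    if pilot_code is not None:
        return pilot_code                 # a manual selection always wins
    if seconds_since_engage < OVERRIDE_WINDOW_S:
        return EMERGENCY_CODE             # initial emergency squawk
    if previous_code == STANDARD_CODE:
        return LOST_COMM_CODE             # 1200 falls back to 7600
    return previous_code                  # otherwise revert to the prior code

print(transponder_code("1200", 20))       # -> 7600
```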
In some embodiments, airports within a range of travel of the aircraft are identified based upon a determined endurance of the aircraft. For example, the autoland module 214 can cause the processor 202 to execute an endurance process (e.g., as described with reference to FIG. 14) to determine the aircraft's endurance based on the usable fuel onboard the aircraft (endurance = current fuel/current total fuel flow). The autoland module 214 can then use one or more airport selection processes (e.g., as described with reference to FIG. 15) to identify potential airports within the range of travel of the aircraft, as described. For example, the autoland module 214 can cause the processor 202 to identify any airports within a preselected distance (e.g., 200 to 500 miles) from the aircraft. If there are no potential destinations within the range of travel of the aircraft, the processor 202 can identify and select potential destinations outside of the range of travel, such as the closest potential destination (e.g., the closest airport), the last loaded origin, the last loaded destination, previously available destinations, and so forth.

In some implementations, the autoland module 214 can cause the processor 202 to eliminate any airports that are not appropriate candidates for landing based on one or more adverse landing characteristics. Adverse landing characteristics can include, but are not necessarily limited to: airports that do not have at least one hard-surface runway, airports that are heliports only, airports that do not have at least one acceptable approach (e.g., a GPS approach to a runway with vertical guidance), and so forth. In some implementations, the system 100 can also incorporate weather data (e.g., METAR, Terminal Doppler Weather Radar (TDWR), terminal aerodrome forecast (TAF), etc.) received from each airport (or from a nearby airport should weather data not be available) in selecting potential airports. For example, the processor 202 can treat unfavorable weather conditions as an adverse landing characteristic and eliminate those airports from the potential airports.

A merit is calculated for each of the plurality of potential destinations (Block 1206). For example, the autoland module 214 can cause the processor 202 to identify a merit value for each airport runway using one or more merit processes (e.g., as described with reference to FIG. 16). In some implementations, the processor can identify one or more runway merit values for a runway corresponding with each airport. For example, the autoland module 214 can cause the processor 202 to calculate one or more merits for each runway based on a variety of runway attributes, as described above. Runway attributes can include, but are not necessarily limited to: final approach course alignment with the runway, runway characteristics (e.g., runway length, runway width, approach vertical angle (e.g., flight path angle), gradient, etc.), weather conditions (e.g., weather rating (e.g., IFR, VFR, etc.), gust, precipitation level, precipitation type, etc.), attributes specific to the airport (e.g., an airport with a tower, airports that anchor class B airspace, exclusively military airports, etc.), travel time to the airport (ETE), and so forth. The autoland module 214 can cause the processor 202 to calculate a merit value for each attribute. For example, the processor 202 can assign each attribute a merit value in the range of −1.0 to 1.0, with 1.0 representing an ideal runway. Negative merit values can be considered to be out of limits.
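A minimal sketch of the identification and elimination steps described above, assuming each airport record carries simple boolean attributes and a precomputed distance; the 300 NM range and the field names are illustrative, not values from the source.

```python
def candidate_airports(airports, range_nm=300):
    """Keep airports in range that have no adverse landing characteristics."""
    def acceptable(a):
        return (a["distance_nm"] <= range_nm
                and a["hard_surface_runway"]
                and not a["heliport_only"]
                and a["gps_approach_with_vertical_guidance"]
                and not a["adverse_weather"])

    in_range = [a for a in airports if acceptable(a)]
    if in_range:
        return in_range
    # Nothing within range: fall back to the closest potential destination
    # (the text also allows the last loaded origin/destination, etc.).
    return [min(airports, key=lambda a: a["distance_nm"])]
```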
In some implementations, the processor 202 can determine a final approach course alignment runway merit for a runway corresponding to each potential destination airport. For example, the processor 202 can calculate the degrees of misalignment of the aircraft with the runway. In some implementations, the processor 202 can eliminate runways that exceed a preselected maximum misalignment threshold (e.g., 25 degrees to 35 degrees of misalignment). The processor 202 can also determine an airport attribute runway merit for each potential destination airport. For example, the processor 202 can assign a high merit value to airports with towers, as the presence of a tower can indicate that the airport has emergency facilities. The processor 202 can assign low merit values to airports that anchor class B airspace and/or exclusively military airports. The processor 202 can also determine a travel time runway merit for a runway corresponding to each potential destination airport. In example implementations, the processor 202 can calculate the time to the runway using the groundspeed along a selected path from the wind triangle based on wind speed and/or wind direction.

The autoland module 214 can then determine the total merit for each runway. In embodiments, the autoland module 214 can determine a total merit for each runway by applying a predetermined weighting factor (K) to each runway merit (M), as described above. The processor 202 can then determine which runway has the highest total merit (e.g., the highest Σ(K*M)).

In some implementations, the autoland module 214 can incorporate route weather data in determining the total merit for each destination. For example, the autoland module 214 can cause the processor 202 to analyze the weather data and/or forecast data for one or more weather intensity characteristics. Weather intensity characteristics can include, but are not necessarily limited to: precipitation level, precipitation type (e.g., rain, snow, sleet, etc.), atmospheric conditions (e.g., wind speed, wind direction, temperature, etc.), storm attributes (e.g., storm top elevation, reflectivity, vertically integrated water, probability of hail, probability of severe hail, maximum hailstone diameter, speed and/or direction of storm movement, tornadic activity, etc.), weather conditions (e.g., weather severity, visibility, etc.), and so forth. The autoland module 214 can cause the processor 202 to compare the weather intensity characteristics to a predefined condition (e.g., a predefined severity and/or intensity threshold). For example, the autoland module 214 can cause the processor 202 to compare storm severity to predefined weather severity levels (e.g., low, medium, high, etc.) and identify weather severity areas. If the storm severity of a weather area exceeds one or more of the predefined severity levels, the autoland module 214 can cause the processor 202 to adjust the airport merit accordingly. For example, the processor 202 can create a buffer area around weather areas of predefined severity levels and downgrade runways that require passing through those areas. The processor 202 can downgrade (e.g., assess a penalty against) runways that require a route passing through a preselected radius (e.g., approximately five miles to approximately 15 miles) of a high-severity weather area (e.g., areas depicted on a NEXRAD map as red areas).
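This buffering, together with the elimination radius discussed immediately below, can be sketched as a merit adjustment keyed to the route's closest approach to a high-severity area. The radii fall within the ranges given in the text, while the penalty magnitude and the helper's interface are assumptions.

```python
PENALTY_RADIUS_NM = 10.0     # within the five-to-15-mile band from the text
ELIMINATE_RADIUS_NM = 3.0    # within the two-to-four-mile band from the text
MERIT_PENALTY = 0.5          # assumed penalty magnitude

def adjust_merit_for_weather(merit, closest_red_area_nm):
    """Downgrade or eliminate a runway based on proximity to NEXRAD red areas."""
    if closest_red_area_nm < ELIMINATE_RADIUS_NM:
        return -1.0                       # negative merit: out of limits
    if closest_red_area_nm < PENALTY_RADIUS_NM:
        return merit - MERIT_PENALTY      # downgrade routes near the area
    return merit

print(adjust_merit_for_weather(0.8, 7.0))   # -> 0.3 (penalized)
print(adjust_merit_for_weather(0.8, 2.0))   # -> -1.0 (eliminated)
```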
The processor 202 can also eliminate (e.g., assign a negative M to) runways that require a route passing through a preselected radius (e.g., approximately two miles to approximately four miles) of a high-severity weather area (e.g., NEXRAD red areas). In some implementations, the processor 202 can increase the minimum distance (e.g., increase the preselected radius) from a high-severity weather area (e.g., NEXRAD red areas) based on the size of the area. The processor 202 can also be configured to identify gradient changes in weather intensity characteristics. For example, the processor 202 can identify areas that change from a low-severity area to a medium-severity area within a specified distance (e.g., approximately one mile) and treat those areas as high-severity areas. If a route cannot be determined through the weather and/or no routes can be determined due to weather (e.g., all routes contain weather that prohibits routing), the processor 202 can expand the tolerance for the predefined condition (e.g., expand the tolerance for weather severity) until a route can be determined.

As shown in FIG. 13, a destination is selected based upon the merit (Block 1206). For example, the autoland module 214 can cause the processor 202 to select the airport with the highest runway total merit as the destination. If the processor 202 is unable to identify an optimal runway (e.g., a runway with a positive total merit), the processor 202 can select the runway with the highest negative merit. In another example, the user can manually select a destination airport and/or runway from the potential airports via the touch screen 210. In some embodiments, when no runways are within range of the aircraft (e.g., based on the determined aircraft endurance), the module 214 will assign the highest merit to the potential destination with the shortest ETE, ignoring all other merits.

Terrain data and/or obstacle data is received (Block 1208). In embodiments, the terrain data can include at least one terrain characteristic and the obstacle data can include at least one obstacle characteristic. The processor 202 can operate on the aircraft route calculation process to analyze cartographic data for terrain characteristics and/or obstacle characteristics. Terrain characteristics can include, but are not necessarily limited to: elevation, the horizontal dimension of the land surface, surface characteristics (e.g., bodies of water, permanent ice and/or snow, etc.), and so forth. Obstacle characteristics can include buildings, power lines, other aircraft, and so forth.

A route is created from the current position of the aircraft to an approach fix associated with the destination (Block 1210). The processor 202 can execute one or more aircraft route calculation processes (e.g., as described with reference to FIG. 17) to determine a route between the current aircraft position and the approach fix. The approach fix can include, but is not necessarily limited to: an FAF, an IAF, a point on a published approach, an arbitrary fix point that the system 100 selects to enable the aircraft to land on the selected runway (e.g., a visual approach fix, etc.), and so forth. For example, the autoland module 214 can cause the processor 202 to create a FAF for the destination airport. The processor 202 can then create a route from the current position of the aircraft to the FAF. In embodiments, the processor 202 can create a route from the current position of the aircraft to the FAF that accounts for the terrain and/or obstacle characteristics.
The processor 202 can create a route that avoids terrain and/or obstacle characteristics that exceed a predetermined condition. In some embodiments, the processor 202 can compare the terrain characteristics and/or obstacle characteristics with a predefined elevation and/or altitude threshold. For example, the processor 202 can identify the elevation and/or altitude of a land region, and create a waypoint at a preselected altitude (e.g., 1,000 ft.) above the highest terrain. In other embodiments, the processor 202 can determine a direct route between the current position and the destination where no terrain characteristics or obstacles are present. For example, the processor 202 can create a direct route at the present altitude between the current position and the FAF.

In some implementations, weather data is received (Block 1212). The weather data can include at least one weather intensity characteristic and the terrain data can include at least one terrain characteristic. For example, the processor 202 can operate on the aircraft route calculation process to analyze weather data (e.g., weather radar, XM, datalink weather, icing data) and/or forecast data (e.g., Winds and Temperatures Aloft Forecast data, turbulence data, windshear data, NEXRAD data, etc.) for weather intensity characteristics. Weather intensity characteristics can include, but are not necessarily limited to: precipitation level, precipitation type (e.g., rain, snow, sleet, etc.), atmospheric conditions (e.g., wind speed, wind direction, temperature, etc.), storm attributes (e.g., storm top elevation, reflectivity, vertically integrated water, probability of hail, probability of severe hail, maximum hailstone diameter, speed and/or direction of storm movement, tornadic activity, etc.), and so forth.

In some implementations, the weather intensity characteristic is compared to a predefined condition along the route (Block 1214). For example, the autoland module 214 can cause the processor 202 to compare weather intensity characteristics to a predefined severity and/or intensity threshold. In some implementations, the system determines if the route can be re-created to avoid weather characteristics that exceed the predefined condition (Decision Block 1216). If the route cannot be re-created (NO to Decision Block 1216), the parameters defining the predefined condition are modified until the route can be re-created (Block 1218). For example, the processor 202 can expand the tolerance for the predefined condition (e.g., expand the tolerance for weather severity) until a route can be determined based upon the expanded tolerance for the predefined condition(s). If the route can be re-created (YES to Decision Block 1216), then the route is re-created to avoid weather intensity characteristics that exceed the predefined condition (Block 1220). The autoland module 214 can cause the processor 202 to create one or more waypoints associated with the weather intensity characteristics. For example, the processor 202 can create one or more waypoints to avoid severe weather areas and/or predicted severe weather areas, as described above. As the aircraft passes a waypoint, the processor 202 can operate on the aircraft route calculation process to dynamically analyze the weather data and/or forecasting data of the re-created course and create waypoints until no weather intensity characteristics exceeding the predefined condition remain on the route to the approach fix.
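The fallback at Decision Block 1216/Block 1218 amounts to relaxing the severity tolerance until a compliant route exists. A minimal sketch, assuming a hypothetical planner find_route that returns None when no route satisfies the current tolerance; the step size and limits are assumptions.

```python
def route_with_relaxation(find_route, start, fix, severity_limit=0.5,
                          step=0.1, max_limit=1.0):
    """Relax the weather-severity tolerance until a route can be created."""
    while severity_limit <= max_limit:
        route = find_route(start, fix, severity_limit)
        if route is not None:
            return route, severity_limit
        severity_limit += step          # expand the tolerance and retry
    return None, severity_limit         # no route even at the loosest tolerance
```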
In some implementations, the system 100 determines if the destination is still within the range of travel based upon the determined endurance and the weather intensity characteristics and/or terrain characteristics (Decision Block 1222). For example, the autoland module 214 can cause the processor 202 to determine if the airport is still within the preselected distance (e.g., 200 to 500 miles) from the aircraft based on the re-calculated route. If the destination is no longer within the range of travel (NO from Decision Block 1222), destinations are re-identified within the range of travel (Block 1202). If the airport is still within the range of travel (YES from Decision Block 1222), the route is traversed (Block 1224). For example, the autoland module 214 can cause the processor 202 to replace the flight plan's previous route with the newly calculated route and/or re-created route. In one or more implementations, the autoland module 214 can cause the flight director, autopilot system, and/or navigation system to actuate one or more modes of operation to traverse the route, as described above. For example, the autoland module 214 can cause the autopilot system to actuate the vertical navigation mode (VNAV) and/or the lateral navigation mode (LNAV) to traverse the route from the current position of the aircraft to the waypoint(s) and/or the FAF. The autoland module 214 can also cause the autopilot system to actuate a flight level change (FLC) mode and/or an altitude hold mode (ALT) to achieve and/or maintain a desired airspeed and/or altitude while traversing the route.

In one or more embodiments, a final approach segment associated with the route is determined (Block 1226). For example, the final approach segment can be determined using one or more final approach segment determination processes (e.g., as described with reference to FIG. 18) and/or techniques (e.g., as described with reference to FIGS. 3A through 3F). In some implementations, one or more terrain and/or obstacle characteristics associated with the final approach segment are identified (Block 1228). For example, a clearance detection plane can be determined by offsetting the FAF altitude by a configurable FAF clearance amount and determining a second glide path angle (θ2) associated with the GPIP1. The module 214 can then cause the processor 202 to evaluate terrain and/or obstacle data for intrusion against the detection plane (e.g., identify one or more terrain characteristics and/or obstacle characteristics with an elevation and/or altitude that exceeds that of the detection plane).

An adjusted final approach segment is determined that accounts for the terrain and/or obstacle characteristics (Block 1230). In embodiments, the module 214 causes the processor 202 to determine a vertical path adjustment for the approach fix. For example, the module 214 causes the processor 202 to determine a GPIP lateral offset associated with the terrain characteristic and/or obstacle characteristic, and an associated offset glide path intercept point (GPIP2). The processor 202 then adjusts the final approach segment by adjusting the FAF altitude and/or the MAP altitude based on the GPIP2 and the original glide path angle (θ1). The system can cause the aircraft to land at the destination without requiring pilot intervention (Block 1232). For example, the autoland module 214 can cause the processor 202 to execute one or more landing processes, flare processes, and/or elevation processes to land the aircraft as described above.
In implementations, the autoland module 214 engages one or more components and/or systems of the aircraft that are internal and/or external to the system 100 for traversing the route and/or landing the aircraft. For example, the autoland module 214 can cause the CDU 106 to engage one or more of the autopilot system, the flight director, the autothrottle, the ESP, the EDM, the braking system, the aerodynamic controls, the engine, and so forth.

FIG. 14 illustrates an example process 1300 for determining an endurance of an aircraft utilizing an integrated avionics system, such as the integrated avionics system 100 described above. As shown in FIG. 14, a fuel tank of the aircraft is selected (Block 1302). In example implementations, if the aircraft has manually selected tanks for fuel usage and the avionics does not know which tank is selected, the processor 202 may utilize the tank with the least fuel for the endurance calculation. In another example, if the aircraft has manually selected tanks and the avionics knows which tank is selected, then the CDU 106 can utilize the selected tank for the endurance calculation. The current amount of fuel available in the selected tank is determined (Block 1304). As shown in FIG. 14, the aircraft's current total fuel flow is then determined (Block 1306). An endurance for the aircraft is determined based on the current amount of fuel available and the current total fuel flow (Block 1308). The endurance of the aircraft may be defined as the current fuel divided by the current total fuel flow.

FIG. 15 illustrates an example process 1400 for identifying one or more airports within the range of travel of an aircraft utilizing an integrated avionics system, such as the integrated avionics system 100 described above. The endurance of the aircraft is determined (Block 1402). For example, the endurance of the aircraft can be determined using an endurance process, such as the endurance process illustrated in FIG. 14. The system 100 can determine whether or not airports are located within the range of travel based upon the endurance (Decision Block 1404). For example, the autoland module 214 can cause the processor 202 to identify any airports within a preselected distance from the aircraft. In some implementations, the potential airports can be those within approximately 200 to 500 miles (depending on the plane type). If there are no airports within the range of travel of the aircraft (NO to Decision Block 1404), then an airport outside of the range of travel is selected as the destination airport (Block 1406). Airports outside of the range of travel can include, but are not necessarily limited to: the closest airport, the last loaded origin, the last loaded destination airport, previously available destination airports, and so forth. If there are airports located within the range of travel (YES to Decision Block 1404), a determination is made of whether the airports within the range of travel have any adverse landing characteristics (Decision Block 1408). For example, the processor 202 can eliminate airports that have one or more adverse landing characteristics. Adverse landing characteristics can include, but are not necessarily limited to: airports that do not have at least one hard-surface runway, airports that are heliports only, airports that do not have at least one acceptable approach (e.g., a GPS approach to a runway with vertical guidance), and so forth. In some implementations, the autoland module 214 can cause the processor to treat unfavorable weather conditions at the airport as an adverse landing characteristic.
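Process 1300 reduces to a short calculation once the tank is chosen. The sketch below follows the conservative rule from the text of assuming the emptiest tank when the selected tank is unknown to the avionics; the data shapes are illustrative.

```python
def endurance_hours(tank_fuel_gal, fuel_flow_gph, selected_tank=None):
    """Endurance = current fuel / current total fuel flow (process 1300)."""
    if selected_tank is not None:
        usable = tank_fuel_gal[selected_tank]   # avionics knows the tank
    else:
        usable = min(tank_fuel_gal.values())    # unknown: assume emptiest tank
    return usable / fuel_flow_gph

print(endurance_hours({"left": 22.0, "right": 30.0}, 11.0))  # -> 2.0 hours
```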
If the airport has one or more adverse landing characteristics (YES to Decision Block 1408), then the airport is eliminated from the potential destination airports (Block 1410). If the airport does not possess one or more adverse landing characteristics (NO to Decision Block 1408), then it is designated as a potential destination airport (Block 1412).

FIG. 16 illustrates an example process 1500 for selecting a destination airport utilizing an integrated avionics system, such as the integrated avionics system 100 described above. As shown in FIG. 16, potential destination airports are identified (Block 1502). For example, the potential destination airports can be identified using an airport selection process, such as the airport selection process illustrated in FIG. 15. One or more runway merits are calculated for each runway of the potential destination airports (Block 1504). For example, the autoland module 214 can cause the processor 202 to calculate one or more merits for each runway based on a variety of runway attributes, as described above. Runway attributes can include, but are not necessarily limited to: final approach course alignment with the runway, runway characteristics (e.g., runway length, runway width, approach vertical angle (e.g., flight path angle), gradient, etc.), weather conditions (e.g., weather rating (e.g., IFR, VFR, etc.), gust, precipitation level, precipitation type, etc.), attributes specific to the airport (e.g., an airport with a tower, airports that anchor class B airspace, exclusively military airports, etc.), travel time to the airport (ETE), and so forth. The autoland module 214 can cause the processor 202 to calculate a merit value for each attribute. For example, the processor 202 can assign each attribute a merit value in the range of −1.0 to 1.0, with 1.0 representing an ideal runway. Negative merit values can be considered to be out of limits.

In some implementations, a final approach course alignment runway merit is calculated for a runway corresponding to each potential destination airport (Block 1506). For example, the processor 202 can calculate the degrees of misalignment of the aircraft with the runway. In some implementations, the processor 202 can eliminate runways that exceed a preselected maximum misalignment threshold (e.g., 25 degrees to 35 degrees of misalignment). In some implementations, an airport attribute runway merit is calculated for each destination airport (Block 1508). For example, the processor 202 can assign a high merit value to airports with towers, as the presence of a tower can indicate that the airport has emergency facilities. The processor 202 can assign low merit values to airports that anchor class B airspace and/or exclusively military airports. In some implementations, a travel time runway merit is calculated for a runway corresponding to each potential destination airport (Block 1510). For example, the processor 202 can calculate the time to the runway using the groundspeed along a selected path from the wind triangle based on wind speed and/or wind direction. In some implementations, a runway characteristics merit is calculated for a runway corresponding to each potential destination airport (Block 1512). In some implementations, a weather conditions merit is calculated for each potential destination airport (Block 1514). For example, the processor 202 can assign low merit values to airports with low visibility, high wind speeds, and so forth. The autoland module 214 can then determine the total merit for each runway (Block 1516).
In embodiments, the autoland module 214 can determine a total merit for each runway by applying a predetermined weighting factor (K) to each runway merit (M), as described above. The processor 202 can then determine which runway has the highest total merit (e.g., the highest Σ(K*M)).

In some implementations, the autoland module 214 can incorporate route weather data in determining the total merit for each airport. For example, the module 214 can cause the processor 202 to analyze the weather data and/or forecast data for one or more weather intensity characteristics. Weather intensity characteristics can include, but are not necessarily limited to: precipitation level, precipitation type (e.g., rain, snow, sleet, etc.), atmospheric conditions (e.g., wind speed, wind direction, temperature, etc.), storm attributes (e.g., storm top elevation, reflectivity, vertically integrated water, probability of hail, probability of severe hail, maximum hailstone diameter, speed and/or direction of storm movement, tornadic activity, etc.), and so forth. The autoland module 214 can cause the processor 202 to compare the weather intensity characteristics to a predefined condition (e.g., a predefined severity and/or intensity threshold). For example, the autoland module 214 can cause the processor 202 to compare weather intensity characteristics to predefined weather severity levels (e.g., low, medium, high, etc.) and identify weather severity areas. If the weather intensity characteristics of a weather area exceed one or more of the predefined severity levels, the autoland module 214 can cause the processor 202 to adjust the airport merit accordingly. For example, the processor 202 can create a buffer area around weather areas of predefined severity levels and downgrade runways that require passing through those areas. The processor 202 can downgrade (e.g., assess a penalty against) runways that require a route passing through a preselected radius (e.g., approximately five miles to approximately 15 miles) of a high-severity weather area (e.g., areas depicted on a NEXRAD map as red areas). The processor 202 can also eliminate (e.g., assign a negative M to) runways that require a route passing through a preselected radius (e.g., approximately two miles to approximately four miles) of a high-severity weather area (e.g., NEXRAD red areas). In some implementations, the processor 202 can increase the minimum distance (e.g., increase the preselected radius) from a high-severity weather area (e.g., NEXRAD red areas) based on the size of the area. The processor 202 can also be configured to identify gradient changes in weather intensity characteristics. For example, the processor 202 can identify areas that change from a low-severity area to a medium-severity area within a specified distance (e.g., approximately one mile) and treat those areas as high-severity areas. If a route cannot be determined through the weather and/or no routes can be determined due to weather (e.g., all routes contain weather that prohibits routing), the processor 202 can expand the tolerance for the predefined condition (e.g., expand the tolerance for weather severity) until a route can be determined.

A destination airport is selected based upon the total merit (Block 1518). For example, the autoland module 214 can cause the processor 202 to select the airport with the highest runway total merit as the destination airport.
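The Σ(K*M) computation described above can be sketched as a weighted sum over per-attribute merits in [−1.0, 1.0]. The weighting factors K are predetermined in the text but unspecified, so the values below are assumptions.

```python
WEIGHTS_K = {"alignment": 1.0, "runway": 2.0, "weather": 2.0,
             "airport": 0.5, "travel_time": 1.0}   # assumed K factors

def total_merit(merits_m):
    """Sum of K * M over a runway's per-attribute merits in [-1.0, 1.0]."""
    return sum(WEIGHTS_K[k] * m for k, m in merits_m.items())

runways = {
    "RWY 18": {"alignment": 0.8, "runway": 0.9, "weather": 0.4,
               "airport": 1.0, "travel_time": 0.6},
    "RWY 27": {"alignment": 0.3, "runway": 0.7, "weather": -0.2,
               "airport": 0.0, "travel_time": 0.9},
}
best = max(runways, key=lambda r: total_merit(runways[r]))
print(best)  # -> RWY 18 (total merit 4.5 vs. 2.2)
```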
If the processor 202 is unable to identify an optimal runway (e.g., a runway with a positive total merit), the processor 202 can select the runway with the highest negative merit. In another implementation, the user can manually select a destination airport and/or runway from the potential airports via the touch screen 210.

FIG. 17 illustrates an example process 1600 for creating a route from a current position of an aircraft to a destination airport utilizing an integrated avionics system, such as the integrated avionics system 100 described above. As shown in FIG. 17, a FAF is created for the destination (Block 1602). For example, the processor 202 can determine a lateral position for the FAF that reflects the position of an existing published FAF. A route is created from the current position of the aircraft to the FAF (Block 1604). In some embodiments, the processor 202 can create a direct route at the present altitude between the current position and the FAF. In other embodiments, the processor 202 can create a route from the current position of the aircraft to the FAF that accounts for the terrain and/or obstacle characteristics. The processor 202 can create a route that avoids terrain and/or obstacle characteristics that exceed a predetermined condition. In some embodiments, the processor 202 can compare the terrain characteristics and/or obstacle characteristics with a predefined elevation and/or altitude threshold. For example, the processor 202 can identify the elevation and/or altitude of a land region, and create a waypoint at a preselected altitude (e.g., 1,000 ft.) above the highest terrain. In some embodiments, the route can comprise a FAF−1, where the FAF−1 is a distance back from the FAF in the direction of the MAP-to-FAF course such that the path is under the glide path.

In some embodiments, the processor 202 can then create a hold at the FAF. For example, the processor 202 can create a standard (e.g., right turn) or non-standard (e.g., left turn) holding pattern at the FAF, at the FAF altitude, with a minimum leg length, where the inbound course of the hold equals the outbound course from the FAF. In embodiments, the hold pattern can be based on one or more of the speed of the aircraft, the altitude of the aircraft, and/or the course of the aircraft. For example, a hold pattern can be traversed if one or more of the following conditions are met: 1) the speed of the aircraft is within a defined tolerance relative to the approach speed, 2) the course of the aircraft is within a defined tolerance relative to the FAF (e.g., within a defined tolerance of the FAF to the FAF+1), and 3) the altitude of the aircraft is within a defined tolerance relative to the altitude of the FAF. The processor 202 can also create a waypoint associated with the runway (e.g., at the start of the runway) to allow for navigation of the runway (e.g., alignment). In one or more implementations, the aircraft route calculation process 1600 can be performed to determine and/or analyze a route in view of predefined characteristics (e.g., distance, terrain characteristics, weather characteristics, etc.).

The route to the FAF is loaded to the flight plan (Block 1606). The published MAP to the runway endpoint is then loaded to the flight plan (Block 1608). In one or more embodiments, the MAP is adjusted based on clearance and/or runway alignment of the aircraft (Block 1610). For example, a final approach segment can be calculated based on the published FAF and MAP.
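The three hold-entry conditions enumerated above translate directly into tolerance checks. The tolerance values below are placeholders, since the text only calls them "defined tolerances"; the course check wraps angles so that, for example, 359 degrees and 1 degree are treated as 2 degrees apart.

```python
def hold_entry_ok(speed_kt, approach_speed_kt, course_deg, faf_course_deg,
                  altitude_ft, faf_altitude_ft,
                  speed_tol_kt=10, course_tol_deg=15, alt_tol_ft=200):
    """Check the speed, course, and altitude tolerances for entering the hold."""
    course_err = abs((course_deg - faf_course_deg + 180) % 360 - 180)  # wrapped
    return (abs(speed_kt - approach_speed_kt) <= speed_tol_kt
            and course_err <= course_tol_deg
            and abs(altitude_ft - faf_altitude_ft) <= alt_tol_ft)

print(hold_entry_ok(112, 110, 358, 5, 3100, 3000))  # -> True
```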
The final approach segment can be determined using one or more final approach segment determination processes (e.g., as described with reference to FIG. 18) and/or techniques (e.g., as described with reference to FIGS. 3A through 3F). In some implementations, the final approach segment is adjusted based on one or more obstacle and/or terrain characteristics. For example, a clearance detection plane can be determined by offsetting the FAF altitude by a configurable FAF clearance amount and determining a second glide path angle (θ2) associated with the GPIP1. The module 214 can then cause the processor 202 to evaluate terrain and/or obstacle data for intrusion against the detection plane (e.g., identify one or more terrain characteristics and/or obstacle characteristics with an elevation and/or altitude that exceeds that of the detection plane). The module 214 then determines an adjusted final approach segment that accounts for the terrain and/or obstacle characteristics. In embodiments, the module 214 causes the processor 202 to determine a vertical path adjustment for the approach fix. For example, the module 214 causes the processor 202 to determine a GPIP lateral offset associated with the terrain characteristic and/or obstacle characteristic, and an associated offset glide path intercept point (GPIP2). The processor 202 then adjusts the final approach segment by adjusting the FAF altitude and/or the MAP altitude based on the GPIP2 and the original glide path angle (θ1).

In some implementations, weather data is received (Block 1612). The weather data can include at least one weather intensity characteristic and the terrain data can include at least one terrain characteristic. For example, the processor 202 can operate on the aircraft route calculation process to analyze weather data (e.g., weather radar, XM, datalink weather, icing data) and/or forecast data (e.g., Winds and Temperatures Aloft Forecast data, turbulence data, windshear data, NEXRAD data, etc.) for weather intensity characteristics. Weather intensity characteristics can include, but are not necessarily limited to: precipitation level, precipitation type (e.g., rain, snow, sleet, etc.), atmospheric conditions (e.g., wind speed, wind direction, temperature, etc.), storm attributes (e.g., storm top elevation, reflectivity, vertically integrated water, probability of hail, probability of severe hail, maximum hailstone diameter, speed and/or direction of storm movement, tornadic activity, etc.), and so forth.

In some implementations, the weather intensity characteristic is compared to a predefined condition along the route (Block 1614). For example, the autoland module 214 can cause the processor 202 to compare weather intensity characteristics to a predefined severity and/or intensity threshold. In some implementations, the system determines if the route can be re-created to avoid weather characteristics that exceed the predefined condition (Decision Block 1616). If the route cannot be re-created (NO to Decision Block 1616), then the parameters defining the predefined condition are modified until the route can be re-created (Block 1618). For example, the processor 202 can modify the tolerance for the predefined condition (e.g., expand the tolerance for weather severity) until a route can be determined. If the route can be re-created (YES to Decision Block 1616), then the route is re-created to avoid weather intensity characteristics that exceed the predefined condition (Block 1620).
The autoland module 214 can cause the processor 202 to create one or more waypoints associated with the weather intensity characteristics. For example, the processor 202 can create one or more waypoints to avoid severe weather areas and/or predicted severe weather areas, as described above. As the aircraft passes a waypoint, the processor 202 can operate on the aircraft route calculation process to dynamically analyze the weather data and/or forecasting data of the re-created course and create waypoints until no weather intensity characteristics exceeding the predefined condition remain on the route to the FAF. Once the route is re-created, the system 100 can return to Block 1606 to load the re-created route in order to traverse the re-created route.

FIG. 18 illustrates an example process 1700 for determining a final approach segment for a route of an aircraft utilizing an integrated avionics system, such as the integrated avionics system 100 described above. As shown in FIG. 18, a final approach segment associated with a route of an aircraft is determined based on one or more runway alignment characteristics (Block 1702). Runway alignment characteristics can include, but are not necessarily limited to: the approach fix (e.g., FAF), the glide path intercept point (GPIP1), the glide path angle (θ1), the threshold crossing height (TCH1), the MAP, and so forth. For example, the module 214 can cause the processor to determine a path from the FAF to the GPIP1. The placement of the MAP can be over the runway threshold, or may be artificially adjusted to the runway threshold. The runway alignment characteristics can be furnished to the autoland module 214 by other components internal to the system 100 (e.g., FMS, AHRS, ADCs, IDUs, other modules, etc.) and/or by a user (e.g., the pilot). In some embodiments, the autoland module 214 can obtain the runway alignment characteristics from the published flight plan.

A clearance detection plane associated with the final approach segment is identified (Block 1704). For example, a clearance detection plane can be determined by offsetting the FAF altitude by a configurable FAF clearance amount and determining a second glide path angle (θ2) associated with the GPIP1. Obstacle characteristics and/or terrain characteristics that intrude on the clearance detection plane are detected (Block 1706). The module 214 can cause the processor 202 to evaluate terrain and/or obstacle data for intrusion against the detection plane. For example, the processor 202 can identify one or more terrain characteristics and/or obstacle characteristics with an elevation and/or altitude that exceeds that of the detection plane.

In one or more embodiments, an adjusted final approach segment is determined that accounts for the terrain and/or obstacle characteristics (Block 1708). The module 214 causes the processor 202 to determine a vertical path adjustment for the approach fix. For example, the processor 202 can determine a GPIP lateral offset associated with the terrain characteristic and/or obstacle characteristic, and an associated offset glide path intercept point (GPIP2). The processor 202 then adjusts the final approach segment by adjusting the FAF altitude and/or the MAP altitude based on the GPIP2 and the original glide path angle (θ1).
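The detection-plane test and GPIP offset can be sketched with simplified flat-runway geometry: the plane is the line from GPIP1 up to the FAF position lowered by the clearance amount, and any intruding point pushes the glide path intercept point along the runway until the original θ1 path clears it. The glide path angle, clearance amount, and FAF distance below are assumptions, not values from the source.

```python
import math

GLIDE_PATH_DEG = 3.0          # published glide path angle (theta-1), assumed
FAF_CLEARANCE_FT = 250.0      # configurable FAF clearance offset, assumed
FAF_DISTANCE_FT = 30_000.0    # distance from GPIP1 to the FAF, assumed

def detection_plane_angle():
    """Second glide path angle (theta-2) through GPIP1 under the lowered FAF."""
    faf_height = FAF_DISTANCE_FT * math.tan(math.radians(GLIDE_PATH_DEG))
    return math.atan((faf_height - FAF_CLEARANCE_FT) / FAF_DISTANCE_FT)

def gpip_offset_ft(obstacles):
    """Offset moving GPIP1 to GPIP2 so the theta-1 path clears all intruders.

    obstacles: iterable of (distance_from_gpip1_ft, height_above_runway_ft).
    """
    theta2 = detection_plane_angle()
    offset = 0.0
    for dist, height in obstacles:
        if height > dist * math.tan(theta2):          # intrudes on the plane
            # Shift the intercept point down the runway until the original
            # theta-1 path clears the obstacle.
            needed = height / math.tan(math.radians(GLIDE_PATH_DEG)) - dist
            offset = max(offset, needed)
    return offset

print(round(gpip_offset_ft([(2_000.0, 180.0)])))  # -> 1435 ft for this intruder
```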
As shown in FIG. 3F above, the adjusted final approach segment will have the same approach angle (θ1) as the published approach (e.g., the adjusted final approach segment will be parallel to the original published approach segment), but will utilize a shorter landing distance (e.g., landing distance 2) than the landing distance for the published approach (e.g., landing distance 1). The system then determines if the runway is viable for landing based on the final approach segment and/or the adjusted final approach segment (Decision Block 1710). For example, the module 214 can cause the processor 202 to determine that the runway is nonviable when the shortened landing distance (landing distance 2) is beneath a predetermined distance threshold. If the runway is determined to be nonviable (NO to Decision Block 1710), the runway is discarded and not used for landing (Block 1712). In some embodiments, visual inspection of the path can be utilized to determine a final approach segment. In other embodiments, the module 214 can cause the processor 202 to select an alternative runway utilizing the techniques described herein. If the runway is viable (YES to Decision Block 1710), the final approach segment and/or adjusted final approach segment is utilized for landing the aircraft (Block 1714).

CONCLUSION

Although the integrated avionics system 100 has been described with reference to example implementations illustrated in the attached drawing figures, it is noted that equivalents may be employed and substitutions made herein without departing from the scope of the invention as recited in the claims. Further, the integrated avionics system 100, including respective components, as illustrated and described herein is merely an example of a system and components that may be used to implement the present disclosure and may be replaced with other devices and components without departing from the scope of the present disclosure.
11862030 | DETAILED DESCRIPTION Elements that are present in more than one of the figures are given the same references in each of them, unless otherwise indicated. The terms “low”, “high”, “top”, “bottom”, “above”, “below”, “vertical”, “horizontal” or the like used hereinafter are to be considered as seen by an observer on the ground, and when the aircraft is not in an upside-down position, i.e., when the aircraft is hovering, for example, or is not performing a loop manoeuvre, for example. FIG. 1 shows an aircraft 1 according to the disclosure. Optionally, and not exclusively, this aircraft 1 may be a rotorcraft comprising a rotor 5. Irrespective of this aspect, the aircraft 1 may include an airframe that extends upwards from a bottom end, referred to more simply as the “end 4”, to a top 2. According to the example shown, the top 2 may be situated at the rotor 5, in this case at a cap of the rotor 5. According to the example shown, the end 4 may be the point of a landing gear 3 that is closest to the ground, for example when the aircraft 1 is in a stationary position and there is no wind. Irrespective of these aspects, the aircraft 1 is provided with an obstacle detection system 10. The obstacle detection system 10 is provided with an obstacle sensor 15. The obstacle sensor 15 is configured to detect one or more obstacles 60 in a surrounding space 70. The obstacles 60 may be of various shapes, for example being part of the relief 63, a tree 62, a pylon 61, another aircraft, a building, etc. In particular, and with reference to FIG. 2, the surrounding space 70 examined and, for example, scanned by the obstacle sensor 15 may comprise an examined volume 700 that extends through 360 degrees about an axis of symmetry AXSYM attached to the aircraft 1. In reference to FIG. 1, the examined volume 700 may extend in elevation and in partial section, i.e., vertically as seen by an observer on the ground 63, in an angular field 86 extending to either side of a median plane PMED that is orthogonal to the axis of symmetry AXSYM. The median plane PMED divides the angular field 86 into two equal parts. In particular, the angular field 86 may extend over an angle 87 of at least 40 degrees and, for example, over 45 degrees according to the example shown. Therefore, and in reference to FIG. 3, the examined volume 700 may be in the form of a volume obtained by rotating a sector of a disk about the axis of symmetry AXSYM. Such a volume can be examined very quickly, for example at a frequency of the order of 5 Hertz. The surrounding space 70 may be restricted to this examined volume 700. Alternatively, the surrounding space 70 may result from the aggregation of all the examined volumes 700 that have been analyzed during a measurement time. This measurement time may be fixed, being stored, or variable, being calculated by the system and possibly by the obstacle sensor described below or a computer that may or may not be dedicated to this application. For example, the measurement time depends on at least one speed of the aircraft, for example depending on the air speed. The obstacle sensor 15 may comprise one or more obstacle sensing devices 16. For example, an obstacle sensing device 16 emits a signal and receives the signal returned by a point of an obstacle 60, if applicable. This point is referred to as an “obstacle point”. For example, an obstacle sensing device 16 may be of the LiDAR type. For example, an obstacle sensing device 16 may be made mobile by a motorized system in order to rotate about an axis and, in particular, said axis of symmetry AXSYM.
According to one possibility, the obstacle sensor 15 may be provided with a single obstacle sensing device 16, for example of the LiDAR type. According to another aspect, the obstacle sensor 15 emits, in a conventional manner, at least one signal carrying at least one item of positioning data of each detected obstacle point 75. For example, the obstacle sensor 15 determines, for each detected obstacle point 75, the distance of the obstacle point 75 from a reference Ref. This reference Ref may be an obstacle sensing device 16 of the obstacle sensor 15. The obstacle sensor 15 can determine, for each detected obstacle point 75, an angle of elevation alphan relative to the reference Ref, and, for example, relative to the median plane PMED passing through the reference Ref. In reference to FIG. 2, the obstacle sensor 15 can determine, for each detected obstacle point 75, a bearing angle psin of the obstacle point relative to the reference Ref and, for example, relative to a forward direction DIR passing through this reference Ref. The obstacle sensor 15 may comprise a detector provided with a memory which stores all the obstacle points obtained by the obstacle sensing device or devices, possibly during a sliding period referred to as the “measurement time”. The detector may further comprise an algorithm for positioning all the obstacle points obtained during the measurement time, according to the aforementioned example, relative to the reference Ref and in terms of a distance as well as an angle of elevation and a bearing angle. For example, this algorithm is referred to as a “SLAM” (Simultaneous Localization And Mapping) algorithm. The detector may comprise at least one processor and at least one memory, at least one integrated circuit, at least one programmable system, or at least one logic circuit. In addition, and in reference once more to FIG. 1, the obstacle detection system 10 includes a filter 20. The filter 20 communicates via a wired or wireless link with the obstacle sensor 15 and, if applicable, with the detector and/or with each obstacle sensing device. The filter 20 may comprise at least one processor and at least one memory, at least one integrated circuit, at least one programmable system, or at least one logic circuit, these examples not limiting the scope to be given to the term “filter”. The term “processor” may refer equally to a central processing unit (CPU), a graphics processing unit (GPU), a digital signal processor (DSP), a microcontroller, etc. The filter 20 may comprise one or more units. Optionally, the detector and the filter may form part of the same computer. Optionally, the filter 20 may communicate via a wired or wireless link with a measurement system 30 measuring a roll angle and a pitch angle of said aircraft 1. For example, the measurement system 30 may comprise an inclinometer for measuring the roll angle, an inclinometer for measuring the pitch angle or indeed an inertial unit. According to another aspect, the obstacle detection system 10 includes a display 25. The display 25 may, for example, be a two-dimensional display, i.e., a display displaying information in a two-dimensional representation. The filter 20 and the display 25 may share the same unit, for example. The filter 20 may be a computer of a display device, for example, the display 25 comprising, in particular, a screen of the display device. The filter 20 may comprise a unit shared with the obstacle sensor 15. The obstacle sensor 15, the filter 20 and the display 25 are in particular configured to apply the method according to the disclosure.
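As a minimal sketch of the positioning data described above, each detected obstacle point can be represented by its distance Dn, angle of elevation alphan and bearing angle psin relative to the reference Ref. The Cartesian conversion and its axis convention are assumptions for illustration; the description does not prescribe a coordinate convention.

```python
import math
from dataclasses import dataclass

@dataclass
class ObstaclePoint:
    """One detected obstacle point 75, positioned relative to Ref by the
    three quantities the obstacle sensor 15 reports."""
    dn: float       # distance Dn from Ref (metres assumed)
    alphan: float   # angle of elevation relative to the median plane, degrees
    psin: float     # bearing angle relative to the forward direction DIR, degrees

    def to_cartesian(self):
        # Conventional spherical-to-Cartesian conversion (an assumption).
        a, p = math.radians(self.alphan), math.radians(self.psin)
        x = self.dn * math.cos(a) * math.cos(p)   # forward
        y = self.dn * math.cos(a) * math.sin(p)   # right
        z = self.dn * math.sin(a)                 # up
        return x, y, z

print(ObstaclePoint(dn=500.0, alphan=10.0, psin=45.0).to_cartesian())
```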
In reference to FIG. 4, this method comprises examining STP1 a surrounding space 70 by means of the obstacle sensor 15. For example, an obstacle sensing device 16 referred to as a “wide angle” obstacle sensing device is able to rotate about the axis of symmetry AXSYM to detect the obstacles present in the surrounding space to be analyzed. The expression “surrounding space” denotes a volume searched by the obstacle sensor 15. The obstacle sensor 15 examines all the surrounding space that it is able to examine, this surrounding space being fixed and dependent on the sensor and not adjustable. The obstacle sensor 15 generates, for each detected obstacle point 75, at least one item of positioning data transmitted to the filter 20. Optionally, the filter 20 processes the positioning data emitted during a measurement time. For example, the obstacle sensor 15 transmits to the filter 20, for each detected obstacle point 75, the distance Dn, the angle of elevation alphan and the bearing angle psin described above. Obstacles 60 that pose no danger to the aircraft may be detected because of the large dimensions of the surrounding space 70 that is examined. According to FIG. 4, a tree 62 that poses no danger may be considered to be an obstacle 60. With reference to FIG. 5, the detection of obstacle points 75 relating to obstacles that pose no danger may increase depending on the roll angle or pitch angle of the aircraft 1. According to the disclosure and with reference once more to FIG. 4, the method then comprises determining STP2, from among all the obstacle points 75 and with the filter 20, each relevant point 80, each obstacle point situated within a predetermined detection volume 85 being a relevant point. This detection volume 85 is different from the surrounding space 70, but the detection volume 85 and the surrounding space 70 comprise a common volume. The detection volume 85 can be attached to the aircraft 1. The detection volume 85 represents a volume in which an obstacle point of an obstacle 60 is liable to present a danger to the aircraft 1. Therefore, according to the method, the filter 20 selects points referred to as “relevant points” that are considered to be potentially dangerous from among all the identified obstacle points. For example, an obstacle point 800 of the pylon 61 is considered relevant whereas all the obstacle points 75 belonging to the tree 62 are deemed irrelevant. The method then comprises displaying STP3 said relevant points 80 on the display 25, the obstacle points 75 deemed irrelevant not being displayed. Thus, the filter 20 is configured to determine whether an obstacle point 75 is a relevant point 80, i.e., a point belonging to the detection volume 85, or conversely an irrelevant point, and the display 25 is configured to display only the obstacle points of the relevant point 80 type, the irrelevant points not being displayed. According to another aspect, the detection volume 85 may extend above a low plane PINF, or even between, and inclusive of, a high plane PSUP and a low plane PINF. The high plane PSUP and the low plane PINF are optionally parallel or even horizontal. For example, the detection volume is in the form of a strip, covering all the space situated through 360 degrees around the aircraft between the low plane PINF and the high plane PSUP. The high plane PSUP and the low plane PINF are situated vertically and respectively above and below the aircraft 1, the aircraft thus being positioned in the space situated between the high plane PSUP and the low plane PINF.
Thus, the aircraft 1 and obstacles 60 that are potentially dangerous in the short term are situated in a space between the low plane PINF and the high plane PSUP. The filter 20 may therefore be configured to determine whether or not an obstacle point 75 belongs to the detection volume. For example, the filter 20 may be configured to determine a maximum threshold height H1 and a minimum threshold height H2. The maximum threshold height H1 represents a distance separating the reference Ref of the aircraft 1 and a point of the high plane PSUP vertically, i.e., according to gravity. Similarly, the minimum threshold height H2 represents a distance separating the reference Ref of the aircraft 1 and a point of the low plane PINF, according to gravity. For example, the maximum threshold height H1 may be equal in absolute value to the distance dis1 separating the top 2 of the aircraft 1 and the reference Ref, plus a first margin marg1, i.e., H1 = dis1 + marg1. Similarly, the minimum threshold height H2 may be equal in absolute value to the distance dis2 separating the end 4 of a landing gear 3 and the reference Ref, plus a second margin marg2, i.e., H2 = dis2 + marg2. Optionally, the maximum threshold height H1 and the minimum threshold height H2 are variable depending solely on a vertical speed of the aircraft 1. According to the preceding example, the first margin marg1 may be equal to a constant multiplied by the vertical speed VZ. Similarly, the second margin marg2 may be equal to a constant multiplied by the vertical speed VZ. Alternatively, the maximum threshold height H1 and the minimum threshold height H2 are variable depending on one or more of the following parameters: a vertical speed VZ of the aircraft 1, a height of the aircraft 1 relative to the ground, a forward speed of the aircraft 1. According to a first variant depicted in solid lines, the high plane PSUP and the low plane PINF are horizontal and positioned respectively at the maximum threshold height H1 and at the minimum threshold height H2 relative to the reference Ref. With reference to FIG. 5, the margins help ensure that the aircraft 1 remains between the low plane PINF and the high plane PSUP following a modification of its attitude, for example during roll. According to a second variant, the high plane PSUP and the low plane PINF are parallel to a plane containing a landing area. According to a third variant shown with dashed lines in FIG. 4, the high plane PSUP and the low plane PINF are parallel to a plane defined by a conventional longitudinal speed vector Vx and lateral speed vector Vy of the aircraft 1. To this end, the filter 20 can communicate with a conventional longitudinal speed sensor and a conventional lateral speed sensor. Irrespective of the variant, the method may include a step in which the filter 20 determines an obstacle height Hn of each obstacle point 75, relative to a reference plane PO passing through the reference Ref. For example, the reference plane PO can be a horizontal plane, a plane that coincides with the median plane, a plane parallel to the high plane PSUP. The filter 20 then determines, by means of this obstacle height Hn, whether the obstacle point 75 is situated in the space between and inclusive of the high plane PSUP and the low plane PINF. For example, the obstacle height Hn of an obstacle point 75 depends on the distance of the obstacle point 75 from the reference Ref and its angle of elevation alphan and its bearing angle psin.
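To make the first-variant filtering concrete, the threshold heights and the relevance test can be sketched as below; Hn itself is computed from the relation given in the following paragraph. The constants k1 and k2 and the use of the absolute vertical speed are illustrative assumptions, as the description only states that each margin is a constant multiplied by the vertical speed VZ.

```python
def threshold_heights(dis1, dis2, vz, k1=1.0, k2=1.0):
    """H1 = dis1 + marg1 and H2 = dis2 + marg2, each margin being a
    constant multiplied by the vertical speed VZ (first variant).
    k1, k2 and abs(vz) are illustrative assumptions."""
    return dis1 + k1 * abs(vz), dis2 + k2 * abs(vz)

def is_relevant(hn, h1, h2):
    """An obstacle point is relevant when -H2 <= Hn <= H1, with H2
    counted negatively below the reference plane PO."""
    return -h2 <= hn <= h1

# Rotor cap 3 m above Ref, landing-gear end 2 m below, climbing 1.5 m/s.
h1, h2 = threshold_heights(dis1=3.0, dis2=2.0, vz=1.5)
print(h1, h2, is_relevant(hn=-1.0, h1=h1, h2=h2))   # point below Ref, relevant
```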
Optionally, the obstacle height Hn of an obstacle point 75 is calculated using the following relation:

Hn = Dn * sin(alphan + (phi * sin(psin)) + (theta * cos(psin)))

where “Hn” represents the obstacle height, “=” represents the equals sign, “*” represents the multiplication sign, “+” represents the addition sign, “sin” represents the sine function, “cos” represents the cosine function, “Dn” represents the distance of the point from the reference, obtained by means of the obstacle sensor, “alphan” represents the angle of elevation of the point, obtained by means of the obstacle sensor, “psin” represents the bearing angle of the obstacle point, obtained by means of the obstacle sensor, “phi” represents the roll angle of the aircraft, and “theta” represents the pitch angle of the aircraft. In particular in the context of the first variant, if an obstacle point 75 has an obstacle height greater than or equal to the minimum threshold height H2 and less than or equal to the maximum threshold height H1, and considering that the minimum threshold height H2 has a negative sign and the maximum threshold height H1 has a positive sign, this obstacle point 75 is then considered to be a relevant point 80 by the filter 20. FIG. 6 shows an aircraft 1 according to the disclosure applying the method of the disclosure. During flight, the obstacle sensor 15 of the aircraft 1 examines the surrounding space 70. The filter 20 determines the relevant obstacle points. This FIG. 6 is a schematic illustration. Obstacles are shown schematically as hatched areas in order to illustrate the disclosure. FIG. 7 shows a display that would display all the obstacle points 75 obtained at the flight point shown in FIG. 6. It can be seen that a pilot cannot make use of the displayed information. FIG. 8 shows only the relevant points 80 displayed on a display 25, in a two-dimensional representation 28. The relevant points 80 are displayed by the display 25, by means of pixels, seen from above the aircraft 1 and through 360 degrees around a symbol 29 representing the aircraft 1. The displayed information becomes intelligible and thus contributes to flight safety. Optionally, the display 25 may perform a colorization step STP3.1. During this colorization step STP3.1, the display 25 assigns a color to each relevant point 80 depending on the nature of the material of the obstacle detected at this relevant point 80. Thus, a first relevant point 801 may have a first color COL1 whereas another relevant point 802 has another color COL2. This nature may be evaluated in a conventional manner by the obstacle sensor 15. Thus, each relevant point 80 may have a first color when the obstacle 60 is detected as belonging to the mineral group, a second color when the obstacle 60 is detected as belonging to the metal group, a third color when the obstacle 60 is detected as belonging to the organic material group, and a fourth color when the obstacle 60 is detected as belonging to the group of elements diffused in the air. Naturally, the present disclosure is subject to numerous variations as regards its implementation. Although several embodiments are described above, it should readily be understood that it is not conceivable to identify exhaustively all the possible embodiments. It is naturally possible to replace any of the means described with equivalent means without going beyond the ambit of the present disclosure. | 17,353 |
11862031 | DESCRIPTION OF THE PREFERRED EMBODIMENTS The following description of the preferred embodiments of the invention is not intended to limit the invention to these preferred embodiments, but rather to enable any person skilled in the art to make and use this invention. 1. Overview 1.1 Semantic Parsing The semantic parsing method, an example of which is shown in FIG. 2, can include performing inference using the system S200; and can optionally include training the system components S100. The method functions to automatically interpret flight commands from a stream of air traffic control (ATC) radio communications. The method can additionally or alternatively function to train and/or update a natural language processing system based on ATC communications. Additionally, the method can include or be used in conjunction with collision avoidance and/or directed perception for traffic detection associated therewith. The performing inference S200 can include: at an aircraft, receiving an audio utterance from air traffic control S210, converting the audio utterance into a predetermined format S215, determining commands using a question-and-answer model S240, and optionally controlling the aircraft based on the commands S250 (example shown in FIG. 3). The method functions to automatically interpret flight commands from the air traffic control (ATC) stream. The flight commands can be: automatically used to control aircraft flight; presented to a user (e.g., pilot, a remote teleoperator); relayed to an auto-pilot system in response to a user (e.g., pilot) confirmation; and/or otherwise used. In an illustrative example, the method can receive an ATC audio stream, convert the ATC audio stream to ATC text, and provide the ATC text (as the reference text) and a predetermined set of queries (each associated with a different flight command parameter) to an ATC-tuned question and answer model (e.g., ATC-tuned BERT), which analyzes the ATC text for the query answers. The query answers (e.g., responses of the question and answer model) can then be used to select follow-up queries and/or fill out a command parameter value, which can be used for direct or indirect aircraft control. The ATC audio stream can be converted to the ATC text using an ATC-tuned integrated sentence boundary detection and automatic speech recognition model (SBD/ASR model) and an ATC-tuned language model, wherein an utterance hypothesis (e.g., a sentence hypothesis, an utterance by an individual speaker, etc.) can be selected for inclusion in the ATC text based on the joint score from the SBD/ASR model and the language model. S200 can be performed using a semantic parsing system 100 including a Speech-to-Text module and a question and answer (Q/A) module (e.g., cooperatively forming a semantic parser). The system functions to interpret air traffic control (ATC) audio into flight commands, and can optionally control the aircraft based on the set of flight commands. The semantic parsing system 100 is preferably mounted to, installed on, integrated into, and/or configured to operate with any suitable vehicle (e.g., the semantic parsing system can include the vehicle). Preferably, the vehicle is an aircraft, but can alternatively be a watercraft, land-based vehicle, spacecraft, and/or any other suitable vehicle.
The semantic parsing system can be integrated with any suitable aircraft, such as a rotorcraft (e.g., helicopter, multi-copter), fixed-wing aircraft (e.g., airplane), VTOL, STOL, lighter-than-air aircraft, and/or any other suitable aircraft. Further, the vehicle can be an autonomous aircraft, unmanned aircraft (UAV), manned aircraft (e.g., with a pilot, with an unskilled operator executing primary aircraft control), semi-autonomous aircraft, and/or any other suitable aircraft. Hereinafter, the term ‘vehicle’ can refer to any suitable aircraft, and the term ‘aircraft’ can likewise refer to any other suitable vehicle. The semantic parsing system is preferably equipped on an autonomous aircraft, which is configured to control the aircraft according to a set of flight commands using a flight processing system without user (e.g., pilot) intervention. Alternatively, the semantic parsing system can be equipped on a semi-autonomous vehicle and/or human-operated vehicle as a flight aid. In a first variant, the semantic parsing system can display ATC commands to a user (e.g., pilot) and/or relay ATC commands to an auto-pilot system in response to a user (e.g., pilot) confirmation. The term “tuned,” as referenced in regard to neural networks, language models, or otherwise, can be understood to relate to tuning (e.g., adjusting) parameters (e.g., hyperparameters, training parameters, variables, etc.) using training data. Accordingly, an ATC-tuned network can be understood as having parameters tuned based on ATC audio and/or ATC-specific semantic training data (as opposed to a network dedicated to a specific radio frequency band). The term “traffic advisory” as utilized herein can refer to the term traffic advisory as relied upon by FAA, ATC, and/or general aviation guidelines. Additionally or alternatively, the term traffic advisory can refer to any suitable aircraft communications which pertain to air traffic, which can be used to inform/direct collision avoidance (e.g., even in cases where the ego aircraft is not specifically requested/required to identify said traffic). Accordingly, traffic advisories can be determined from ATC, communications of other aircraft on the same radio channel, communications intended for the ego aircraft (e.g., where the aircraft is the intended recipient), communications intended for other aircraft (e.g., where the aircraft is not the intended recipient), automated collision avoidance systems (e.g., TCAS, ACAS, remote collision avoidance/monitoring systems, etc.; which may be based on ground-based radar and transponder relays, space-based GPS, etc.), and/or any other suitable communications, advisories, and/or alerts. It is understood that, in some variants, the term traffic advisory as utilized herein can be interchangeably referenced with “traffic alert,” “traffic communication,” “aircraft advisory,” and/or other like terms. However, the term “traffic advisory” can be otherwise suitably relied upon or referenced herein. The term “negative contact” as utilized herein can refer to a failure to identify, perceive, locate, and/or detect an object/aircraft. For example, negative contact can refer to a lack of visual or other contact (e.g., transponder contact) with an adjacent aircraft (e.g., an aircraft associated with a traffic alert).
As a second example, negative contact can be a term used by pilots to inform ATC that the previously issued traffic is not in sight (e.g., which may be followed by a request for the controller to provide assistance in avoiding the traffic). In a third example, the term negative contact can be used as it is relied upon by FAA, ATC, and/or general aviation guidelines. However, the term “negative contact” can be otherwise suitably relied upon or referenced herein. Though the systems and/or methods herein are addressed in reference to aircraft, it is understood that, in some variants, these systems, methods, and/or elements thereof can be applied to land-based vehicles, taxiing aircraft, and the like. Accordingly, in some variants, the term “aircraft” as referenced herein can interchangeably refer to a vehicle, automobile, fixed-wing aircraft, rotorcraft, watercraft, and/or any other suitable vehicle(s), and/or can be otherwise suitably referenced. 1.2 Directed Aircraft Perception In variants, the perception system can use various sensors to detect other aircraft and/or objects in the vicinity in order to avoid collisions. In some cases, sensors (for example radar, cameras, or directional radio receivers) are arranged in an array. This may be because a single sensor's field of view is smaller than the overall sector of space in which traffic needs to be detected (for example a camera with a particular lens and image sensor resolution), or because an individual sensor cannot distinguish direction itself (for example a directional radio receiver) and which specific sensor detects traffic is indicative of the bearing to that traffic. Some detection methods (for example, computer vision using a camera or radar target detection) may utilize large amounts of signal processing. For example, computer vision may use a deep neural network detection method that requires teraflops' worth of computation to detect traffic from images in a video stream. Variants can reduce the computational effort of the traffic detection system by directing its attention to a particular sector of airspace and/or sampling region of perception data. In variants, this direction can be based on NLP capabilities (e.g., semantic parsing of ATC communications) and/or auxiliary data sources (e.g., low-resolution onboard inputs such as from a transponder, radar, and/or directional radio receiver; historical traffic data; etc.). In a nominal operational mode, a traffic detection system can process sensor inputs from the full sensor array, which covers a wide sector around the aircraft. In the nominal operational mode, sensor inputs can be processed ‘coarsely’ in order to operate with available compute resources. For example, the rate at which the whole array is processed might be relatively low, or the sensor input data (for example from a camera) might be downsampled to a lower spatial resolution. The detection system can search for any traffic at any range in the coverage sectors, which includes many classes of aircraft that may appear at many apparent sizes depending on their range (e.g., a distant large jet may be perceived as apparently similar to a closer light aircraft). When a communication is received on the radio from air traffic control, the semantic parsing system 100 can process the audio into a semantic interpretation (e.g., via NLP).
If the received communication pertains to traffic (for example, "traffic, one o'clock, 5 miles, traffic is a 737"), this information can be passed to the traffic detection system as a traffic advisory. The traffic detection system then increases its performance based on the information provided: it can process the sector of the array covering that bearing at higher rate or with higher resolution, reducing rate or resolution in the other array sectors where the presence of traffic is less of a risk. A priori knowledge about the detection profile of a 737 at a range of 5 miles (for example, its geometry and apparent size) can also improve the detection performance, for example by using a particular model trained on 737 detection and using a known size template for that range. As an example, traffic communications received from ATC may indicate an object identifier (e.g., call sign), estimated/expected (ego-relative) position of aircraft/objects (e.g., "two o'clock; two thousand feet above"), an object class (e.g., large aircraft class, such as a 737, or light aircraft such as a Cessna 172, etc.), and/or other distinguishing information or object characteristics (e.g., airline, etc.). 1.3 Examples In a first set of variants, a method for air traffic control (ATC)-directed collision avoidance on an aircraft includes: receiving an air traffic control (ATC) audio signal from a communication system; determining an utterance hypothesis from the ATC audio signal with automatic speech recognition (ASR); autonomously determining a traffic advisory by querying the utterance hypothesis with a pre-trained neural network model based on the utterance hypothesis, the traffic advisory comprising an estimated ego-relative position of an object; locating the object associated with the traffic advisory, comprising: based on aircraft perception data, performing an extended range search with a pretrained classifier, the extended range search directed based on the estimated ego-relative position; and performing an action based on the identification of the object. In some examples, performing the action can include controlling the aircraft based on the object. In some examples, performing the action can include reporting negative contact (e.g., via an ATC radio) and/or determining a resolution advisory (e.g., automatically generating a resolution advisory; onboard the aircraft, remotely, etc.). In some examples, the aircraft perception data comprises a set of camera images collected onboard the aircraft. In some examples, the extended range search is directed by restricting an image pixel search space within the set of camera images based on a proximity of the estimated ego-relative position. In some examples, the method further includes refining the extended range search based on a set of historical traffic data. In some examples, the method further includes refining the extended range search based on aircraft position data from an Automatic Dependent Surveillance-Broadcast (ADS-B). In some examples, a detection range of the extended range search is between 2 and 5 nautical miles (e.g., which may advantageously improve detection accuracy for aircraft which may not be identified during a coarse, closer-range search). In some examples, the object comprises a second aircraft.
In some examples, determining the utterance hypothesis from the ATC audio signal includes: with an integrated ASR and sentence boundary detection (SBD) module, generating a set of linguistic hypotheses based on the ATC audio signal; using an ATC-tuned language model, determining a respective language score for each linguistic hypothesis of the set of linguistic hypotheses; and determining the utterance hypothesis from the set of the linguistic hypotheses based on the respective language scores. In some examples, the traffic advisory is determined according to a sequence of the natural language queries. In a second set of variants, nonexclusive with the first, a method for vehicle collision avoidance includes: receiving an audio signal; determining an utterance hypothesis for the audio signal; autonomously determining a traffic alert based on the utterance hypothesis; in response to determination of the traffic alert, performing an extended-range search with a pretrained classifier using vehicle perception data; based on the extended range search, identifying an object associated with the traffic alert; and determining a vehicle command based on the identification of the object. In some examples, the vehicle comprises an aircraft and the object comprises a second aircraft, wherein the traffic alert comprises a traffic advisory from air traffic control (ATC). In some examples, the traffic alert comprises a position estimate for the object. In some examples, the method further includes automatically determining a resolution advisory, wherein the vehicle command is associated with the resolution advisory. In some examples, the extended range search is directed by restricting a search space within the set of camera images based on an estimated position of the object. In some examples, the estimated position of the object is based on aircraft position data from an Automatic Dependent Surveillance-Broadcast (ADS-B). In some examples, the method further includes refining the extended range search based on historical traffic data. 2. Benefits Variations of the technology can afford several benefits and/or advantages. First, variations of this technology can enable communication-directed vehicle perception (e.g., based on ATC communications), which can improve classification accuracy and/or extend the range of vehicle perception in an object (traffic) detection, collision avoidance, and/or navigational context (e.g., for navigation relative to terrain or terrestrial objects/structures, etc.). Additionally, such variants can improve the processing efficiency of object (traffic) detection and/or collision avoidance. For example, granular searches for objects (such as other aircraft in proximity to the flightpath) and/or high processing bandwidth searches can be performed in response to specific ATC communications or requests (e.g., discretely, discontinuously, etc.; as opposed to a continuous, coarser object detection routine which may be used to facilitate emergency collision avoidance relative to objects in close proximity to the aircraft, which may utilize less compute). Additionally, search spaces of granular and/or high processing bandwidth searches (e.g., extended range searches with a range between 2 and 5 nautical miles) can be restricted and/or refined based on ATC communications and/or other data sources (e.g., historical traffic patterns, Automatic Dependent Surveillance-Broadcast [ADS-B] data, etc.). 
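As a toy illustration of restricting the image pixel search space from an advisory's estimated ego-relative bearing, consider the sketch below. The camera model (a single forward camera with a linear pixel-per-degree mapping), the field of view, and the clock-position parsing are assumptions made for the sketch, not part of the disclosure.

```python
def advisory_roi(image_width_px, hfov_deg, bearing_clock, margin_deg=10.0):
    """Map a clock-position traffic call (e.g., "one o'clock") to a pixel
    column range so the classifier can search that sector at full
    resolution while the rest of the array is processed coarsely."""
    bearing_deg = (bearing_clock % 12) * 30.0    # 12 o'clock = dead ahead
    if bearing_deg > 180.0:
        bearing_deg -= 360.0                     # signed angle, right positive
    px_per_deg = image_width_px / hfov_deg
    center = image_width_px / 2.0 + bearing_deg * px_per_deg
    left = max(0, int(center - margin_deg * px_per_deg))
    right = min(image_width_px, int(center + margin_deg * px_per_deg))
    return left, right

# "Traffic, one o'clock": search roughly the columns around +30 degrees.
print(advisory_roi(image_width_px=4096, hfov_deg=120.0, bearing_clock=1))
```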
However, variations of this technology can otherwise enable communication-directed vehicle perception. Second, variants can partially or fully automate identification/classification of surrounding air traffic, or otherwise assist pilots in traffic detection, which may improve detection accuracy and thus reduce the frequency of ATC intervention to reroute traffic (e.g., in cases where surrounding traffic cannot be identified). Additionally, variants can partially or fully automate aircraft/pilot actions in response to ATC requests (e.g., confirming identification, determining a resolution advisory, etc.). Third, variants can confer increased semantic parsing accuracy over conventional systems by utilizing a multiple-query (or repeated question-and-answer) approach, for example by a neural network (e.g., BERT), since existing deep neural network models have high intrinsic accuracy in responding to these types of questions. Fourth, variations of this technology utilizing a multiple-query approach which asks natural language questions (e.g., "message intended for DAL456?"; "topics?"; "heading values?"; etc.) of a neural network can improve the interpretability and/or auditability of the semantic parser. In such variants, a specific module/model/query of the semantic parsing system can be identified as a point of failure when a user rejects a command, which can be used to further train/improve the semantic parsing system. In some variants, the multi-query approach can additionally enable portions of the semantic parser to be trained based on partial and/or incomplete tagged responses (e.g., which can be sufficient to answer a subset of the queries used to extract a command from an ATC transcript). As an example, training data can be used when values and/or aircraft tail numbers are not identified and/or validated within a training dataset. Fifth, variations of this technology can enable semantic parsing of ATC utterances without the use of grammar rules or syntax, which can be time-intensive to develop, slow to execute, and yield inaccurate results (particularly when handling edge case scenarios or unusual speech patterns). In an example: as a conversation between ATC and an aircraft continues, the ATC controller and the pilot often shorten phrases and/or deviate from the standard speech template, which can severely impact the efficacy of grammar/syntax-based NLP approaches. In variants, the semantic parsing system and/or method can convert unformatted audio, syntactically inconsistent (non-standardized) audio, and/or non-uniform audio data or a corresponding ATC transcript into a standardized/formatted data input (e.g., as may be accepted/interpreted by a certified aircraft processor). In variants, standardized inputs can be utilized to certify aircraft systems in a deterministically testable manner. As an example, the technology can be used to convert an arbitrarily large number of audio signals into a substantially finite set of commands (e.g., with bounded ranges of values corresponding to a predetermined set of aircraft command parameters, which can be deterministically tested and/or repeatably demonstrated). Sixth, variations of this technology can include an approach necessarily rooted in computer technology for overcoming a problem specifically arising in the realm of computer networks. In an example, the technology can automatically translate audio into a computer-readable format which can be interpreted by an aircraft processor.
In an example, the technology can enable control of a partially and/or fully autonomous system based on communications with ATC operators. In such examples, the system/method may act in place of an incapacitated pilot (e.g., for a manned aircraft) and/or replace an onboard pilot (e.g., for an unmanned aircraft). Seventh, variations of this technology can enable high speed and/or high accuracy natural language processing (NLP) of air traffic control (ATC) utterances by leveraging neural network models that were pre-trained on other datasets (e.g., pretrained models), then tuned to ATC-specific semantics. These ATC-tuned models can improve the speed/accuracy of the semantic parsing system in the context of noisy, multi-speaker ATC channels. These ATC-tuned models can also retain the broad ‘common sense’ comprehension of the pre-existing model and avoid overly biasing the semantic parsing system towards conventional ATC language, thus enabling the semantic parsing system to effectively respond to edge case scenarios or speech patterns which infrequently occur in ATC communications. However, variations of the technology can additionally or alternatively provide any other suitable benefits and/or advantages. 3. Semantic Parsing System The semantic parsing system 100, an example of which is shown in FIG. 1, can include: a Speech-to-Text module 120 and a question-and-answer (Q/A) module 130 (e.g., cooperatively the “semantic parser”). The semantic parsing system can optionally include a communication subsystem 110 and a flight processing system 140. However, the semantic parsing system 100 can additionally or alternatively include any other suitable set of components. The semantic parsing system 100 functions to determine flight commands 106 from an audio input 102 (e.g., received ATC radio transmission) which can be used for vehicle guidance, navigation, and/or control. In variants, the semantic parsing system 100 can optionally include or be used in conjunction with a collision avoidance system 200 (e.g., a first example is shown in FIG. 13; a second example is shown in FIG. 14; a third example is shown in FIG. 15) to facilitate directed perception and/or collision avoidance (e.g., in accordance with S300). The audio input 102 can include a unitary utterance (e.g., sentence), multiple utterances (e.g., over a predetermined window, such as 30 seconds, within a continuous audio stream, over a rolling window), periods of silence, a continuous audio stream (e.g., on a particular radio channel, such as based on a current aircraft location or dedicated ATC communication channel), and/or any other suitable audio input. In a first example, the audio input can be provided as a continuous stream. In a second example, a continuous ATC radiofrequency stream can be stored locally, and a rolling window of a particular duration (e.g., last 30 seconds, dynamic window which is sized based on previous utterance detections, etc.) can be analyzed from the continuous radiofrequency stream. The audio input is preferably in the form of a digital signal (e.g., radio transmission passed through an A/D converter and/or a wireless communication chipset), but can be in any suitable data format. In a specific example, the audio input is a radio stream from an ATC station in a digital format. In variants, the system can directly receive radio communications from an ATC tower and translate the communications into commands which can be interpreted by a flight processing system.
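Returning to the second example of audio input handling above, the rolling window over a locally stored stream could be kept as simply as in the sketch below; the sample rate and window length are illustrative assumptions.

```python
from collections import deque

class RollingAudioWindow:
    """Retain the last window_s seconds of a continuous ATC stream so the
    Speech-to-Text module can analyze a rolling window rather than the
    full stream. Sample rate and duration are assumed values."""
    def __init__(self, sample_rate_hz=8000, window_s=30.0):
        self.buf = deque(maxlen=int(sample_rate_hz * window_s))

    def push(self, samples):
        self.buf.extend(samples)

    def snapshot(self):
        return list(self.buf)   # audio handed to the ASR/SBD module

w = RollingAudioWindow()
w.push([0.0] * 16000)           # two seconds of silence at 8 kHz
print(len(w.snapshot()))        # 16000
```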
In a first ‘human in the loop’ example, a user (e.g., pilot in command, unskilled operator, remote moderator, etc.) can confirm and/or validate the commands before they are sent to and/or executed by the flight processing system. In a second ‘autonomous’ example, commands can be sent to and/or executed by the flight processing system without direct involvement of a human. However, the semantic parsing system 100 can otherwise suitably determine commands from an audio input. The semantic parsing system 100 is preferably mounted to, installed on, integrated into, and/or configured to operate with any suitable vehicle (e.g., the system can include the vehicle). The semantic parsing system 100 is preferably specific to the vehicle (e.g., the modules are specifically trained for the vehicle, the module is trained on a vehicle-specific dataset), but can be generic across multiple vehicles. The vehicle is preferably an aircraft (e.g., cargo aircraft, autonomous aircraft, passenger aircraft, manually piloted aircraft, manned aircraft, unmanned aircraft, etc.), but can alternatively be a watercraft, land-based vehicle, spacecraft, and/or any other suitable vehicle. In a specific example, the aircraft can include exactly one pilot/PIC, where the system can function as a backup or failsafe in the event the sole pilot/PIC becomes incapacitated (e.g., an autonomous co-pilot, enabling remote validation of aircraft control, etc.). The semantic parsing system 100 can include any suitable data processors and/or processing modules. Data processing for the various system and/or method elements preferably occurs locally onboard the aircraft, but can additionally or alternatively be distributed among remote processing systems (e.g., for primary and/or redundant processing operations), such as at a remote validation site, at an ATC data center, on a cloud computing system, and/or at any other suitable location. Data processing for the Speech-to-Text module and Q/A module can be centralized or distributed. In a specific example, the data processing for the Speech-to-Text module and the Q/A module can occur at a separate processing system from the flight processing system (e.g., are not performed by the FMS or FCS processing systems; the Speech-to-Text module and Q/A module can be decoupled from the FMS/FCS processing; an example is shown in FIG. 12), but can additionally or alternatively occur at the same compute node and/or within the same (certified) aircraft system. Data processing can be executed at redundant endpoints (e.g., redundant onboard/aircraft endpoints), or can be unitary for various instances of the system/method. In a first variant, the semantic parsing system can include a first natural language processing (NLP) system, which includes the Speech-to-Text module and the Q/A module, which can be used with a second flight processing system, which includes the flight processing system and/or communication systems (e.g., ATC radio). In a second variant, an aircraft can include a unified ‘onboard’ processing system for all runtime/inference processing operations. In a third variant, remote (e.g., cloud) processing can be utilized for Speech-to-Text operations and/or Q/A response generation. However, the semantic parsing system 100 can include any other suitable data processing systems/operations. The semantic parsing system 100 can optionally include a communication subsystem, which functions to transform an ATC communication (e.g., radio signal) into an audio input which can be processed by the ASR module.
Additionally or alternatively, the communication subsystem can be configured to communicate a response to ATC. The communication subsystem can include an antenna, radio receiver (e.g., ATC radio receiver), a radio transmitter, an A/D converter, filters, amplifiers, mixers, modulators/demodulators, detectors, a wireless (radiofrequency) communication chipset, and/or any other suitable components. The communication subsystem can include: an ATC radio, cellular communications device, VHF/UHF radio, and/or any other suitable communication devices. In a specific example, the communication subsystem is configured to execute S210. However, the communication subsystem can include any other suitable components, and/or otherwise suitably establish communication with air traffic control (ATC). The Speech-to-Text module of the semantic parsing system 100 functions to convert the audio input (e.g., ATC radio signal) into an utterance hypothesis 104, such as in the form of text (e.g., an ATC transcript) and/or alphanumeric characters. The utterance hypothesis is preferably a text stream (e.g., dynamic transcript), but can alternatively be a text document (e.g., static transcript), a string of alphanumeric characters (e.g., ASCII characters), or have any other suitable human-readable and/or machine-readable format. The Speech-to-Text module is preferably onboard the aircraft, but can additionally or alternatively be remote. The Speech-to-Text module is preferably an ATC-tuned Speech-to-Text module, which includes one or more models pre-trained on ATC audio data, but can additionally or alternatively include one or more generic models/networks and/or models/networks pre-trained on generalized training data (e.g., natural language utterances not associated with ATC communication). The Speech-to-Text module can include: an integrated automatic speech recognition (ASR) module 122, a sentence boundary detection (SBD) module 124, a language module 126, and/or other modules, and/or combinations thereof. In a specific example, the Speech-to-Text module can include an integrated ASR/SBD module 125. The Speech-to-Text module (and/or submodules thereof) can include a neural network (e.g., DNN, CNN, RNN, etc.), a cascade of neural networks, compositional networks, Bayesian networks, Markov chains, predetermined rules, probability distributions, attention-based models, heuristics, probabilistic graphical models, or other models. The Speech-to-Text module (and/or submodules thereof) can be tuned versions of pretrained models (e.g., pretrained for another domain or use case, using different training data), be trained versions of previously untrained models, and/or be otherwise constructed. In variants, a submodule(s) of the Speech-to-Text module (e.g., ASR module and/or SBD module) can ingest the audio input (e.g., audio stream, audio clip) and generate a set of linguistic hypotheses (e.g., weighted or unweighted), which can serve as an intermediate data format, such as may be used to audit the Speech-to-Text module, audit sub-modules/models therein, and/or select a unitary utterance hypothesis. The set of linguistic hypotheses can include overlapping/alternative hypotheses for segments of audio, or can be unitary (e.g., a single hypothesis for an individual audio segment or time period).
The set of linguistic hypotheses can include: utterance hypotheses (e.g., utterance hypothesis candidates), letters, word-segment streams, phonemes, words, sentence segments (e.g., text format), word sequences (e.g., phrases), sentences, speaker changes, utterance breaks (e.g., starts, stops, etc.), and/or any other suitable hypotheses. In variants where the audio stream includes multiple speakers/utterances, the set of linguistic hypotheses can additionally include an utterance boundary hypothesis which can distinguish multiple speakers and/or identify the initiation and termination of an utterance, with an associated weight and/or a speaker hypothesis (e.g., tag identifying a particular speaker, tag identifying a particular aircraft/tower). Additionally or alternately, the utterance boundary hypothesis can identify utterance boundaries and/or a change in speaker without identifying individual speaker(s). Each linguistic hypothesis preferably includes an associated weight/score associated with an utterance (and/or utterance boundary), assigned according to a relative confidence (e.g., statistical; such as determined using an ASR model, SBD model, and/or language model; etc.). The set of linguistic hypotheses is preferably ordered, sequential, and/or time-stamped in association with the receipt time, but can be otherwise suitably related. However, the Speech-to-Text module can generate, store, and/or output any other suitable set of hypotheses. As an example, the linguistic hypotheses can include a plurality of utterance hypotheses, wherein a single utterance hypothesis can be selected based on the generated set of utterance hypotheses. As a second example, a subset (e.g., complete set) of linguistic hypotheses, with a corresponding weight/score, can be output by the Speech-to-Text module. The Speech-to-Text module can include an ASR module which functions to extract linguistic hypotheses from the audio input. Using the audio input, the ASR module can determine a sequence of linguistic hypotheses, such as: letters, word-segment streams, phonemes, words, sentence segments (e.g., text format), word sequences (e.g., phrases), sentences, and/or any other suitable linguistic hypotheses (e.g., with a corresponding weight). The ASR module is preferably a neural network (e.g., Wav2Letter, Kaldi, Botium, etc.), but can alternatively be any other suitable model. In an example, a pretrained neural network can be tuned for ATC audio and/or trained using ATC audio (e.g., with an associated transcript). In a second example, the ASR module can include the ASR model trained by S110 and/or S120. In a specific example, the ASR module is configured to execute S220 of the method. The ASR module can optionally include an integrated SBD module. In variants where the ASR module outputs lower-level linguistic components (e.g., phonemes, phonetics, etc.), the semantic parsing system can optionally include auxiliary transformation modules (e.g., phoneme-to-word transformations) that convert the lower-level linguistic components to linguistic components compatible with the language module and/or other modules. The Speech-to-Text module can include an SBD module which functions to identify utterance boundaries and/or speaker changes for multi-utterance audio inputs.
Using the audio input, the SBD module can determine a sequence of linguistic hypotheses, such as: an utterance boundary hypothesis, a speaker hypothesis (e.g., tag identifying a particular speaker, tag identifying a particular aircraft/tower), and/or any other suitable hypotheses. The SBD module is preferably integrated with the ASR module (an example is shown in FIG. 10A), but can otherwise be separate from the ASR module, such as operating sequentially with the ASR module (e.g., passing a single utterance input into the ASR module, tagging outputs of the ASR module, etc.; examples are shown in FIGS. 10C-D) or in parallel with the ASR module (e.g., separately providing speaker change and/or utterance boundary annotations by way of time stamps, etc.; an example is shown in FIG. 10B). The SBD module is preferably a neural network (e.g., Wav2Letter, Kaldi, Botium, etc.), but can alternatively be any other suitable model. In an example, a pretrained SBD neural network can be tuned for ATC audio and/or trained using ATC audio (e.g., with an associated transcript). In a second example, an SBD neural network can be trained separately from the ASR module (e.g., using a distinct training set, using a training set including periods of radio silence and/or audio artifacts, etc.). In a third example, the SBD model can be tuned for ATC audio and/or trained using ATC audio, such as trained to identify silence speakers and/or utterance boundary characters (e.g., transition speakers, transition audio artifacts). However, the Speech-to-Text module can include any other suitable SBD module(s). The language module of the Speech-to-Text module functions to select an utterance hypothesis based on the set of linguistic hypotheses, which can then be passed into the Q/A module. The language module receives the set of linguistic hypotheses from the ASR module (e.g., phonemes, words, sentence subsets, etc.) and returns an utterance hypothesis associated with a single utterance (e.g., a sentence, a series of linguistic hypotheses, etc.). The language module preferably determines the utterance hypothesis purely from the linguistic hypotheses, but can alternatively or additionally ingest the audio input and/or other auxiliary data. Auxiliary data can include: an aircraft ID, contextual information (e.g., vehicle state, geographical position, ATC control tower ID and/or location, etc.), weather data, and/or any other suitable information. The utterance hypothesis is preferably text (e.g., a text string or utterance transcript), but can alternatively be a set of phoneme indexes, audio, or any suitable data format. The language module preferably selects an utterance hypothesis from the set of linguistic hypotheses by weighting the likelihood of various ‘sound-based’ language interpretations in the context of the entire utterance and/or ATC language patterns. In a first variant, the language module assigns language weights/scores to each utterance hypothesis using a neural network language model (e.g., an LSTM network, a CNN, FairSeq ConvLM, etc.) tuned for ATC language (e.g., neural network trained using ATC transcripts, etc.; such as a language model trained according to S140). In a second variant, the language module assigns language weights/scores according to a grammar-based language model (e.g., according to a set of heuristics, grammar rules, etc.). In a third variant, the language module can be tightly integrated with the ASR module.
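As a minimal sketch of the first-variant selection, the joint scoring of utterance hypotheses could combine the ASR/SBD score with the language-model score as below. The convex-combination weighting (alpha) and the toy language model are assumptions; the description only requires that a joint score be used.

```python
def select_utterance(hypotheses, lm_score, alpha=0.6):
    """Pick the utterance hypothesis with the best joint score, combining
    the ASR/SBD score with the ATC-tuned language-model score.
    `hypotheses` is a list of (text, asr_score) pairs."""
    def joint(h):
        text, asr_score = h
        return alpha * asr_score + (1.0 - alpha) * lm_score(text)
    return max(hypotheses, key=joint)[0]

# Toy stand-in language model favoring phraseology containing a call sign.
toy_lm = lambda t: 1.0 if "DAL456" in t else 0.2
cands = [("DAL456 climb and maintain five thousand", 0.70),
         ("Dell four five six climbing find thousand", 0.75)]
print(select_utterance(cands, toy_lm))   # language score outweighs raw ASR score
```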
In examples, a language model(s) can be used during the search, during the first pass, and/or during reranking. However, the language module can assign weights/scores in any other suitable manner. In a specific example, the language module is configured to execute S230 of the method. In an example, the Speech-to-Text module transforms an ATC audio stream into a natural language text transcript which is provided to the Q/A module, preserving the syntax as conveyed by the ATC speaker (e.g., arbitrary, inconsistent, non-uniform syntax). Alternatively, the speech-to-text module can include a neural network trained (e.g., using audio data labeled with an audio transcript) to output utterance hypotheses (e.g., one or more series of linguistic components separated by utterance boundaries) based on an audio input. However, the speech-to-text module can include: only an automated speech recognition module, only a language module, and/or be otherwise constructed. However, the semantic parsing system can include any other suitable Speech-to-Text module. The semantic parsing system 100 can include a question-and-answer (Q/A) module (example shown in FIG. 7), which functions to determine a set of commands from the selected hypothesis (e.g., text transcript) using a set of flight command queries. The Q/A module preferably receives an utterance hypothesis from the Speech-to-Text module in text, but can alternately receive audio and/or any other suitable inputs. The Q/A module preferably includes one or more Q/A models (e.g., BERT, BERT tuned to ATC applications, etc.), but can additionally or alternatively include a classifier or other model. The Q/A model is preferably a pre-trained language model tuned for ATC transcripts but can be untrained or have another format. The Q/A model can be: a convolutional neural network, a (pre-trained) large neural language model, bidirectional encoder representations from transformers (BERT), generative pre-trained transformer (GPT), and/or any other suitable language model. However, the Q/A module can include any other suitable neural language models. The Q/A module preferably answers a set of flight command queries (e.g., natural language queries). The flight command queries are preferably predetermined (e.g., manually determined, extracted from a command template, etc.), but can be dynamically determined. Flight command queries are preferably semantic queries in a human-readable format, but can additionally or alternatively be provided in a machine-readable format. The command queries are preferably natural language (“reading comprehension”), but can alternatively be vectors, tensors, and/or have another format. The set of flight command queries is preferably organized in a hierarchical structure (e.g., with parent-child query relationships), but can alternatively be organized in a serial structure, or be otherwise organized. The flight command queries can be organized in a list, a tree, or otherwise organized. In variants, flight command queries can be provided as a sequence/series of chained nodes (examples are shown in FIGS. 11A-C), each node corresponding to a predetermined query, wherein the nodes include a set of independent nodes and a set of dependent nodes, each dependent node linked to a specific answer/response (e.g., specific answer value) of a broader/higher-level parent semantic query (e.g., where queries have a finite set of answers or a closed range of answers).
Accordingly, dependent queries may be triggered in response to a determination of a predetermined answer at a higher-level linked node. Alternatively, the set of predetermined flight command queries can be provided synchronously or asynchronously in any suitable combination/permutation of series and/or parallel. The command queries can be configured to have binary answers (e.g., “yes”, “no”), discrete answers (e.g., letters, integers, etc.), continuous answers (e.g., coordinate values, etc.), and/or any other suitable type of answer value. Different types of commands can have different query structures. For example, high-criticality queries, such as aircraft identifiers, can be structured as binary queries. In another example, attributes with multiple potential answers can be structured as open-ended questions (e.g., “topics?”) instead of binary questions (e.g., “Does the utterance include heading?”, “Does the utterance include altitude?”). However, the queries can be otherwise structured. Examples of command queries include: whether the aircraft is the intended recipient of an utterance hypothesis, what or whether command parameters or topics (e.g., heading, altitude, etc.) are included in the utterance hypothesis, what or whether command parameter values (e.g., altitude direction, altitude level, etc.) are included in the utterance hypothesis, and/or other queries. In a first example, the Q/A module determines that the utterance is intended for the aircraft (e.g., Question: “Intended for DAL456?”; Answer: “yes”). In a second example, the Q/A module determines the topics of an utterance (e.g., Question: “Topics?”; Answer: “Heading, Altitude”). In a third example, the Q/A module determines the values associated with a topic of the utterance (e.g., Question: “Altitude values?”; Answer: “Direction: down, Level: 2000”). In an example, the Q/A module can be configured to execute S240. Based on the queries, the Q/A module outputs a set of flight commands, which can include guidance commands (e.g., navigational instructions; sequences of waypoints, approach landing site, etc.), vehicle state commands (e.g., instructions to modify vehicle state parameters, such as increase altitude to 5000 ft, etc.), effector state commands (e.g., effector instructions; deploy landing gear, etc.), flightpath commands (e.g., trajectory between waypoints, etc.), and/or any other suitable commands. The commands are preferably output in a prescribed format based on the answers generated by the Q/A module, such as a standardized human-readable format (e.g., allowing human validation) and/or a machine-readable format (e.g., allowing machine interpretation of the commands). In a first example, the commands can be provided as the union of the answers to the command parameter identification query and at least one command parameter value query (e.g., corresponding to the answer of the command parameter identification query). In a second example, the commands can be directly taken as a combination of each answer/response as generated by the Q/A module. Output commands are preferably text based and/or alphanumeric, but can be otherwise suitably provided (e.g., text-to-speech validation, etc.). In some variants, the commands can be post-processed according to any suitable heuristics, grammar rules, or formatting protocols, but can otherwise be provided to a pilot and/or flight processing system directly as the output of the Q/A module.
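As an illustration of the query-and-answer pattern described above, the following minimal sketch chains an off-the-shelf extractive question-answering pipeline over an utterance transcript, running dependent (child) queries only when a parent query's answer warrants them. The model name and query phrasings are placeholders, not the ATC-tuned Q/A model described in this document.

```python
from transformers import pipeline

# Placeholder model; the document describes a BERT-style model tuned on ATC
# transcripts, which is not publicly assumed here.
qa = pipeline("question-answering",
              model="distilbert-base-cased-distilled-squad")

transcript = ("delta four five six turn left heading two seven zero "
              "descend and maintain three thousand")

def ask(question: str) -> str:
    """Answer a single command query against the utterance transcript."""
    return qa(question=question, context=transcript)["answer"]

# Independent (parent) queries run first...
recipient = ask("Which aircraft is being addressed?")
topics = ask("What is the aircraft instructed to change?")

# ...and dependent (child) queries fire only for answers the parents returned.
command = {"recipient": recipient}
if "heading" in topics:
    command["heading"] = ask("What heading is assigned?")
if "descend" in topics or "altitude" in topics:
    command["altitude"] = ask("What altitude should the aircraft maintain?")
print(command)
```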
In a specific example, the Q/A module can convert an utterance hypothesis into a command in a standardized data format (e.g., as may be accepted/interpreted by a certified aircraft processor). In variants, the commands can include a substantially finite set of command parameters (e.g., altitude, heading, etc.) corresponding to a predetermined set of topics. Additionally, command parameters can be within substantially finite and/or bounded ranges (e.g., heading limited to compass directions, altitude limited by physical aircraft constraints, commands cooperatively limited by flight envelope, etc.). However, command parameters can additionally or alternatively be arbitrary, unbounded, and/or substantially unconstrained. However, the Q/A module can generate any other suitable commands. However, the semantic parsing system can include any other suitable Q/A module. The semantic parsing system 100 can optionally include and/or be used with a flight processing system, which functions to control various effectors of the aircraft according to the commands. The flight processing system can include an aircraft flight management system (FMS), a flight control system (FCS), flight guidance/navigation systems, and/or any other suitable processors and/or control systems. The flight processing system can control flight effectors/actuators during normal operation of the vehicle, takeoff, landing, and/or sustained flight. Alternatively, the flight processing system can be configured to implement conventional manual flight controls in a flight-assistive configuration. The semantic parsing system can include a single flight processing system, multiple (e.g., three) redundant flight processing systems, and/or any other suitable number of flight processing systems. The flight processing system(s) can be located onboard the aircraft, distributed between the aircraft and a remote system, remote from the aircraft, and/or otherwise suitably distributed. In a specific example, the flight processing system is configured to execute S250. In variants, the flight processing system can be configured (e.g., certified) to accept only a predetermined set of command inputs and/or inputs having a predetermined format, where the outputs of the Q/A model are provided in the predetermined format and/or are a subset of the predetermined set of commands. However, the semantic parsing system can include any other suitable components and/or be otherwise suitably configured to execute S200 of the method.

4. Semantic Parsing Method

The method, an example of which is shown in FIG. 2, can optionally include training the system components S100; and performing inference using the system S200. The method functions to automatically interpret flight commands from a stream of air traffic control (ATC) radio communications. The method can additionally or alternatively function to train and/or update a natural language processing system based on ATC communications.

4.1 Training

Training the system components S100 (example shown in FIG. 9) functions to generate an ATC-tuned system capable of interpreting ATC audio signals into flight commands. S100 can include training a Speech-to-Text model and training a question-and-answer (Q/A) model S150. S100 can optionally include generating augmented ATC transcripts S130. However, training the semantic parser S100 can include any other suitable elements.
S100 is preferably performed offline and/or by a remote computing system, but can alternatively be performed onboard the aircraft (e.g., locally, during flight, asynchronously with aircraft flight). Training the Speech-to-Text model functions to generate a transcription model that is specific to ATC communications, accounting for ATC-specific grammar, lexicon, speech patterns, and other idiosyncrasies. Training the Speech-to-Text model can include training an ASR model S110, training an SBD model S120, training a language model S140, and/or any other suitable elements. Training can include: tuning the network weights, determining weights de novo, and/or otherwise training the network. Training (and/or inference) can leverage: gradient-based methods (e.g., stochastic gradient descent), belief propagation (e.g., sum-product message passing, max-product message passing, etc.), and/or any other suitable training method. Training an automatic speech recognition (ASR) module S110 functions to train a neural network to recognize natural language in ATC communications. The ASR model is preferably trained (e.g., using supervised training, semi-supervised training) from a pre-existing ASR model (e.g., Wav2Letter), and can be ‘tuned’ by providing the neural network a mix (e.g., 50/50, 60/40, 70/30, predetermined mix, 100/0, etc.) of ATC training audio with corresponding ATC transcripts and the original training data (e.g., from the pre-existing model). An example is shown in FIG. 4. The ATC training audio with transcripts is preferably manually determined (e.g., by a human, by a domain expert), but can be verified/audited ATC communication audio/transcripts (e.g., generated from an existing ASR model), and/or otherwise determined. The ATC training audio can include a single utterance, multiple utterances, a stream of radio communication over an ATC communications channel, and/or any other suitable training audio. Preferably, utterances (e.g., statements from an individual speaker, sentences, etc.) are individually associated with a transcript as part of the training data. However, the ASR model can be otherwise trained for ATC speech recognition. Training a sentence boundary detection (SBD) module S120 functions to train the Speech-to-Text module to identify utterance boundaries (e.g., sentence segment boundaries, sentence boundaries). S120 can optionally train the Speech-to-Text module to differentiate unique utterances and/or utterances from different speakers/entities. S120 can train an existing ASR model (e.g., as determined in S110, which generates an integrated ASR/SBD model) or a separate model to generate the SBD module. Preferably, the SBD model can be trained using time-length concatenated audio, which includes a series of multiple utterances and periods of silence (e.g., periods of no speaking) therebetween, and the associated multi-utterance training transcripts. The ATC audio and transcripts used to train the SBD model can be the same as the ASR model and/or different from the ASR model. Multi-utterance training transcripts preferably include boundary annotations (e.g., with a unique boundary character or other identifier; using a ‘/’ or ‘%’ character; etc.) which can delineate unique speakers, unique utterances, breaks between utterances, periods of silence, audio artifacts (e.g., the “squelch” when the ATC speaker starts and/or stops broadcasting), and/or any other appropriate boundaries.
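The following is a minimal sketch of assembling such a multi-utterance SBD training example by concatenating single-utterance (audio, transcript) pairs with intervening silence and a boundary annotation. The ‘%’ boundary character follows the annotation scheme mentioned above; everything else (the data layout, silence durations, sample rate) is an illustrative assumption.

```python
import random
from typing import List, Tuple

BOUNDARY = " % "  # illustrative boundary character, per the scheme above

def make_sbd_example(clips: List[Tuple[list, str]],
                     n_utterances: int = 4,
                     max_silence_s: float = 2.0,
                     sample_rate: int = 16000) -> Tuple[list, str]:
    """Concatenate single-utterance (waveform, transcript) pairs into one
    multi-utterance training example: silence is inserted between clips,
    and a boundary annotation is inserted into the joined transcript."""
    audio, transcript_parts = [], []
    for waveform, text in random.sample(clips, n_utterances):
        audio.extend(waveform)
        transcript_parts.append(text)
        # Random inter-utterance silence (zeros), mimicking channel gaps
        n_silence = int(random.uniform(0.2, max_silence_s) * sample_rate)
        audio.extend([0.0] * n_silence)
    return audio, BOUNDARY.join(transcript_parts)
```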
Boundary annotations are preferably automatically added during transcript concatenation, but can be inserted manually, be determined from the audio, and/or otherwise added. In a specific example, the ASR model is trained by assigning a unique ‘silence speaker’ and/or a unique ‘transition speaker’ in the audio and/or transcript, which can be particularly advantageous for SBD in ATC radio communications, which commonly exhibit a characteristic radio “squelch” sound prior to an utterance. By assigning these segments of audio to a unique ‘transition speaker’ (or a ‘squelch speaker’), the SBD model can more accurately differentiate between back-to-back utterances (e.g., with minimal intervening silence), which commonly occur in noisy ATC radio channels. However, an SBD model can be otherwise trained. Training a language model S140 functions to train a language model to distinguish ATC linguistic patterns. In variants, the language model can determine whether a transcript is contextually correct/logical (e.g., syntactically correct, based on ATC grammar, etc.), determine a language/syntax score for a transcript, and/or otherwise determine whether a transcript makes sense. Preferably, S140 tunes a pre-existing language model (e.g., a convolutional neural network, FairSeq ConvLM, etc.), but can alternatively train an untrained language model. An existing language model can be tuned based on ATC transcripts, which can be single-utterance ATC transcripts, multi-utterance ATC transcripts, and/or boundary-annotated ATC transcripts (e.g., such as those used to train the SBD model in S120); however, the language model can be trained using any suitable ATC transcripts. S140 preferably does not train on the ATC audio, but can alternatively train on the ATC audio. In variants, the language model can be trained using entity-tagged ATC transcripts, which identify ATC-specific entities within the transcript. Tagged entities can include: carriers, aircraft, waypoints, airports, numbers, directions, and/or any other suitable entities. Entity tags can be assigned manually, automatically (e.g., unsupervised), with a semi-supervised HMM tagger (e.g., using a domain expert evaluation tool, etc.), and/or in any other suitable manner. A single word or phrase appearing in a transcript can be assigned to multiple entities depending on the context in which it appears (i.e., the entity tag lexicon can include multiple phonetically and/or lexicographically conflicting entities which are pronounced and/or spelled substantially identically). In an example, “Southwest” can be tagged as (and/or communicate) a direction or a carrier depending on the context in which it appears. Likewise, in a second example, “delta” can be tagged as part of an aircraft name (e.g., DAL456 = “delta alpha lima four five six”), a carrier, and/or untagged (e.g., referring to a change in value or parameter) depending on the context in which it appears. In a third example, “Lima” can be an airport, a waypoint, part of an aircraft name, and/or otherwise tagged. In a fourth example, waypoints can be pronounced substantially identically (e.g., “ocean”) while corresponding to different waypoint entities depending on the context in which they appear. However, the language model can be trained with any other suitable transcripts and/or information. In variants, a portion of the training text provided to train the language model is the same as that used to originally train the pre-existing language model (e.g., FairSeq ConvLM).
Accordingly, the language model can be ‘tuned’ by providing the neural network a mix (e.g., 50/50, 60/40, 70/30, predetermined mix, etc.) of ATC training transcripts and the original training data (e.g., from the pre-existing model). However, a language model can be otherwise trained for ATC linguistic patterns. S100 can optionally include generating augmented ATC transcripts S130 (e.g., synthetic transcripts), which functions to expand the number/quantity of ATC training transcripts available to train the language model in S140, an example of which is shown in FIG. 5. In variants, this can be beneficial in order to provide training transcripts specific to areas/regions where entities are known (e.g., airport names, waypoints, carriers, etc.), but from which ATC transcripts are unavailable. Additionally or alternatively, S130 can improve the accuracy of the language model by increasing the size of the training dataset (e.g., the number of available utterance transcripts). S130 preferably substitutes the values of tagged entities (e.g., within the entity-tagged ATC transcripts) with different entity values from an ATC entity lexicon. The ATC entity lexicon can be manually generated, generated by a domain expert (e.g., a pilot), randomly generated (e.g., number substitution), generated using historical flight logs, aircraft databases, or airport databases, and/or otherwise generated. In variants, the augmented ATC transcripts can preferentially (e.g., at a higher rate; with greater frequency; occurring with greater than a threshold number of instances, such as 3 or more within the training set) substitute phonetically and/or lexicographically conflicting entity names (e.g., which are identified by multiple tags in different contexts), such as “southwest” and “delta.” The augmented ATC transcripts can then be used to train the language model in S140 and/or the question-and-answer model in S150 (e.g., an example of training an ATC-tuned language model is shown in FIG. 5). However, ATC transcripts can be otherwise generated. Alternatively, the semantic parsing system (and/or neural network models therein) can be trained entirely with real ATC communication transcripts. S100 can include training a question-and-answer (Q/A) module S150, which functions to train a model to answer ATC-specific queries. S150 preferably includes tuning a pre-trained language model, but can include training an untrained model. The language model can be trained using: an ATC transcript, the associated parsed meaning (e.g., reference outputs; answers to the queries; values for command parameters determined from the ATC transcript, etc.), the set of command queries, and/or other data. In variants, S150 can also provide the language model contextual information pertaining to a particular utterance, such as a tail number or carrier for a particular aircraft, a flight plan for the aircraft, a set of utterance transcripts preceding the particular utterance, and/or any other suitable contextual information. The text transcripts used to train the Q/A model can be the same ATC transcripts used to train the ASR and/or SBD model, the same ATC transcripts (and/or augmented ATC transcripts) used to train the language model, the utterance hypotheses output by the Speech-to-Text module, and/or other transcripts. However, the Q/A model can be trained using any suitable ATC transcripts.
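Returning to the transcript augmentation of S130, the following is a minimal sketch of entity-value substitution over an entity-tagged transcript. The lexicon contents and the tagged-token representation are illustrative assumptions; a production lexicon would be built from the airport/aircraft databases and flight logs noted above.

```python
import random

# Illustrative lexicon; a real one would be built from airport/aircraft
# databases, historical flight logs, or domain-expert input.
ATC_LEXICON = {
    "carrier": ["delta", "southwest", "united"],
    "waypoint": ["ocean", "lima", "boston"],
    "number": [str(n) for n in range(10)],
}

def augment(tagged_tokens):
    """Replace each (token, entity_tag) pair with a random same-type value.
    Untagged tokens (tag=None) pass through unchanged."""
    out = []
    for token, tag in tagged_tokens:
        out.append(random.choice(ATC_LEXICON[tag]) if tag else token)
    return " ".join(out)

original = [("southwest", "carrier"), ("five", "number"), ("two", "number"),
            ("proceed", None), ("direct", None), ("ocean", "waypoint")]
print(augment(original))  # e.g. "united 3 7 proceed direct lima"
```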
The parsed meaning used to train the Q/A model can be: manually determined, manually audited by a domain expert, provided by a grammatical semantic parser (e.g., SEMPRE, a lower-accuracy parser than the system, a previous iteration of the system, etc.; an example is shown in FIG. 6) referencing ATC grammar (e.g., manually determined, iteratively determined, learned, etc.), and/or otherwise suitably determined. In a specific example, a grammatical semantic parser parses the command parameter values from the ATC transcripts, wherein the parsed values (e.g., command hypotheses), source transcript, optionally ATC audio, and/or other data are presented on a domain evaluation tool (an example is shown in FIG. 8) to domain experts. The domain expert can: label the model output (e.g., as “correct,” “incomplete,” “incorrect,” etc.), correct the parsed values, and/or otherwise interact with the parser output. In variants, reference outputs labeled as “incorrect” and/or “incomplete” can be reviewed and used to update or improve grammar rules of a grammatical semantic parser. In variants, reference outputs labeled “incorrect” are not used to train the Q/A model, but can alternatively be used to train the Q/A model (e.g., the “incorrect” label serving to train by counterexample). In variants, reference outputs which are labeled as “correct” and/or “incomplete” can be passed into the Q/A model during S150. In variants, incomplete label data can be used to train a subset of queries associated with a particular utterance (e.g., based on the correctly labeled portions of the transcript). As an example, where the parameter values are unlabeled but the topics are identified, the topics may be used to train a command identification (e.g., “topics?”) query. Likewise, where the aircraft tail number is tagged/identified, incomplete label data can be used to train the plane-specific speaker identification query(ies). However, the labels can be otherwise used, and model outputs can be otherwise suitably determined. However, a question-and-answer model can be otherwise suitably trained. In variants, the ASR model, SBD model, language model, and/or Q/A model can be optionally retrained and/or updated based on pilot/PIC validation with any suitable update frequency. The models can be updated/retrained independently, synchronously, asynchronously, periodically (e.g., with a common update frequency, with different frequencies), never (e.g., which may be desirable in instances where the deterministic model(s) are certified), based on auditing of the intermediate outputs, and/or can be otherwise suitably updated or trained. The models can be updated locally, onboard the aircraft, periodically via remote/cloud (push) updates, and/or can be otherwise suitably updated/retrained. In variants, the model(s) can be audited based on a pilot rejection of the final output parameters in order to locate error origin(s) within the data pipeline (e.g., as part of a root cause analysis), which can be used as a training input to improve the network. As an example, an erroneous intermediate parameter (such as in the utterance hypothesis or linguistic hypothesis) can result in an incorrect output of the Q/A module even in cases where the Q/A module performs correctly. In variants, the outputs of each model/module can additionally be audited against a formatting template prescribed to each step (e.g., to enable certification compliance of the system). However, the system and/or various subcomponents can be otherwise suitably audited.
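As a sketch of how the expert labels above can gate the Q/A training data, the following partitions audited parser outputs so that “correct” rows train all queries, “incomplete” rows train only the queries their labeled fields support, and “incorrect” rows are held out for grammar review. The row fields used here are illustrative assumptions.

```python
def select_training_rows(rows):
    """Partition expert-audited parser outputs for Q/A training, following
    the labeling scheme above. Each row is assumed (for illustration) to
    carry at least a 'transcript' and a 'label' field."""
    full, partial, held_out = [], [], []
    for row in rows:
        if row["label"] == "correct":
            full.append(row)
        elif row["label"] == "incomplete":
            # e.g. topics identified but parameter values unlabeled:
            # still usable to train the "topics?" query
            partial.append({k: row[k] for k in ("transcript", "topics")
                            if k in row})
        else:  # "incorrect": held out (optionally routed to grammar review)
            held_out.append(row)
    return full, partial, held_out
```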
However, the system components can be otherwise suitably trained.

4.2 Runtime/Inference

S200 can include: at an aircraft, receiving an audio utterance from air traffic control S210, converting the audio utterance into a predetermined format S215, determining commands using a question-and-answer model S240, and controlling the aircraft based on the commands S250. However, the method S200 can additionally or alternatively include any other suitable elements. S200 functions to automatically interpret flight commands from the air traffic control (ATC) stream. The flight commands can be automatically used to control aircraft flight; presented to a user (e.g., a pilot, a remote teleoperator); relayed to an auto-pilot system in response to a user (e.g., pilot) confirmation; and/or otherwise used. All or portions of S200 can be performed continuously, periodically, sporadically, in response to receipt of a radio transmission, during aircraft flight, in preparation for and/or following flight, at all times, and/or with any other timing. S200 can be performed in real- or near-real time, or asynchronously with aircraft flight or audio utterance receipt. S200 is preferably performed onboard the aircraft, but can alternatively be partially or entirely performed remotely. Receiving an audio utterance from air traffic control S210 functions to receive a communication signal at the aircraft and/or convert the communication signal into an audio input, which can be processed by the ASR module. In a specific example, S210 transforms an analog radio signal into a digital signal using an A/D converter (and/or other suitable wireless communication chipset), and sends the digital signal to the ASR module (e.g., via a wired connection) as the audio input. S210 preferably monitors a single radio channel (e.g., associated with the particular aircraft), but can alternatively sweep multiple channels (e.g., to gather larger amounts of ATC audio data). However, S210 can otherwise suitably receive an utterance. Converting the audio utterance into a predetermined format S215 functions to generate a transcript from the ATC audio. This can be performed by the Speech-to-Text module or another system component. Converting the audio utterance into a predetermined (e.g., text) format can include: determining a set of utterance hypotheses for an utterance S220 and selecting an utterance hypothesis from the set of utterance hypotheses S230; however, the ATC audio can be otherwise converted. Determining a set of utterance hypotheses for an utterance S220 functions to identify audio patterns (e.g., such as letters, phonemes, words, short phrases, etc.) within the utterance. In a specific example, S220 can be performed by the Speech-to-Text module, an ASR module (and/or an ASR model therein), an integrated ASR/SBD module (e.g., with an integrated ASR/SBD model therein), a language module, and/or combinations thereof. S220 can optionally include assigning a weight or score to each audio pattern (a.k.a. linguistic hypothesis) using the ASR module and/or other modules. An utterance hypothesis can be: a linguistic hypothesis, a series of linguistic hypotheses, and/or any other suitable hypothesis. In a first variation, an ASR and/or integrated SBD/ASR module generates a set of linguistic hypotheses, wherein a language module receives the linguistic hypotheses and generates a score (e.g., an ASR score; same or different from the language weight/score) for each string or sequence of linguistic hypotheses.
One or more linguistic hypothesis sets can be generated from the same audio clip. The SBD/ASR module can also output a score (ASR score or ASR weight) for each linguistic hypothesis, sequence of hypotheses, and/or set of linguistic hypotheses. However, the set of utterance hypotheses can be otherwise determined. Selecting an utterance hypothesis from the set of utterance hypotheses S230 functions to detect language patterns from the set of linguistic hypotheses in the context of the entire utterance. Additionally or alternatively, S230 can function to select the highest-probability string/sequence of linguistic hypotheses as the utterance hypothesis. S230 can be performed by the language module, the Q/A module, and/or another module. In a first variation, the language module can select the string or sequence of linguistic hypotheses which has the highest combined language weight (or score) and ASR weight (or score) as the utterance hypothesis. In a second variation, multiple modules' outputs are cooperatively used to select the utterance hypothesis. For example, the utterance hypothesis with the highest combined hypothesis score and/or maximum hypothesis weight cooperatively determined by the language model and the integrated ASR/SBD model is selected. In a first example, the utterance hypothesis which maximizes the language weight multiplied by the ASR weight for an utterance is selected. In a second example, the hypothesis which maximizes the sum of the language score and the ASR score for an utterance is selected. However, the utterance hypothesis can be otherwise selected. Determining commands from the utterance hypothesis using a question-and-answer model S240 functions to extract flight commands from the utterance hypothesis, which can be interpreted and/or implemented by a flight processing system. S240 is preferably performed by one or more instances of the Q/A module, but can be performed by another component. S240 is preferably performed using the set of flight command queries and the utterance hypothesis, but can be otherwise performed. S240 can include providing the Q/A module with a set of command queries in addition to the utterance hypothesis as an input, wherein the Q/A module answers the command queries using the utterance hypothesis as a reference text. In a first embodiment, the queries are provided serially, wherein the successive query is determined based on the prior answer. The query series can be determined from the command query set structure (e.g., list, tree, etc.), randomly determined, or otherwise determined. In a first specific example, S240 includes querying for topic presence within the utterance hypothesis, then querying for values only for the topics confirmed to be within the utterance. In a second specific example, S240 initially determines whether the aircraft (and/or pilot) is the intended recipient of the utterance (associated with the utterance hypothesis), and only queries further if the utterance is intended for the aircraft/pilot (e.g., utterances not intended for the aircraft/pilot are ignored and/or any commands therein are not passed to the flight processing system; utterances corresponding to transition speaker detections can be neglected; etc.). Alternatively, the Q/A model (or different versions or instances thereof) can be queried with multiple queries in parallel or can be otherwise queried. In a second variant, the Q/A module includes pre-embedded queries, wherein the Q/A module answers a predetermined set of questions based on the utterance hypothesis.
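Stepping back to the hypothesis selection of S230, the following minimal sketch implements the two combination examples above: multiplying probability-like (non-negative) weights, or summing log-domain scores (with log-probabilities, maximizing the sum is equivalent to maximizing the product of the underlying probabilities). The scores shown are illustrative.

```python
def select_hypothesis(hypotheses, combine="sum"):
    """Pick the utterance hypothesis with the best combined ASR and
    language-model score. Each hypothesis is (text, asr_score, lm_score)."""
    if combine == "sum":        # second example: log-domain scores
        key = lambda h: h[1] + h[2]
    elif combine == "product":  # first example: probability-like weights
        key = lambda h: h[1] * h[2]
    else:
        raise ValueError(combine)
    return max(hypotheses, key=key)

# Illustrative log-domain scores: the LM penalizes the second,
# phonetically similar but less ATC-plausible, reading.
hypotheses = [
    ("delta four five six descend to two thousand", -3.1, -2.4),
    ("delta four five six descend two two thousand", -3.0, -4.9),
]
print(select_hypothesis(hypotheses)[0])
```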
As an example of the pre-embedded-query variant, the Q/A module can be a multi-class classifier that outputs values, determined from the utterance hypothesis, for each of a set of “classes,” wherein each class represents a command parameter. However, S240 can otherwise suitably determine command parameter values. In some variants, the Q/A module can be further utilized (e.g., by S240 or a similar process) to determine: flight changes, traffic advisories, and/or any other suitable ATC communications/instructions (e.g., where the aircraft is the intended recipient or otherwise). For example, S240 can additionally or alternatively determine traffic advisories, traffic alerts, ATC instructions/directions, and/or any other suitable instructions. However, commands and/or other instructions for the aircraft can be otherwise suitably determined using the Q/A model. S200 can optionally include controlling the aircraft based on the commands S250, which functions to modify the aircraft state according to the utterance (e.g., ATC directives). In a specific example, S250 autonomously controls the effectors and/or propulsion systems of the aircraft according to the commands (e.g., to achieve the commanded values). In a second example, the flight processing system can change waypoints and/or autopilot inputs based on the commands. In variants, S200 can include providing the commands to a flight processing system (e.g., FCS) in a standardized format (e.g., a standardized machine-readable format). However, S250 can otherwise suitably control the aircraft based on the commands. Alternatively, the system can be used entirely in an assistive capacity (e.g., without passing commands to an aircraft processor or controlling the aircraft, such as to enable control of an aircraft by a hearing-impaired pilot), and/or can be otherwise used. However, S200 can include any other suitable elements.

5. Directed Perception

5.1 System

The collision avoidance system 200, an example of which is shown in FIG. 14, can include: a traffic detection module 210, an avoidance module 220, and/or any other suitable components. The collision avoidance system can function to facilitate detection of objects (e.g., traffic obstacles, other aircraft, etc.) and/or determination of resolution advisories (e.g., via input from a collision avoidance system, such as TCAS or ACAS, and/or autonomous trajectory planning) to avoid aircraft collisions with objects. In variants (e.g., an example is shown in FIG. 15), the collision avoidance system 200 (and/or the traffic detection module and/or the collision avoidance module thereof) can be integrated into the lower assurance system and/or as a computing module of the autonomous computing system as described in U.S. application Ser. No. 17/891,845, filed 19 Aug. 2022, which is incorporated herein in its entirety by this reference. The collision avoidance system 200 can receive inputs from the aircraft sensor suite and/or perception sensors thereof (e.g., camera, radar, lidar, time-of-flight sensors, etc.).
The (onboard) aircraft sensor suite can include one or more: time-of-flight sensors (e.g., radar, LIDAR, time-of-flight camera, etc.), radar sensors (e.g., radar altimeter, etc.), LIDAR sensors, sonar sensors, cameras (e.g., RGB, CCD, CMOS, multispectral, visual range, hyperspectral, infrared, stereoscopic, etc.), wave-based sensors (e.g., light waves, sound waves, etc.), light-based sensors (e.g., cameras, visual spectrum, IR spectrum, radio-frequency spectrum, etc.), spatial sensors (e.g., inertial measurement sensors, IMU, INS, accelerometer, gyroscope, altimeter, magnetometer, AHRS, compass, etc.), location sensors (e.g., GPS, GNSS, triangulation, trilateration, etc.), air data sensors (e.g., airspeed, pressure, temperature, etc.), force sensors, vibration sensors, and/or any other suitable set of sensors. Additionally, the collision avoidance system 200 can receive traffic advisories and/or other commands, advisories, alerts, or other information from the semantic parsing system 100. For example, the semantic parsing system can provide traffic advisories (or traffic alerts) which can be used to direct perception via the collision avoidance system 200 (e.g., in accordance with S300). Additionally, the collision avoidance system can optionally receive inputs from auxiliary data sources 230, which can include: historical flight information (e.g., flight logs, tracking data, aggregate routing information, aggregate traffic information, flight traffic heatmaps, etc.), transponder data (e.g., Mode C and Mode S transponders; TCAS, ACAS, etc.), TCAS data, ACAS data, ADS-B data (e.g., from an onboard ADS-B system), ground-based aircraft tracking data (e.g., from ground-based radar/localization), and/or any other suitable datasets. In variants, historical flight information can be accessed from public databases and/or stored locally onboard the aircraft (e.g., in conjunction with a flight plan, prior to departure, etc.). As an example, historical flight information and/or other auxiliary data can be stored onboard the aircraft (e.g., in conjunction with a flight plan or otherwise) in conjunction with the system and/or methods as described in U.S. application Ser. No. 17/674,518, filed 17 Feb. 2022, which is incorporated herein in its entirety by this reference. However, the collision avoidance system can receive any other suitable set of data/inputs from any other aircraft systems or data sources. The collision avoidance system can include a traffic detection module 210, which functions to detect traffic using data from the perception sensors. As an example, the traffic detection module can execute Block S320 of the directed perception method. The traffic detection module can include one or more: pretrained object detectors (e.g., pretrained for a specific class of aircraft, detection range, etc.), neural networks (e.g., CNN, R-CNN, FCN, YOLO, etc.), graphical models (e.g., Bayesian network), logistic regressions, clustering algorithms, feature detectors (e.g., ORB, SIFT, etc.), histogram of gradients (HOG), single shot detector (SSD), spatial pyramid pooling (SPP-net), and/or any other suitable object detector(s). In variants, the object detector can include a classifier (e.g., binary classifier, multiclass classifier, etc.) and/or can function to classify detected objects.
The object detector can include: an integrated object detector/classifier, a binary classifier, a multi-class classifier, a clustering model (e.g., hierarchical clustering model), a regression model, a neural network model (e.g., R-CNN, DNN, CNN, RNN, etc.), a cascade of neural networks, an ensemble of neural networks, compositional networks, Bayesian networks, Markov chains, predetermined rules, probability distributions, heuristics, probabilistic graphical models, and/or other model(s). However, the object detector can include any other suitable model(s). In variants, the traffic detection module can perform continuous traffic detection (e.g., in conjunction with a continuous/persistent collision avoidance routine) and/or can continuously search for objects (or obstacles) in the surrounding airspace environment using a stream of perception data from the aircraft sensor suite (and/or perception sensors thereof). For example, a collision avoidance routine (e.g., close range, such as less than 2 nautical miles, etc.) can be performed in substantially real-time using the vehicle perception data based on a coarse analysis of the aircraft perception data (e.g., downsampling images, binned images, lower resolution search, closer range search, search for larger apparent objects, etc.; lower resolution point cloud, etc.). In variants, the traffic detection module can additionally or alternatively facilitate directed perception/searches based on inputs received from the semantic parsing system (e.g., traffic advisories) and/or auxiliary data sources 230 (e.g., ADS-B data, historical traffic information, etc.). For example, directed perception/searches can be performed discretely, such as in response to an ATC request (e.g., traffic advisory; request for identification of an aircraft in proximity), a pilot request (e.g., via a pilot validation interface, such as a pilot communicating an ATC traffic advisory and/or manually providing a request, etc.), and/or with any other frequency/timing. As an example, the traffic detection system can perform (extended-range) directed perception contemporaneously with a closer-range collision avoidance routine (e.g., and/or a closer-range traffic detection routine thereof), wherein the closer-range collision avoidance routine is performed in substantially real-time using the vehicle perception data and is based on a coarser analysis of the vehicle perception data than the directed object detection search. In variants, the traffic detection routine can be performed: continuously, discontinuously, discretely, in response to a determination of a traffic advisory (e.g., via the semantic parsing system 100), in response to an operator request (e.g., ATC communication, pilot input, remote operator, etc.), once (e.g., searching for a particular aircraft/object), periodically, aperiodically, repeatedly, in response to satisfaction of a trigger condition, contemporaneously with another collision avoidance routine (e.g., extended-range traffic detection and/or collision avoidance can occur contemporaneously with close-range traffic detection and/or collision avoidance), and/or with any other suitable frequency/timing. In some variants, perception data (e.g., camera images; point clouds; etc.) can be prefiltered and/or preprocessed to eliminate distorted and/or saturated regions of perception data. For example, saturated pixels (e.g., washed out due to sunlight, etc.)
can be masked, filtered out of the image, and/or otherwise neglected from consideration during object detection/classification, which may be useful when objects are large relative to pixel resolution and/or when perception data is downsampled (e.g., during closer-range object detection), to improve the quality of image data. Additionally or alternatively, saturated pixels can be used to direct and/or seed extended-range searches (e.g., such as by incorporating saturated pixels into heuristic or tree-based searches/classification), and/or otherwise used by extended-range object detection/classification. For example, the ‘glint’ of sunlight reflecting from a windscreen or piece of metal may be a distinct indicator of the presence of a distant aircraft. As the apparent size of an adjacent aircraft grows small (e.g., which is particularly relevant for extended-range detection, such as between 2 and 5 nautical miles), the sunlight glint and the corresponding pixel saturation may be a strong(est) indicator of aircraft presence. Thus, variants can advantageously direct extended-range searches by including these pixels and/or directing traffic (object) detection within the surrounding region of pixels (and corresponding airspace). However, perception data can be otherwise filtered/processed as a part of object detection/classification. The traffic detection module can output: object (traffic) detections/identifications (e.g., geospatial location; ego-relative position; bounding box; etc.), an object identifier (e.g., aircraft tail number, instance ID), an object class (e.g., type of aircraft; type of object; size of aircraft; etc.), a probability (e.g., identification confidence; classification probability; etc.), and/or any other suitable object/traffic information. However, the collision avoidance system can include any other suitable traffic detection module. The collision avoidance system 200 can include an avoidance module 220, which functions to resolve potential collision conflicts. Additionally or alternatively, the avoidance module can enable determination of resolution advisories. Additionally or alternatively, the avoidance module can facilitate aircraft navigation based on traffic detections (and/or failure to detect adjacent aircraft) by the traffic detection module 210. The avoidance module preferably receives outputs (e.g., traffic detections) from the traffic detection module 210 and acts to resolve potential collision conflicts (i.e., avoid collisions) based on the outputs from the traffic detection module. Additionally, the avoidance module can receive outputs from the semantic parsing system (e.g., aircraft advisories, commands, traffic advisories, etc.) to facilitate routing and collision avoidance. The collision avoidance module can include: classically programmed collision avoidance model(s), an ML model (e.g., neural networks, etc.), a heuristic model, a tree-based model, and/or can facilitate collision avoidance by a set of predetermined rules, heuristics, or other techniques. The collision avoidance module can respond by performing (or directing) one or more actions in accordance with Block S330, and/or can otherwise facilitate collision avoidance.
In some variants, the avoidance module can facilitate tracking of traffic (e.g., in the immediate airspace, which may facilitate aircraft routing and collision avoidance; via an object tracking sub-module) and/or can operate without object (traffic) tracking (e.g., in cases where observing an aircraft proximal to an expected position may be sufficient to avoid a collision, such as at the request of ATC; where the traffic detection module can perform object tracking, etc.). In a first set of variants, the avoidance module can include an autonomous navigation/routing engine which can autonomously determine resolution advisories based on the traffic detections, wherein the resolution advisories can be validated by a pilot (e.g., via a pilot validation interface). In a second set of variants, the avoidance module can facilitate automated audio requests for collision avoidance guidance (e.g., via the system 100 and/or an ATC radio), wherein ATC may redirect the aircraft to avoid a potential collision. In a third set of variants, the avoidance module can direct pilot intervention. However, the collision avoidance system 200 can include any other avoidance module(s); and/or the collision avoidance system can be otherwise configured.

5.2 Method

The method S300 for directed perception, an example of which is shown in FIG. 16, can include determining a traffic advisory S310, locating an object associated with the traffic advisory S320, and performing an action based on the location of the object S330. The method functions to automatically facilitate directed perception based on traffic advisories (e.g., by interpreting communications from a stream of ATC radio communications). The method can additionally or alternatively function to facilitate autonomous and/or automatic detection of objects (e.g., other aircraft) based on ATC communications. However, the method S300 can otherwise facilitate directed perception. Determining a traffic advisory S310 functions to determine a traffic advisory based on ATC communications (e.g., an example is shown in FIG. 17). For example, semantic parsing of an ATC audio input (and/or an utterance hypothesis derived thereof) via the semantic parsing system 100 can be used to determine a traffic advisory, wherein the ego aircraft is the intended recipient (e.g., based on an ego call sign and/or tail number). In such cases, the ATC audio may be associated with an air traffic controller alerting the aircraft to the presence of an object (e.g., aircraft, large tower/bridge, etc.) and/or requesting that the ego aircraft confirm a visual on the adjacent object. Traffic advisories preferably refer to other aircraft, but can reference any suitable objects or visual references (e.g., terrain features, etc.). S310 is preferably performed using the semantic parsing system 100 and/or by executing all or a portion of S200. Additionally or alternatively, S310 can include or be based on: validation of the advisory (e.g., by a pilot onboard the ego aircraft, by a remote operator, etc.), pilot inputs (e.g., via a pilot validation interface), and/or any other suitable information/inputs.
The traffic advisory (and/or information output from the semantic parsing system 100, as derived from the radio utterance) can include object information, such as: an object identifier (e.g., aircraft tail number, call sign, etc.; a proper noun/name such as “Needham Towers” or “Air Force One”), an object class (e.g., bridge, aircraft type, etc.), a position estimate (e.g., position information such as an Earth-referenced position estimate, ego-relative position estimate, altitude estimate, ego-relative altitude estimate, ego-relative heading position; “eleven o'clock, two-thousand feet above”), object movement information (e.g., absolute or relative heading, speed, airspeed, etc.), and/or any other suitable object information. In a first variant, S310 can include: receiving an audio signal; determining an utterance hypothesis for the audio signal; and autonomously determining a traffic alert based on the utterance hypothesis. In a second variant, nonexclusive with the first, S310 can include: receiving an air traffic control (ATC) audio signal from a communication system; determining an utterance hypothesis from the ATC audio signal with automatic speech recognition (ASR); and autonomously determining a traffic advisory by querying the utterance hypothesis with a pre-trained neural network model, the traffic advisory comprising an estimated ego-relative position of an object. As an example, determining the utterance hypothesis from the ATC audio signal can include: with the integrated ASR and sentence boundary detection (SBD) module, generating a set of linguistic hypotheses based on the ATC audio signal; using an ATC-tuned language model, determining a respective language score for each linguistic hypothesis of the set of linguistic hypotheses; and determining the utterance hypothesis from the set of the linguistic hypotheses based on the respective language scores. However, the traffic advisories can be otherwise suitably determined. Locating an object associated with the traffic advisory S320 functions to detect, identify, and/or determine a location (i.e., confirm a position estimate within threshold accuracy) of traffic, such as nearby aircraft, to facilitate aircraft routing and collision avoidance. S320 is preferably performed with perception data collected onboard the aircraft (e.g., via the sensor suite and/or perception sensors thereof; LIDAR point clouds, camera images, radar data cubes, etc.), but can additionally utilize auxiliary data sources and/or any other suitable data. S320 is preferably performed by the collision avoidance system and/or a traffic detection module thereof, but can be otherwise suitably executed with any other aircraft (sub-)systems. In one set of variants, the collision avoidance system can perform substantially continuous object detection/tracking (e.g., close-range detection/tracking; within 2 nautical miles) across the surrounding airspace. For example, a collision avoidance system can perform real-time (or near-real-time) traffic detection/avoidance with a coarse analysis of perception data (e.g., down-sampled data; a fast object detection model which utilizes less compute, etc.). Additionally or alternatively, persistent object detection/tracking can be used to validate ADS-B position estimates, and/or fused with ADS-B (and/or other auxiliary data) to provide persistent position estimates for a set of aircraft and/or objects in the surrounding airspace (e.g., and/or near the planned flightpath).
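As an illustration of the position information such an advisory carries, the following minimal sketch converts a clock-position callout (e.g., “eleven o'clock, two thousand feet above”) into a rough ego-relative position estimate. The phrasing, units, and coordinate convention are illustrative assumptions.

```python
import math

CLOCK_WORDS = {"twelve": 12, "one": 1, "two": 2, "three": 3, "four": 4,
               "five": 5, "six": 6, "seven": 7, "eight": 8, "nine": 9,
               "ten": 10, "eleven": 11}

def clock_to_bearing(clock_word: str) -> float:
    """Map a clock position to an ego-relative bearing in degrees
    (0 = dead ahead, positive clockwise)."""
    hour = CLOCK_WORDS[clock_word]
    return (hour % 12) * 30.0

def advisory_to_estimate(clock_word: str, relative_alt_ft: float,
                         range_nm: float) -> dict:
    """Rough ego-relative position estimate from an ATC callout."""
    bearing = math.radians(clock_to_bearing(clock_word))
    return {
        "x_nm": range_nm * math.sin(bearing),   # right of ownship
        "y_nm": range_nm * math.cos(bearing),   # ahead of ownship
        "alt_offset_ft": relative_alt_ft,
    }

# "Traffic, eleven o'clock, three miles, two thousand feet above"
print(advisory_to_estimate("eleven", +2000, range_nm=3.0))
```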
In the continuous-tracking variants above, in cases where the object has already been, and/or is currently, detected/tracked, the object location can be referenced from a prior (e.g., current and/or historical tracking data, such as referenced from recent detection/tracking across the last 10 seconds, last 30 seconds, etc.; based on an ADS-B estimate which has been validated for accuracy within the last minute, etc.) in S320. As an example, S320 can locate a nearby aircraft by referencing a prior. However, in cases where no prior exists for an aircraft and/or where the estimated aircraft position is beyond the detection range of real-time object detection/tracking systems, S320 and/or the traffic detection system may trigger an extended-range search (e.g., contemporaneously and/or asynchronously with real-time traffic detection, tracking, and/or collision avoidance). In a second set of variants, nonexclusive with the first set of variants, S320 can direct object/traffic detection based on the traffic advisory. More preferably, S320 can direct object/traffic detection based on the object information associated with the traffic advisory. As an example, S320 can trigger a (directed) object/traffic detection routine (i.e., a perception-based search) in response to the (automatic/autonomous) determination of the traffic advisory. Directing object/traffic detection based on the object information can include one or more of: providing the object information (e.g., ego-relative position; aircraft class; etc.) as inputs to a pretrained object detector/classifier, selecting a pretrained object detector from a plurality of object detectors based on the object class (e.g., wherein each object detector of the plurality is pretrained to detect a respective object class; for example, a first object detector can be pretrained to detect Boeing 737s and a second object detector can be pretrained to detect light aircraft, such as a Cessna 172; etc.), restricting a search space based on the object information (e.g., restricting an image pixel search space within the set of camera images based on a proximity of the estimated ego-relative position; restricting an azimuthal region of space based on an ego-relative heading and/or estimated nautical distance; restricting a zenith angle range of the search space based on the altitude and/or range; restricting a range of returns based on an estimated distance to the aircraft), seeding an object detection search based on the object information (e.g., wherein the search focuses/centers around the estimated position or a high-probability region estimated based on the object information; where the estimated position can serve as a starting point for an object detection routine; kernel[s] or other techniques to focus convolutions and layers of a neural network; biased pixel binning approaches; etc.), estimating an apparent size of the aircraft based on the object information (e.g., using classical programming techniques, ML-based programmatic techniques, etc.; and providing the estimated apparent size of the aircraft as an object/traffic detection input), and/or object/traffic detection can be otherwise based on any other suitable object information. Directed perception in S320 preferably occurs based on granular perception data (e.g., relative to real-time detection; triggering directed imaging with a modified magnification/focal length/lens focus/etc.; at higher/full resolution; etc.).
Additionally, S320 can occur for a current data frame/window (e.g., a single image from each of a set of cameras and/or a frame of a radar data cube) and/or can include analysis of multiple historical data frames/windows (e.g., optical flow techniques; utilizing historical data to improve detection accuracy, detection likelihood, and/or detection confidence; etc.). However, directed perception can additionally or alternatively be based on auxiliary data (e.g., from auxiliary data sources 230), refined data (e.g., refined perception data; an iteratively refined search region, etc.), a subregion of data (e.g., subset of pixels, subset of a radar data cube, etc.), current perception data, historical perception data, aircraft data, historical NLP traffic data (e.g., prior utterances, parsed during the current flight/mission), and/or any other suitable data/information. In a first variant, perception-based traffic detection can be directed based on the estimated position of the aircraft by confining/reducing the search space to a data region corresponding to a sector of airspace containing the estimated position. In a second variant, perception-based traffic detection can be directed based on real-time traffic data from auxiliary data sources (e.g., ADS-B). For example, a perception-based search for a particular aircraft (e.g., as directed by an ATC traffic advisory) can scan a search space which is refined based on the estimated ego-relative position of the aircraft, received from an auxiliary data source such as ADS-B. In a third variant, perception-based traffic detection can be refined based on historical data, such as historical flight traffic aggregates (e.g., a heatmap of high-probability regions of airspace, historical trajectories for the particular flight number or tail number, etc.). For example, if aircraft usually change course or turn around a lighthouse (e.g., as may occur off the coast of Cape Cod), this traffic pattern may be used to further refine the search and/or predict the likely trajectory (or region of airspace) where the aircraft is likely to be present. In a fourth variant, perception-based traffic detection can be directed based on inter-aircraft communications on the active radio frequency (e.g., in addition to ground ATC). For example, if the aircraft associated with a traffic advisory has referenced terrain features/callouts (X bridge; blue tanks; “Verify that you are visual on the Needham towers”; etc.) in previous radio communications, these may be identified and referenced to direct perception searches. In some variants, S320 can additionally direct searches based on ‘glint’ returns. For example, instead of filtering out highly saturated pixels and neglecting them from consideration as object detection inputs, these may be provided to object detectors and/or referenced as indicators of aircraft presence. In variants, traffic detectors can be pretrained to associate ‘glint’ with the presence of aircraft/objects, and/or data regions surrounding a ‘glint’ instance may be used to more efficiently direct object detection (e.g., seeding the search). As an illustrative example, light reflections off of an aircraft windshield are often the first hint of an aircraft that is identified to another pilot. Accordingly, perception-based traffic detection may further utilize glint and/or pixel saturation to further refine or direct the search space.
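Tying the restriction and seeding strategies above together, the following minimal sketch crops an image search band around the advised ego-relative bearing and ranks saturated-pixel ‘glint’ candidates as seeds. The camera geometry (a forward-facing camera with a given horizontal field of view) and the thresholds are illustrative assumptions.

```python
import numpy as np

def restrict_search_region(image: np.ndarray, bearing_deg: float,
                           hfov_deg: float = 90.0, margin_deg: float = 10.0):
    """Crop a column band around an advised ego-relative bearing, assuming
    a forward-facing camera with the given horizontal field of view."""
    h, w = image.shape[:2]
    deg_per_px = hfov_deg / w
    center = w / 2 + bearing_deg / deg_per_px
    half = margin_deg / deg_per_px
    lo, hi = max(0, int(center - half)), min(w, int(center + half))
    return image[:, lo:hi], (lo, hi)

def glint_seeds(region: np.ndarray, sat_thresh: int = 250, k: int = 5):
    """Return up to k (row, col) seeds at saturated pixels, which the
    discussion above treats as strong hints of distant aircraft."""
    brightness = region.max(axis=-1)          # per-pixel channel max
    ys, xs = np.where(brightness >= sat_thresh)
    order = np.argsort(-brightness[ys, xs])   # brightest first
    return list(zip(ys[order][:k], xs[order][:k]))
```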
S320 preferably identifies and locates aircraft (or other objects) associated with the traffic advisory, and outputs the location and/or confirmation of the detection to facilitate collision avoidance and/or routing. For example, S320 can utilize a pretrained traffic/object detector, such as within the traffic detection module 210, to detect the aircraft associated with the traffic advisory and determine a location of the aircraft, which can be used to direct actions in accordance with S330. Alternatively, in cases where the aircraft cannot be identified/located, and/or where a confidence falls below a predetermined confidence threshold (e.g., a classification probability output by an object detector, etc.), the system can output a null location and optionally request pilot and/or ATC intervention (e.g., by way of S330). For example, a pilot may be able to visually confirm the location of an adjacent aircraft and/or may validate that the aircraft/object cannot be identified (e.g., via the pilot validation interface). In variants, S320 can include ‘extended-range’ searches and/or traffic detection beyond the range of real-time detection/tracking capabilities in response to traffic advisory determinations according to S310. For example, S320 can include extended-range searches which can be: 1 nautical mile, 2 nautical miles, 3 nautical miles, 4 nautical miles, 5 nautical miles, 7 nautical miles, 10 nautical miles, greater than 10 nautical miles, any open or closed range bounded by the aforementioned values, and/or any other suitable search range. Additionally or alternatively, extended-range searches may utilize a greater granularity of input data (e.g., resolution, refresh rate, optical range, data volume) and/or a proportionally larger amount of processing bandwidth/time (e.g., larger models for object detection/classification, non-generic/class-specific models, etc.). Additionally or alternatively, extended-range searches can be discretized/discontinuous (e.g., so as to avoid continuously consuming processing bandwidth) and/or may terminate in response to satisfaction of a trigger condition (e.g., confirmed location of the object with greater than threshold confidence, pilot identification and/or validation of the object location, expiration of a time threshold, receipt of a follow-up request from ATC, etc.). However, S320 can otherwise facilitate extended-range traffic detection and/or extended-range searches for objects associated with traffic advisories. Alternatively, traffic detection and/or perception for collision avoidance can be otherwise directed within any suitable range(s), and/or S320 can be otherwise suitably executed. In one variant, a detection range of an extended-range search can be between 2 and 5 nautical miles (e.g., which may allow detection of aircraft which may otherwise remain undetected during real-time collision avoidance). In one variant, extended-range searches can be refined based on aircraft position data from Automatic Dependent Surveillance-Broadcast (ADS-B). In variants, S320 can include identifying the object associated with a traffic advisory/alert based on a traffic detection routine (e.g., such as an extended-range search; an example is shown in FIG. 18). For example, the object can be identified based on a detection/classification probability exceeding a predetermined threshold and/or a location of the object falling within a threshold distance of an expected/estimated position of the object.
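A minimal sketch of the identification gate just described, combining a classification-probability threshold with a distance gate around the advised position estimate; the thresholds and the detection record's fields are illustrative assumptions.

```python
import math

def gate_detection(detection, expected_pos_nm, p_min=0.8, d_max_nm=1.0):
    """Accept a candidate detection only if its classification probability
    clears p_min and it lies within d_max_nm of the advised position;
    otherwise return None (so the caller can report negative contact or
    request intervention per S330)."""
    if detection["probability"] < p_min:
        return None
    dx = detection["pos_nm"][0] - expected_pos_nm[0]
    dy = detection["pos_nm"][1] - expected_pos_nm[1]
    if math.hypot(dx, dy) > d_max_nm:
        return None
    return detection

cand = {"probability": 0.93, "pos_nm": (-1.4, 2.7), "class": "light aircraft"}
print(gate_detection(cand, expected_pos_nm=(-1.5, 2.6)))
```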
However, objects associated with a traffic advisory can be otherwise suitably located. Performing an action based on the location of the object S330 functions to facilitate aircraft navigation and/or control based on the location of the object/aircraft. S330 preferably occurs in response to determining the location of the aircraft via S320, but can additionally or alternatively occur in response to pilot validation of the location and/or with any other suitable timing. Actions performed in S330 can include one or more of: determining a resolution advisory (e.g., via an autonomous engine onboard the aircraft, via a pilot interface, etc.), reporting negative contact (e.g., via an ATC radio), requesting updated ATC instructions (e.g., based on negative contact/failure to identify an object/aircraft; via an ATC request and NLP of the ATC response), confirming perception of the object/aircraft (e.g., providing a response to ATC, such as by generation of a semantic confirmation via the semantic parsing system 100), automatically determining an aircraft command (e.g., emergency plan; flight command based on a resolution advisory; flight command according to the current flight plan, such as in cases where the flightpath and/or flight plan remains unchanged and confirming the location of the object may allow the ego aircraft to proceed; etc.), controlling the aircraft, and/or any other suitable actions. In one variant, performing the action includes controlling the aircraft based on the object. In one variant, performing the action includes reporting negative contact via an ATC radio (and/or requesting an updated ATC direction). In one variant, performing the action includes determining a resolution advisory and reporting the resolution advisory via the ATC radio. In one variant, S330 can include: reporting negative contact (e.g., traffic not in sight; failure to identify proximal aircraft; reported to ATC and/or pilot) based on an extended-range search failing to identify an object associated with a traffic advisory (and/or traffic alert). However, any other suitable actions can be performed. S330 is preferably executed by the collision avoidance system (and/or an avoidance module thereof), but can additionally or alternatively be executed by an autonomous computing system, the computing system(s) as described in U.S. application Ser. No. 17/891,845, filed 19 Aug. 2022, which is incorporated herein in its entirety by this reference, and/or any other suitable system(s)/modules. In some variants, S300 can optionally include cross-validating identification and/or localization of an aircraft based on real-time data. In one variant, the method can optionally include providing a follow-up request for an aircraft class, such as: “Say type of aircraft”, and cross-validating the aircraft location based on semantic analysis of a radio response (e.g., such as by a subsequent iteration of S200). As an illustrative example, a common form of misidentification may occur when an aircraft pilot mistakes a large plane (e.g., which may be easy to see) for a small plane (e.g., which might be harder to see). Cross-validation may avoid misidentifications and/or improve accuracy/confidence of object identification/classification (and localization associated therewith).
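A hedged, non-limiting sketch of the S330-style action selection follows: it maps the outcome of a traffic search to one of the actions enumerated above. The `report` and `command` callables are hypothetical stand-ins for the ATC-radio and flight-command interfaces, which the disclosure does not specify at this level.

```python
def act_on_traffic_search(location, report, command):
    """Select an S330-style action from the outcome of a traffic search."""
    if location is None:
        report("negative contact")          # e.g., transmit via the ATC radio
        return "request_updated_instructions"
    report(f"traffic in sight at {location}")
    command(avoid=location)                 # flight command based on the location
    return "resolution_advisory"
```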
As a second example, a location can be cross-validated against one or more auxiliary data sources (e.g., ground radar) and/or via a pilot validation interface (e.g., where a pilot may also confirm a visual on the aircraft). However, traffic detections can be otherwise validated and/or verified, or may otherwise be acted upon entirely autonomously. However, directed perception can include any other suitable elements, and/or can be otherwise suitably implemented in conjunction with the semantic parsing system and/or natural language processing. Alternative embodiments implement the above methods and/or processing modules in non-transitory computer-readable media, storing computer-readable instructions. The instructions can be executed by computer-executable components integrated with the computer-readable medium and/or processing system. The computer-readable medium may include any suitable computer-readable media such as RAMs, ROMs, flash memory, EEPROMs, optical devices (CD or DVD), hard drives, floppy drives, non-transitory computer-readable media, or any suitable device. The computer-executable component can include a computing system and/or processing system (e.g., including one or more collocated or distributed, remote or local processors) connected to the non-transitory computer-readable medium, such as CPUs, GPUs, TPUs, microprocessors, or ASICs, but the instructions can alternatively or additionally be executed by any suitable dedicated hardware device. Embodiments of the system and/or method can include every combination and permutation of the various system components and the various method processes, wherein one or more instances of the method and/or processes described herein can be performed asynchronously (e.g., sequentially), concurrently (e.g., in parallel), or in any other suitable order by and/or using one or more instances of the systems, elements, and/or entities described herein. As a person skilled in the art will recognize from the previous detailed description and from the figures and claims, modifications and changes can be made to the preferred embodiments of the invention without departing from the scope of this invention defined in the following claims. | 104,242 |
11862032 | DETAILED DESCRIPTION FIG. 1 shows an embodiment of a first planar member for use with the invention, wherein first planar member 100 includes frets 101 which are representative of the frets of a stringed instrument. First planar member 100 can include fret numbering 102 to indicate the numbering of frets 101 on the fretboard of a stringed instrument. First planar member 100 can further include string indicia 103 which represent the strings of a stringed instrument, and string notes 104 which represent the notes of the strings of a stringed instrument. While first planar member 100 simulates a stringed instrument having 17 frets and six strings, it will be appreciated that the first planar member can have indicia that represent different numbers of frets and strings. Thus, the invention is not limited to use with stringed instruments having only 17 frets and six strings. First planar member 100 can include frets, fret numbering, strings, and string notes that are representative of an instrument selected from, but not limited to, a guitar, banjo, ukulele, or mandolin. First planar member 100 includes selectable notes 105. First planar member 100 can have notches 106 which are adapted to mate with notches 106 on second planar member 200 to permit first planar member 100 to be detachably connected to second planar member 200. Notches 106 on first planar member 100 are preferably aligned with individual notes listed in selectable notes 105. First planar member 100 can have instructions 107 for instructing a user on how to use the device to display one or more chords for playing notes in a musical scale. In some non-limiting embodiments of the invention, notches 106 are omitted and/or replaced with a means for connecting first planar member 100 and one or more second planar members 200 to one another. Some non-limiting connecting means include hook and loop fasteners (e.g. Velcro™) and magnets. FIG. 2 shows an embodiment of a second planar member for use with the invention. Second planar member 200 has frets 201 which represent the frets of a stringed instrument. Second planar member 200 can further include string indicia 202 which represent the strings of a stringed instrument. Second planar member 200 includes notes 203 which represent fingering positions for playing notes in a musical scale. Notes 203 can include one or more root note indicators 204 which represent the root notes of chords in a musical scale. Notes 203 can include indicia that represent the intervals for playing the notes in the musical scale. In the example provided in FIG. 2, notes 203 indicate the intervals 1-2-3-5-6 which correspond to the intervals of the major pentatonic scale. Second planar member 200 includes one or more root note markers 205. Second planar member 200 can include notches 106 which are adapted to mate with notches 106 on first planar member 100 so as to permit first planar member 100 to detachably connect to second planar member 200. Root note markers 205 can be aligned with a notch from notches 106. Second planar member 200 can include scale indicator 206 to signify that notes 203 correspond to fingering positions for a given musical scale. Scale indicator 206 can list any scale playable by a stringed instrument. Scale indicator 206, and corresponding notes 203, can indicate, for example, a scale selected from diatonic, a major scale, a minor scale, pentatonic, Ionian, Dorian, Phrygian, Lydian, Mixolydian, Aeolian, Locrian, dominant 7, minor 7, and blues. Second planar member 200 can include string notes 207 for indicating the notes of the strings represented by string indicia 202.
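The interval logic that notes 203 encode can also be expressed programmatically. The following Python sketch (an illustration added here, not part of the disclosed device) computes, for an assumed standard-tuned six-string guitar with 17 frets, the fret positions that sound the major pentatonic scale built on a selected root note, mirroring the 1-2-3-5-6 intervals shown in FIG. 2.

```python
NOTE_NAMES = ["A", "A#", "B", "C", "C#", "D", "D#", "E", "F", "F#", "G", "G#"]
MAJOR_PENTATONIC_STEPS = [0, 2, 4, 7, 9]          # semitones for intervals 1-2-3-5-6
STANDARD_TUNING = ["E", "A", "D", "G", "B", "E"]  # assumed string notes, low to high

def pentatonic_frets(root, num_frets=17):
    """Return {string: [fret, ...]} positions sounding the major pentatonic of root."""
    scale = {(NOTE_NAMES.index(root) + s) % 12 for s in MAJOR_PENTATONIC_STEPS}
    positions = {}
    for i, open_note in enumerate(STANDARD_TUNING, start=1):
        open_pc = NOTE_NAMES.index(open_note)     # pitch class of the open string
        positions[f"string {i} ({open_note})"] = [
            fret for fret in range(num_frets + 1)
            if (open_pc + fret) % 12 in scale
        ]
    return positions

# Example: frets on the low E string for G major pentatonic (G-A-B-D-E).
print(pentatonic_frets("G")["string 1 (E)"])
```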
First planar member 100 and second planar member can be made from any suitable material for practicing the device and method disclosed herein. Suitable materials include, but are not limited to, plastic, wood, or bamboo. FIG. 3 shows an embodiment of the inventive device in use. FIG. 3 shows first planar member 100 connected to second planar member 200A, which is connected to second planar member 200B, which is connected to second planar member 200C. Second planar members 200A, 200B, and 200C have notes 203 which represent fingering positions for playing notes in the major pentatonic scale. The device depicted in FIG. 3 has first planar member 100 connected to second planar member 200A, wherein root note marker 205 is selecting the note of G from selectable notes 105 on first planar member 100. Thus, second planar member 200A displays notes 203 which represent the fingering positions on the fretboard of a stringed instrument for playing a first one or more chords in the note of G in the major pentatonic scale. As shown, second planar member 200B has root note marker 205 selecting the note of C from selectable notes 105 such that second planar member 200B displays notes 203 for fingering positions for playing a second one or more chords in the note of C in the major pentatonic scale. Second planar member 200C has root note marker 205 selecting the note of D from selectable notes 105 such that second planar member 200C displays notes 203 for fingering positions for playing a third one or more chords in the note of D in the major pentatonic scale. Thus, the arrangement of the inventive device shown in FIG. 3 simultaneously displays chords for playing the notes G-C-D in the major pentatonic scale. Additionally, notes 203 on second planar members 200A, 200B, and 200C display the intervals for the fingering positions for playing the notes in the chords. While FIG. 3 shows first planar member 100 connected to second planar members 200A, 200B, and 200C, it will be appreciated that first planar member 100 can be connected to just a single second planar member 200, or two or more second planar members 200, depending on the number of root notes that are desired for chords in a musical scale. In some embodiments, the invention provides a method of displaying chords in a selected musical scale for playing by a stringed instrument. The invention can be practiced by providing first planar member 100 and selecting at least one second planar member 200 that displays fingering positions for playing chords in a desired musical key. Second planar member 200 can be selected based on the use of scale indicator 206 which indicates the musical scale to which second planar member 200 relates. The selected second planar member 200 is then connected to first planar member 100 such that root note marker 205 selects a desired root note from selectable notes 105 on first planar member 100. Connecting second planar member 200 to first planar member 100 aligns frets 101 on first planar member 100 with frets 201 on second planar member 200. With root note marker 205 aligned with a selected root note from selectable notes 105, notes 203 of second planar member 200 display the fingering positions for playing one or more chords in the desired musical scale based on the selected root note. Notes 203 can include indicia for the intervals for the notes and fingering positions. For example, if second planar member 200 displays notes for the major pentatonic scale, notes 203 can include indicia for the 1-2-3-5-6 intervals for the fingering positions. | 6,995 |
11862033 | DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS The invention will now be described making reference to the following drawings in which like reference numbers denote like structure or steps. Referring to FIG. 1, a data flow overview in accordance with the operation of an embodiment of the present invention is shown. In accordance with this embodiment of the invention, information about a particular drug to be the subject of a clinical trial, to be employed in a public health or disease management situation, or the like, or other medication administration program or prescription may be provided in a database 105, and existing industry medication information databases 110 are preferably employed to access prescription, interaction, application, and other available information about any number of proposed prescription and non-prescription medications and their possible interaction with the clinical trial or other medications. Further, patient medical records 115 may be used, and as will be described below, are used in conjunction with the industry medical information and a medical professional's prescribing expertise to confirm that a patient is a good candidate for such a clinical trial, or medication administration program. These databases may be accessed in a manner known to one of ordinary skill in the art. Once confirmed, a medication administration regimen in accordance with the clinical trial or other prescription requirements, such as in a public health or medical practice environment or the like, may be prescribed and entered into the system of the invention at 120. Once entered into the system, a particular prescription regimen may cause a set of user instructions, various training sequences and the like 125 to be generated and transmitted to an apparatus provided to a patient in accordance with an embodiment of the invention for access to the system of the invention. Such an apparatus may comprise a custom designed video and audio capture, analysis and transmission apparatus, a smart phone or other mobile device including a camera or other video and audio capture apparatuses, a netbook, laptop computer, desktop computer, tablet device or the like, or other computing appliance allowing for the display of instructions to a patient, and allowing for the eventual capture, analysis and transmission of video, audio and other analysis information. When installing software on a user's own hardware system, it is preferred that the software detect and otherwise test or determine that the hardware attempting to be utilized by the patient is sufficient to implement the invention and is sufficient to run a software package provided in accordance with the invention. Thus, the software may check that a camera includes sufficient resolution, that a memory of the device is of sufficient size to allow for sufficient captured video storage, that audio may be properly captured, and that the transmission system includes sufficient bandwidth to transmit and receive captured video, audio, video instructions and the like. In clinical trial or other administration settings, patient instructions and various training sequences may be varied for different users to determine the best set of instructions, or may be varied based upon demographics, experience, or other factors that may require different types of instructions to be provided.
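A minimal sketch of the hardware-sufficiency check described above follows (an illustration only; the minimum thresholds are assumptions, not values taken from the disclosure).

```python
# Assumed minimum requirements -- illustrative thresholds only.
MIN_CAMERA_PX = (640, 480)
MIN_FREE_STORAGE_MB = 500
MIN_BANDWIDTH_KBPS = 256

def hardware_sufficient(camera_px, free_storage_mb, bandwidth_kbps, has_microphone):
    """Return (ok, problems) for the installation-time device checks."""
    problems = []
    if camera_px[0] < MIN_CAMERA_PX[0] or camera_px[1] < MIN_CAMERA_PX[1]:
        problems.append("camera resolution too low for capture/analysis")
    if free_storage_mb < MIN_FREE_STORAGE_MB:
        problems.append("insufficient storage for captured video")
    if bandwidth_kbps < MIN_BANDWIDTH_KBPS:
        problems.append("insufficient bandwidth to transmit/receive video")
    if not has_microphone:
        problems.append("audio cannot be properly captured")
    return (not problems, problems)
```

A deployment would run such a check before enabling the capture features, refusing installation (or degrading gracefully) when any requirement fails.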
It is further contemplated in accordance with an embodiment of the invention that multiple clinical trials or patient populations may be managed by a manager in accordance with the invention, so that the invention contemplates a medication administration system that allows for a single point of management for all clinical trials or patient management groups associated with a particular manager or the like. Such management techniques in accordance with the embodiment of the invention may further be applied to various public health situations, disease management scenarios and the like. Such user instructions and training sequences may include general instructions about the particular medication subject to the current trial or medication administration protocol, methods for administration, warnings about side effects, and concerns about drug interactions with common substances or medications, or other medications prescribed to the patient by the system or by another medical service provider. It is contemplated in accordance with an embodiment of the invention that such set of user instructions may be interactive, allowing a user to view additional information about such instructions or prescriptions as desired. These instructions may comprise written, audio or video instructions provided to the user on a display of the user apparatus. It is further contemplated that such instructions may indicate one or more movement sequences to be associated with a corresponding one or more medication administration sequences. A more in-depth description of the information provided at step 125 is depicted in FIG. 2. As is shown in FIG. 2, the generation and provision of user instructions as set forth in step 125 first comprises the step of receiving a sequence of required instruction steps at step 205. This sequence may be determined as described above in step 120. The system then may confirm whether one or more of the instruction steps require the conveyance of information to a patient at step 210. These conveyance steps may comprise a more conventional instruction step, such as the display of written information; a more advanced instruction step, such as the conveyance of audible information, video instructions or the like; or an interactive instruction step, such as an interactive instruction sequence displaying a desired sequence of information to a patient, and then monitoring and confirming whether the patient has properly administered the medication. Various feedback mechanisms may be provided to allow the patient to try multiple times to perform proper administration, and may also provide varying encouragement or instructions to confirm that administration training has been performed properly. Thus, such an instruction and training sequence may include the eventual capture of video, audio and other information from the user. Therefore, at step 215, it may be determined whether one or more of the instruction steps will require the capture of information from the user, thus comprising an advanced interactive training sequence. Thereafter, each of the training steps requiring capture of video information from a user is confirmed at step 220. If no further video capture is required, and therefore various training or other interactive sequences have been completed, processing for step 125 then ends at step 250.
If it is determined that the capture of video and/or audio information will be required at step 220 for the current training step, then processing passes to step 230, and various instructional video, audio and other sequences may be provided to the user in an instructional sequence format. After being shown a particular instructional sequence, preferably applicable to a particular step of a medication administration protocol sequence, processing passes to step 235 where the user may be prompted to perform a particular action or sequence of movements. The user may request to be re-shown these sequences as many times as necessary, and the sequences may also include audio or other instructions, so that the user is provided with a training sequence, thereby reducing variability of future performance of that action. When preparing to perform these actions, an alert system may be employed to warn the patient of any issues that may interfere with the proper capture of video and/or audio information, as may take place similarly when actually administering the medication. Thus, the user may be encouraged to properly perform these sequences, thus acting as an interactive training module. Thus, the user may be notified if they are sitting in a manner in which their actions cannot be properly captured, if they are blocked from the camera, if the light conditions are insufficient, if an object they are holding is in an improper location, or the like. As is shown in FIG. 4, a box 410 may be provided on a display viewable by a patient using the system. A representation of the patient's face may be shown in a position relative to an optimal filming position for the use of, for example, an inhaler for medication administration. Thus, while facial representation 400a is properly positioned, facial representation 400b is positioned to the left of the box, while facial representation 400c is positioned down and to the right of the box. A similar positioning system may be provided for an injectable medication, the position of a patient body part being provided in place of the facial positioning described above. Thus, not only may proper positioning be determined, but use of the proper body part may also be confirmed. In practice, the box may be made a red or other warning color until proper alignment is achieved (including if a user or desired user body part is not positioned fully within a screen, the user is too close or far from the camera, or for any other reason), at which time the box may change to green or other appropriate color. Further, audio cues may also be given to the patient, such as beeping with increasing frequency as the optimal position is approached. Thus, in accordance with an embodiment of the present invention to be employed for inhaler medications, the user is provided with immediate feedback on their position and the ability of their actions to be properly recorded and analyzed. As the user interacts with the system of this embodiment of the invention, such a scheme may be employed to provide continuous feedback to the user, thus indicating whether the system is able to properly capture and/or analyze the actions of the user. If time passes and the user is unable to properly position themselves, or to properly perform desired actions, additional guidance may be provided to the user in order to remedy such a situation, including but not limited to directional indications, voice commands, video images of proper technique, etc.
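The color-and-beep feedback loop described above can be sketched as follows (a non-limiting illustration; the normalization and the beep-interval bounds are assumptions, and the face box is presumed to come from any face detector).

```python
import math

def positioning_feedback(face_box, target_box):
    """Map face-vs-target alignment to a box color and a beep interval in seconds."""
    fx = (face_box[0] + face_box[2]) / 2.0     # face center (boxes are l, t, r, b)
    fy = (face_box[1] + face_box[3]) / 2.0
    tx = (target_box[0] + target_box[2]) / 2.0
    ty = (target_box[1] + target_box[3]) / 2.0
    # Normalize the offset by target size so feedback is resolution-independent.
    off = math.hypot((fx - tx) / (target_box[2] - target_box[0]),
                     (fy - ty) / (target_box[3] - target_box[1]))
    inside = (face_box[0] >= target_box[0] and face_box[1] >= target_box[1] and
              face_box[2] <= target_box[2] and face_box[3] <= target_box[3])
    color = "green" if inside else "red"       # warning color until aligned
    beep_interval = max(0.1, min(2.0, off))    # beeps speed up as offset shrinks
    return color, beep_interval
```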
In addition to properly positioning the patient, proper positioning of one or more objects, either absolutely or relative to another body part, may be determined, such as positioning an inhaler relative to the mouth and face of the user, an injectable medication delivery device relative to the body part of the user to receive the injection, or the like, for imaging and processing in accordance with an embodiment of the invention. As is shown in FIG. 5, an inhaler 500 may be indicated as properly positioned by a box 522, the box being green, for example, as in the description of FIG. 4. Such an object, however, is more likely to be improperly positioned not only left to right and up to down, but also in distance to the imaging apparatus, in accordance with one or more limitations of the imaging device, such as the resolution thereof, low light conditions, and the like, and any effect such resolution might have on the ability of the imaging device to identify shape, color, text or other coding, or the like associated with the object being imaged. Thus, if positioned too far away from the imaging apparatus, a sequence of boxes 510, 511, 512 and a small representation of inhaler 500 may be provided to alert the user to move the inhaler closer. If the inhaler is not only too far away, but off center, boxes 520, 521, 522 may be provided to guide the user to move the inhaler into proper position absolutely and relative to the mouth and face of the user. Similar functionality may be provided for positioning an injectable apparatus relative to a user body part to receive the injection, including relative angle and distance to the body part. By properly positioning such a device, the system may be employed to confirm the identity of such a medication, employing shape, color, labeling, and the like. In addition to determining identity of the medication, such processing may be used to determine safety of the apparatus, such as whether an inhaler or injectable device may have been damaged or tampered with. Further, the medication may be observed to determine any change in color or other characteristic of the medication that may suggest spoilage, improper medication, counterfeit medication or the like. The apparatus, in accordance with an embodiment of the invention, may thus ask the user to move the inhaler or injectable device closer to or further away from the imaging apparatus, may change an ambient light sensitivity of the apparatus, or may otherwise change details of the image capture. As noted above, both color and audio prompting may be provided. To the extent that positioning and orientation of the inhaler, injectable medication administrator or the like when being used is important, a similar system may be employed. As is shown in FIG. 6, a set of concentric circles 610a-e may be provided to aid in the positioning of an inhaler 600. A center circle 610e may be provided with a solid center (not shown) upon proper placement of the inhaler. These circles may move as do the boxes in FIG. 5, and may further use color and/or audio prompts to instruct the user. Further, as images of inhaler positions and orientations, or inhaler and hand positions and orientations, are to be captured and analyzed, the system may also preferably indicate not only proper positioning, but actual acquisition of a correct position and orientation sequence.
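The "too far away" determination above can be approximated from the apparent pixel size of the object. The following Python sketch (an illustration, not the disclosed method) uses a simple pinhole-camera model; the inhaler width, focal length, and distance bounds are assumed values for demonstration.

```python
def inhaler_guidance(bbox_px, true_width_m=0.03, focal_px=1000.0,
                     near_m=0.25, far_m=0.60):
    """Estimate inhaler distance from its pixel width and suggest a correction."""
    width_px = bbox_px[2] - bbox_px[0]                     # box is (l, t, r, b)
    distance_m = true_width_m * focal_px / max(width_px, 1)  # pinhole model
    if distance_m > far_m:
        return distance_m, "move the inhaler closer to the camera"
    if distance_m < near_m:
        return distance_m, "move the inhaler farther from the camera"
    return distance_m, "hold position"
```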
In accordance with an additional embodiment of the invention, such recognizable positioning and orientation may further comprise a sequence of gestures and apparatus movement and orientation employed to ensure that the patient properly administers their medication. In accordance with an administration process, as noted above, the patient may first be trained to show a particular medication administration device or apparatus in their hand to the camera for imaging and recognition. The patient may then be asked to place the apparatus at an appropriate administration location, such as against the mouth in the case of an inhaler apparatus, or at a particular body part location in the case of an injectable medication. Thereafter, actuation of the apparatus, through the process of monitoring movement and audible cues, may be employed. Thus, through a predetermined sequence of actions that are captured, imaged and analyzed, evidence of proper administration can be recorded and analyzed. Furthermore, in accordance with one or more embodiments of the invention, various additional aspects of medication and/or administration may be checked and confirmed. Thus, in an injectable administration system, the system may employ such computer vision and activity recognition to determine a liquid color, liquid consistency or clarity, the potential existence of particles, perhaps suggesting a spoiled medication, or bubbles in the liquid, suggesting improper handling. Through the use of the system, a number of administrations can be tracked, and a liquid or other level may be used to confirm the count, thus potentially allowing for the ordering of additional medication, or other counting of inhaler administrations without the need for expensive inhaler units. Also, dosage settings, if applicable on an injectable pen or other apparatus, may be confirmed before administration. Furthermore, as is shown in FIG. 7, when tracking the movement of a medication administration apparatus 700, it is preferable to depict to a patient whether they are holding the apparatus at a correct orientation, when the apparatus is in transit, or positioned at the administration site. Thus, as is shown in FIG. 7, an administration apparatus 700 is indicated to be reoriented from a horizontal to a vertical orientation through movement in the direction noted by arrows A. A set of guidance tracks 710a, 710b may be displayed to a patient and successive apparatus positions and orientations may be superimposed thereon. As the apparatus moves along the prescribed path, concentric circles such as those depicted in FIG. 6 may be employed to confirm proper location and orientation. Thus, in accordance with an embodiment of the invention, a virtual path may be shown to the user to ensure that the proper method of medication administration is followed. As noted above, color and/or audio sequences may also be employed. Similar positioning information may be processed relative to an injectable medication. Therefore, in accordance with one or more of the positioning assistance schemes noted in FIGS. 4-7, a patient may be guided to properly present themselves or an object to an image capture device for capture and interpretation during the noted training phase, or (as will be described below) during a particular medication administration phase. Any of the display and notification techniques noted in any of these Figures may be used in any of the other Figures, in accordance with various embodiments of the invention.
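A hedged sketch of checking tracked apparatus poses against a virtual guidance path follows (an illustration only; the pose representation, waypoint sampling, and tolerances are assumptions, not details taken from the disclosure).

```python
import math

def path_deviation_ok(samples, waypoints, pos_tol=0.1, angle_tol_deg=15.0):
    """Check apparatus (x, y, angle) samples against prescribed-path waypoints."""
    for (x, y, ang), (wx, wy, wang) in zip(samples, waypoints):
        if math.hypot(x - wx, y - wy) > pos_tol:        # positional deviation
            return False
        # Compare angles with wraparound so 359 vs 1 degree counts as close.
        if abs((ang - wang + 180) % 360 - 180) > angle_tol_deg:
            return False
    return True
```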
Further, these positioning techniques may be employed not only during initial training, but during any subsequent system process employing video image capture of people, objects, or any other entity to be imaged, or the use of audio information. Referring back to FIG. 2, at step 240 these motions of the user may be captured and confirmed as being correct by one or more appropriate computer vision techniques, individual review by a human, or other appropriate determination process. If not correct, processing may return to step 230 to provide the instructions and example sequences again to the user. Therefore, in accordance with the invention, repeated instruction may be provided to the patient until it can be confirmed that the patient has performed the desired sequence correctly, thereby aiding in limiting future variability in the actions taken by the patient during administration. Such instruction may take the form of analysis of a recorded user action, and comments on what the user may be doing wrong, and how this action may be improved. Once the user has received sufficient instruction, and it is therefore determined that the user has performed the action in a manner that is sufficiently similar to the instruction set, and substantially consistent over a number of performances of the action, processing then passes to step 245 where it is determined whether there are additional training steps to be presented, and therefore additional video sequences to be captured. If so, processing returns to step 220 for further processing. If not, processing ends at step 250. Referring back to the lower portion of FIG. 1, the horizontal line indicates a time for patient administration of medication. At such time, the user may be notified to take their medication through any desirable communication and notification system, including text messaging, email, telephone call, automated calendar reminder or the like. While not explicitly shown, first, preferably the identity of a user is confirmed through the use of a facial recognition sequence, other biometric identification sequence, or other password identification system. Upon recognition of the individual, the system may display one or more items of data regarding the individual, such as, by way of example only, name, patient status, medication to be administered, a calendar indicating to the patient when medication has been administered and if any administration times have been missed, and, selectively, a score indicative of a level of compliance of the individual with the medication protocol, if desired. Once identified and notified of a type of medication to be administered, the patient may display a medication administration apparatus, such as an inhaler, injectable apparatus, or other medication form (including a pill bottle, pill, or the like) to confirm that the medication is correct and is the currently prescribed medication to be taken, through the use of text recognition, medication recognition, barcode or other code reading of one or more unique identifiers from the administration apparatus, pill bottle or the like, or other appropriate medication recognition scheme. The user may alternatively be shown a virtual medicine cabinet with visual or textual indications of one or more medications to be taken at a particular time. Imaging of one or more of such medication apparatuses may then match a medication apparatus provided by the patient to one or more of the pills in the virtual pill box.
Thus, the patient is not only allowed to have a particular medication apparatus imaged, but also may be given a visual representation of medications to be taken, medications that have already been taken, and a visual picture of one or more additional medications to look for if the patient is confused or is not immediately able to locate all of the required medication. Such a display may further act as an additional incentive program for the patient to properly take their medication, and may in turn give a patient other incentives, such as a running score, payment information or other point systems, if the patient is to be rewarded for properly taking medication. Thus, credit to buy information from a website or store may be provided. For children, various animations may be provided, and pocket money or other credits to purchase items online or through one or more stores from supporting merchants may be provided. The display of such information may assist in convincing the patient to continue to properly take medication. This sequence of steps therefore acts as an audit trail, generated each time a medication is taken, that can be reviewed later to ensure that a patient is properly following a regimen. Any of the positioning schemes depicted in FIGS. 4-7 may be employed. Additionally, after confirmation or failure of confirmation of such administration, the user may be provided with a progress report regarding how they have performed over time, further providing encouragement for future adherence. Additionally, notice of a next administration time may be provided, along with one or more messages from a healthcare provider regarding protocol changes, or other desired information. Furthermore, a combination of visual and/or audio cues may be employed to further determine sequence and timing. Thus, not only should an inhaler be properly positioned, for example, but during use, an inhalation by the patient should occur immediately after actuation of the inhaler. Thus, by visually and/or audibly confirming first actuation, and then inhalation, this sequence of actions can be confirmed. Sound and visual signatures related to each of these actions may be employed to improve a confidence with which the system is able to confirm proper administration. Similarly, an injectable may need to be properly positioned and maintained in a particular position after administration, such as maintenance of a needle in place after actuation of the injection mechanism for a predetermined period of time. In accordance with the invention, confirmation of patient adherence to the prescribed administration schedule for the medication as prescribed by the clinical trial or other prescription regimen may be determined. While such confirmation may take a number of forms, in accordance with the invention, a preferred method for such confirmation may include capturing a video and audio sequence of the patient actually administering the medication.
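The ordered actuation-then-inhalation check described above can be sketched as a small sequence verifier (a non-limiting illustration; the event labels and the timing tolerance are assumptions made for demonstration).

```python
MAX_ACTUATION_TO_INHALE_S = 1.5   # assumed tolerance for "immediately after"

def confirm_inhaler_sequence(events):
    """Verify ordered (label, timestamp) events: show, position, actuate, inhale."""
    required = ["show_device", "position_at_mouth", "actuate", "inhale"]
    times = dict(events)
    last_t = float("-inf")
    for label in required:
        if label not in times or times[label] < last_t:
            return False, f"step '{label}' missing or out of order"
        last_t = times[label]
    if times["inhale"] - times["actuate"] > MAX_ACTUATION_TO_INHALE_S:
        return False, "inhalation did not immediately follow actuation"
    return True, "administration sequence confirmed"

# Example: timestamps (seconds) derived from visual/audio signature detections.
print(confirm_inhaler_sequence([("show_device", 0.0), ("position_at_mouth", 2.1),
                                ("actuate", 4.0), ("inhale", 4.8)]))
```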
In a further preferred method, such a sequence for such confirmation may include employing a facial recognition sequence or other biometric confirmation that a particular patient is in fact receiving treatment, but may also provide for the ability to obscure the face or other identifying feature of a user, or otherwise encrypt such information, to allow for the storage and use of such images while protecting the identity of the patient, a technique that may be beneficial when a medication administration manager is providing a general report about a clinical trial and not trying to remedy a situation with a particular patient, or in particular in a public health or disease management scenario. Activity recognition, gesture recognition or other feature for determining whether a particular subject movement meets a predefined movement sequence may be employed to be sure that the patient is properly taking prescribed medication. Referring next to FIG. 3, a method in accordance with an additional embodiment of the present invention for performing audio and video capture and recognition of adherence to a prescribed protocol is described, as set forth in steps 130 and 135 of FIG. 1. In FIG. 3, a patient may first log into the system of the invention at step 305, employing the facial recognition, biometric recognition, password entry, or other patient identification method, and at step 310 proper medication is confirmed as noted above, through the use of bar code reading, text recognition, visual recognition employing video or still image recognition, or other medication recognition technique. The patient may be reminded to log onto the system to take their medication through any type of reminder, such as a text message, email, phone call, automated alarm or the like. Of course, any of the positioning techniques previously described in reference to FIGS. 4-7 may be employed. Next, at step 315 it may be confirmed that the process involved will include one or more information capture steps, and at step 320 it may be determined whether these information steps will include video capture. If not, video processing ends after storage of any non-video information. (Alternatively, steps 315 and 320 may be excluded if it is determined that each confirmation sequence employs video capture, in which case processing may pass directly to step 325, as described below.) If it is confirmed at step 320 that one or more steps will include video and/or audio capture, processing then passes to step 325 where the user may be prompted to perform one or more predetermined actions, these actions being captured. Positioning of the inhaler, injectable medication apparatus, or other medication may be performed in accordance with any of the techniques as described previously in reference to FIGS. 4-7. Such recognition in the case of an injectable administration apparatus may also comprise confirming the relationship of the injectable administration apparatus and a prescribed body part, proper actuation of the administration apparatus, maintaining the administration apparatus in the location for a predetermined period of time, and perhaps proper post administration action, such as cleaning and storing the apparatus, refrigerating the apparatus, cleaning an injection site and the like. Further, voice recognition may be utilized to allow the user to enter commands, and an audio output may be provided for aiding the user in properly adhering to instructions from the system.
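The face-obscuring option mentioned above can be sketched as follows (an illustration only, assuming OpenCV is available and that face boxes come from any detector; pixelation is one of several plausible obscuring techniques, alongside blurring or encryption).

```python
import cv2  # assumes OpenCV; frame is a NumPy image as returned by cv2

def obscure_faces(frame, face_boxes):
    """Pixelate detected face regions so stored video protects patient identity."""
    for (x, y, w, h) in face_boxes:
        roi = frame[y:y + h, x:x + w]
        # Downscale then upscale with nearest-neighbor to produce coarse blocks.
        small = cv2.resize(roi, (8, 8), interpolation=cv2.INTER_LINEAR)
        frame[y:y + h, x:x + w] = cv2.resize(
            small, (w, h), interpolation=cv2.INTER_NEAREST)
    return frame
```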
Additional audio cues may be recognized, such that upon visual confirmation of administration of an injectable or inhalable medication, audio signatures may be employed in order to determine whether insufficient pressure may have been used, or whether a sufficient or excessive period of time has passed from actuation to inhalation. Proper capture of patient actions is very important, as the patient only administers the medication once per capture period. Video capture analysis may then begin at step 330, such analysis comprising analysis of the newly captured video and/or audio, as noted above with respect to FIG. 2. At step 335 it may be determined whether the action has been properly captured, and whether the captured action has been properly analyzed by the system. Various incentives may be provided to the patient to encourage them to take their medication properly. Thus, in addition to providing various reminders to a patient as is known in the art, points, monetary or other incentives may be provided to the user for actually having medication administration confirmed. Further, proper administration with fewer errors may be rewarded more highly, thus giving incentive for the patient to concentrate on administration issues and to attempt to have such administration be as accurate and consistent as possible. Such incentives and medication tracking may be used to determine future courses of treatment or payment. For example, if a patient consistently fails to take medication as required, perhaps a different course of treatment requiring fewer medication administrations may be better for this patient. Alternatively, if a medication requires consistent administration and is very expensive, failure to comply with administration instructions may be cause for an insurance company, prescribing doctor or the like to not renew such a prescription for the patient, thus saving money in a situation where the money was being wasted because of lack of compliance. If it is determined that administration of the medication did not take place properly, processing may return to step 325 and the user may be once again prompted to perform the action. Of course, if this process involves actual administration of inhaler or injectable medication, it may not be proper to request re-performance of the action, unless it can be determined that the user did not actually administer the medication. If the action has been properly captured, and is able to be analyzed, processing passes to step 345 where it may be determined whether additional captures are required. If so, processing returns to step 320. If no further captures are required, processing ends at step 350 where the various captured video sequences are stored. These stored sequences may also be made available for human review and involvement, when it is determined that this would be beneficial. Therefore, in accordance with various embodiments of the invention, because a video image of the patient actually administering an inhalable or other medication (or other method of medication administration, including but not limited to injections, dialysis, and any other medication administration procedure) may be captured and analyzed, actual confirmation may be achieved, rather than simply relying on the patient to state that a particular medication was administered.
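A minimal sketch of fusing the positional, audio, and sequence cues into an administration confidence, and of the low-confidence escalation discussed later in this description, follows. The weights and thresholds are illustrative assumptions, not values from the disclosure.

```python
RETRAIN_THRESHOLD = 0.4    # assumed floor below which personal contact is made
CONFIRM_THRESHOLD = 0.75   # assumed level treated as confirmed administration

def administration_confidence(position_score, audio_score, sequence_score):
    """Fuse per-cue scores (each 0-1) into an overall administration confidence."""
    confidence = 0.4 * position_score + 0.3 * audio_score + 0.3 * sequence_score
    if confidence >= CONFIRM_THRESHOLD:
        action = "record_confirmed_administration"
    elif confidence >= RETRAIN_THRESHOLD:
        action = "flag_for_automated_retraining"
    else:
        action = "escalate_for_personal_contact"
    return confidence, action
```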
Such a video image may be captured or stored in any appropriate format given a selected type of activity or gesture recognition that is employed in accordance with a particular embodiment of the invention. Such formats may include full video, biometric data points, recording of movement of an article, such as a bracelet or the like, affixed to the patient or administrator, use of mapping to provide a stick figure or other body movement tracking technique, or gesture or activity recognition to determine movement or the like. The user may be encouraged to use a particular sequence of movement to confirm that they are properly administering the medication according to the protocol, thus reducing the range of potential movements considered to be “correct.” Or, as noted above, capture of customized video sequences may be performed so that the user is more likely to repeat these same actions. Indeed, various instructional videos or other appropriate training may be provided to a user to ensure they properly administer the medication. This captured adherence information may be provided to a healthcare provider, clinical trial manager or the like through a dashboard allowing for the review of information about an individual patient, an entire population of patients, or demographically relevant information. Such information may be provided to easily notify the healthcare provider, clinical trial manager or the like of problem patients, demographic groups, medications or the like. One or more dashboards or other reporting mechanisms may be employed as described in copending U.S. patent application Ser. No. 13/189,518, filed Jul. 24, 2011 to Hanina et al., titled “Method and Apparatus for Monitoring Medication Adherence”, the entire contents thereof being incorporated herein by reference. Thus, any adherence or other information obtained in accordance with the present invention may be provided to one or more individuals in accordance with one or more methods or systems as described in the '518 application. Through the use of training as described above, a type of administration language may be generated, allowing for extension to other patients, and also allowing for interpretation of the reason for differences from a predefined sequence by a patient. Thus, if a patient performs an action differently over time, this difference may provide insight into a reaction to a medication, changes in the patient's medical condition, or the like. It is further anticipated that analysis of large numbers of patients will allow for a more flexible system that may recognize more of a patient's movements, and thus may improve the ability of the system to function properly. Therefore, in accordance with an embodiment of the invention, a user may perform a predetermined sequence of actions designed to ensure performance of medication administration. Thus, by way of example only, for an inhaled medication as noted above, the user may be asked to first show a medication and may then be prompted to position the medication administration apparatus relative to their mouth in a desired manner. Next the user may be prompted to administer the medication, the action of administration being captured on video and audio, and being interpreted to confirm that the medication has been properly administered. Of course, in accordance with this embodiment of the invention, other action sequences may be employed, and may be mixed with other actions to be performed by a patient or caregiver.
Thus, by defining a medication adherence protocol as a single gesture or sequence of gestures that may be recognized by a processing system, the accuracy of confirming that a patient has actually taken a particular medication is improved. Through an interactive learning process, the processing system may also learn patient behaviors to more accurately determine medication adherence, and to remove some of the potential false positives or false negatives. If a caregiver is involved, it is contemplated that the caregiver be provided with a number of gestures indicative of particular actions to be taken, use of these gestures prompting the system to confirm that these actions are in fact being taken. Thus, a full audit trail of not only the patient, but also the caregiver, may be determined, such as whether they approached the patient at the correct times, or whether they washed their hands when approaching. Further uses of the video capture sequences may also be employed, including video capture of responses to questionnaires about current patient states of discomfort, informed consent, examples of questions to be asked, video transmission of such questions and the like. The patient may be able to send a video message, pointing to a particular pain or the like, and may include an audio portion as well. Time stamp markers may also be captured to confirm that the user is taking their medication at appropriate times and the number of times a user has taken a particular medication, to confirm whether there are substantial delays between instruction and administration, or for any other time sequence determination. Furthermore, other behavioral markers may be captured, such as, by way of example only, shaking hands indicating a particular ailment, or other movements by a patient that may give a hint as to the physical or mental status thereof. Additionally, if the user is taking medication that is improper, or that they have already taken, a warning may be provided to warn the user to stop medication administration immediately. In accordance with various embodiments of the invention, when considering administration of an inhalable or injectable medication, analysis of adherence video sequences may be employed to determine a likelihood that a patient has actually administered their medication. Thus, based upon video and audio cues determined to be related to positioning and use of the medication administration apparatus, it may be determined that the patient is having problems properly positioning the apparatus, and that therefore the system is unsure that the patient has administered the medication properly. Low confidence in proper administration, based upon failure to properly position the apparatus, failure to record audio signals indicative of proper administration, or the like, may be employed to determine whether a patient should be retrained, via the automated training system described herein, by automated contact, or by individual personal contact. This determination of low confidence of administration, even if it is ultimately determined that administration likely took place, may still be utilized to determine whether training or other actions may be taken. Such confidence levels may be used, in accordance with a desired algorithm or the like, to provide an overall picture of medication administration by a patient or group of patients, thus allowing for intervention, encouragement, training or the like to be provided when it appears that actions are changing, but not necessarily waiting until a critical issue is discovered.
It is further contemplated that the method and apparatus of the invention allow for integration with one or more audio or video conferencing systems, thus receiving and/or providing information therethrough. Thus, a user may employ a standard video conferencing tool or system, and have this information be coupled to a mobile or other device being used in accordance with an embodiment of the present invention. Therefore, in accordance with the invention, a method and apparatus are provided that allow for the automated confirmation of adherence to an administration protocol for medication, and provide for a more sophisticated method for confirming and studying methods of administration of such prescription medication. It will thus be seen that the objects set forth above, among those made apparent from the preceding description, are efficiently attained and, because certain changes may be made in carrying out the above method and in the construction(s) set forth without departing from the spirit and scope of the invention, it is intended that all matter contained in the above description and shown in the accompanying drawings shall be interpreted as illustrative and not in a limiting sense. It is also to be understood that this description is intended to cover all of the generic and specific features of the invention herein described and all statements of the scope of the invention which, as a matter of language, might be said to fall therebetween. | 38,897 |
11862034 | The drawings depict various embodiments for the purpose of illustration only. Those skilled in the art will recognize that alternative embodiments may be employed without departing from the principles of the technology. Accordingly, while specific embodiments are shown in the drawings, the technology is amenable to various modifications. DETAILED DESCRIPTION Coaching typically involves a coach that walks a group of individuals through an ordered series of educational content. The coach holds group discussions to reflect on learned content. In an electronic format, the content is provided to users on computing devices. The content can be arranged as a playlist for users of the computing devices. The coach can also engage with individuals in chat rooms to conduct discussions. The educational content cannot be personalized because each individual has specific considerations that are not applicable to the entire group. Likewise, in health-related coaching, educational content is not personalized for coaching recipients (e.g., patients). In some instances, coaching is episodic such that it only occurs when the coaching recipient interacts with a coach due to a medical event. Accordingly, a coaching service for coaching recipients covers content that is not necessarily relevant. As a result, coaching recipients fail to pay attention and, consequently, fail to complete a coaching exercise. The disclosed solutions overcome the drawbacks of existing techniques. A coaching service adapts variable content objects in a content flow for individual needs. For example, coaching to treat chronic diseases like diabetes or other conditions may involve educational content related to diet, stress levels, and sleep. In some instances, the variable content objects can include medical recommendations. The content flow for an individual participating in a coaching service is adaptable with a targeted set of content that is formulated based on information obtained from various sources such as electronic medical records, pharmacy data, self-reported information (e.g., surveys), indications of preferred content, and monitored data such as that obtained from a continuous glucose monitor. A personalized content flow includes a series of content objects with messages and/or media that goes well beyond replicating in-person coaching. The content objects can be mapped in different combinations to different coaching recipients. In some embodiments, the coaching service can dynamically recommend relevant content objects that could be shown to the coach's coaching recipient. The recommended content objects could be identified based on data obtained about the coaching recipient. In some embodiments, the content objects are ranked to facilitate selection by the coaching recipient's coach. As such, the coaching service can improve a coaching recipient's experience by providing the selected content in a playlist that maintains the coaching recipient's attention to complete the coaching exercise. Hence, the disclosed coaching service is more efficient both from the perspective of participants and in terms of reducing the demand on network resources. In some embodiments, the content objects are selected dynamically in real-time or near real-time to enable active learning that depends on recent data pulled from different sources. 
For example, a content flow can include a series of screens shown to a user on a computing device in accordance with the underlying logic of the coaching service to achieve a predetermined objective of a coaching challenge. The content flow is presented to a user and/or coach to improve the coach/user relationship. The coaching service can dynamically swap content flows or individual content objects as a user advances through a content flow. As such, a content flow changes to adapt to an individual's needs as determined based on data obtained from various sources. The coaching service can also sort or rank content to rapidly adapt to changing circumstances of an individual. A frequency of the decision to dynamically change content can vary depending on the availability of data on which the decision is based. For example, a refresh rate to change content of a coaching session can occur rapidly if based on data obtained from a continuous monitoring device. Moreover, the coaching service can throttle the frequency when publishing new content. An electronic coaching service includes a technical means such as a platform that administers network portals for users to access content for coaching sessions over a medium such as a computer network. A health-related electronic coaching tool can include a playlist of media that is ordered to guide a diabetic patient to learn proper eating habits in a step-by-step manner. For example, a playlist of audio recordings can include lectures that are arranged to coach a patient on how to manage a disease. A coaching service may include different playlists for different categories of coaching recipients to achieve goals. The content flow of a playlist is prearranged before a user begins a coaching session. The playlist is selected based on factors including the coaching recipient's demographic profile, historical information, input from other users of the coaching service, etc. For example, a first playlist for young adults can include a first sequence of content objects while a second playlist for older adults can include a second sequence of content objects. As used herein, the prefix "pre" that modifies a term refers to occurring "before" or "in advance of." In the context of a content flow, "preselected" content is selected for the content flow before the content flow is played, "prepositioned" content is positioned in a content flow before the content flow is rendered, and "prearranged" or "preset" content is preselected and prepositioned. In contrast, "dynamic" content is affected by the passage of time. For example, dynamically selected content is selected while a content flow is being played (e.g., in real-time or near real-time). Likewise, dynamically positioned or dynamically arranged content is positioned or arranged, respectively, while the content flow is being rendered. A health-related coaching service can determine a next recommended action for a coach as a function of a combination of factors including a coaching recipient profile, physiological data (e.g., real-time glucose data), the coaching recipient's selections, the coach's playlist, historical recommended actions and/or compliance therewith, and other coaching recipient response patterns with similar profiles (e.g., demographic, disease profile, lab values, medication list, and clinical data). The next recommended action can be to share a specific content object.
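A hedged sketch of the dynamic-swap and refresh-throttling behavior described above follows. The `scorer` callable, which would rate a content object's relevance against current user data, and the throttle interval are illustrative assumptions.

```python
import time

MIN_REFRESH_INTERVAL_S = 300.0   # assumed throttle on publishing new content

class ContentFlow:
    """Re-rank the upcoming content objects when fresh data warrants it."""
    def __init__(self, queue, scorer):
        self.queue = list(queue)   # prearranged content objects, in play order
        self.scorer = scorer       # hypothetical: scores relevance from user data
        self._last_refresh = float("-inf")

    def next_object(self, user_data):
        now = time.monotonic()
        if now - self._last_refresh >= MIN_REFRESH_INTERVAL_S:
            # Dynamic step: re-rank the remaining queue against current data,
            # e.g., readings pulled from a continuous glucose monitor.
            self.queue.sort(key=lambda c: self.scorer(c, user_data), reverse=True)
            self._last_refresh = now
        return self.queue.pop(0) if self.queue else None
```

The throttle keeps the flow stable between refreshes, while each refresh lets the playlist adapt to the individual's most recent data.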
In some embodiments, a recommended action is weighted more heavily compared to other actions based on how other coaching recipients responded to the recommended action. In one example, a coaching service can vary content objects of a playlist for a coaching recipient. For example, the coaching service can build a library of coaching content into itemizable “content cards.” The content cards can be organized into decks (also referred to as pillars) that are related to a coaching strategy. When a content card is distributed, the coaching service can monitor the coaching recipient's interaction with that particular content card and select similar content cards in the same deck to maintain the coaching recipient's attention. When a coach chooses to send a content card to the coaching recipient, the coaching service can intervene to prompt the coach with suggested content cards in accordance with a coaching strategy. As used herein, a “content card” may refer to a content object that includes text, images, audio, video snippets, and/or other media. Each content card is designed to educate a coaching recipient in accordance with the objective of a coaching strategy. Content cards can be practical and targeted at behavioral changes, like what to eat and what not to eat. The content cards can be tagged with different keywords or other characteristics that can be used for identifying content cards. A conventional educational segment of a coaching program is also burdensome because it is time-consuming. For example, a conventional coaching program may include a sequence of daily hour-long videos that are each followed by interactions with a coach to discuss the videos. In contrast to conventional educational content of coaching services, content cards divide an educational program into discrete snippets. The snippets spare a coaching recipient from the cognitive burden of longer educational segments. For example, a snippet of a 2-hour educational video segment may be only a few minutes or seconds long. The use of snippets increases the flexibility and variability of a content flow. For example, a segment for diabetes management may include a 60-minute audio lecture that is burdensome for a coaching recipient to consume. By dividing the 60-minute audio segment into dozens of snippets that vary between 30 seconds and 2 minutes, coaching recipients can listen to different combinations of the content cards over an extended period. Further, the disclosed embodiments can pinpoint a content card or combination of content cards that has a greater likelihood of improving the effectiveness of coaching. To aid in understanding, the disclosed embodiments describe an improved coaching service for diabetes management. However, the disclosed technology can be applied to any coaching service or any service that distributes content objects. A diabetes management program can include a prescription for medications and a coaching service for deploying educational content. For example, a doctor can conduct a survey to assess a patient's lifestyle and administer tests to determine a running average of blood-glucose levels (BGLs). In combination with other factors, a coaching service can formulate a diabetes management program to deploy educational content. In practice, a coaching service for diabetes management involves varying amounts of educational information. A coaching service may include 12 daily videos that are each 60 minutes long.
In conjunction with the coaching service, the coaching recipient may be given treatments updated with the active assistance of a coach. The burdensome nature of this form of coaching increases the risk that coaching recipients will simply quit, thereby increasing the likelihood of non-adherence, which is substantially more dangerous than quitting an education program alone. The disclosed coaching service is a computer-implemented technology that can help coaching recipients manage diabetes. For example, a mobile application (“app”) for self-managing diabetes may include a coaching algorithm such as a chatbot implemented as a virtual or simulated coach. Virtual coaching involves the use of an automated communication device or service, such as a chatbot, that can engage a coaching recipient in a simulated conversation via a messaging mechanism of a mobile portal or web-based portal on a routine or regular basis. In some embodiments, a coaching service can collect physiological and contextual information including coaching recipient activity (e.g., metabolic activity/exercise or taking of medication, eating of a particular diet, real-time health/activity state from mobile/wearable sensors, self-reported health/activity state), external factors (e.g., longer daylight, average temperature, season, geographical altitude, pollution level, environmental state from mobile/wearable sensors), or coaching recipient profile information (e.g., age, gender, genotype or phenotype information) to improve a content selection algorithm or adjust determined results. As used herein, a “coach” may refer to a computer-implemented technique for automating coaching processes via a computing device that encourages a user of the computing device to adhere to a given protocol in order to achieve a goal of that protocol. In one example, a coach is an implementation of a chatbot or any automated or semi-automated communications mechanism or device that can communicate with a coaching recipient via a local or network portal. In another example, a coach is a human who uses a computing device to communicate with individuals of a coaching service. In some embodiments, a virtual coach could be completely automated to function like a human being. As such, the user of a computing device can engage in a simulated natural conversation with the virtual coach. In some embodiments, a virtual coach operates in accordance with a set of rules that are customized for a particular user, a particular type of user, a group of users, etc. In another example, a virtual coach could be partially automated such that a person could influence the way the virtual coach operates live (e.g., in real-time or near real-time) while the virtual coach is engaged with a user. As used herein, a “user” refers to an individual or entity that interacts with content objects of a coaching service via a computing device. For example, a diabetic patient can manage his or her diabetes with a virtual coaching service by consuming educational content. In another example, the user is a coach who interacts with a coaching recipient over a coaching platform to coach the coaching recipient with content objects. A coach can use a messaging mechanism to improve engagement with a coaching recipient and to share various forms of content objects. Examples of a messaging mechanism include a chat messenger, SMS text, or other input mechanisms that can be used to increase engagement by sharing educational content.
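Content cards such as those shared through these messaging mechanisms can be modeled simply. The following Python sketch, with assumed field names and an assumed tag-overlap similarity rule, illustrates how similar cards from the same deck might be suggested to maintain the coaching recipient's attention; none of these names come from the disclosure.

```python
# Hypothetical content-card model and similar-card suggestion.
from dataclasses import dataclass, field

@dataclass
class ContentCard:
    card_id: str
    deck: str                      # e.g., a pillar of the coaching strategy
    tags: set = field(default_factory=set)
    media_url: str = ""            # text/image/audio/video snippet

def similar_cards(engaged_card, library, limit=3):
    """Suggest cards from the same deck, ranked by keyword-tag overlap,
    after a coaching recipient engages well with a card."""
    candidates = [c for c in library
                  if c.deck == engaged_card.deck
                  and c.card_id != engaged_card.card_id]
    candidates.sort(key=lambda c: len(c.tags & engaged_card.tags),
                    reverse=True)
    return candidates[:limit]
```

In this sketch the deck constraint keeps suggestions within one coaching strategy, while the tag overlap approximates "similar" content; a deployed service would presumably also weight monitored engagement data.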
In some embodiments, a coach can engage a user in a conversation on a computing device and identify effective content objects for the coaching recipient. FIG. 1 illustrates a content flow including a communication exchange between a coaching recipient, using a computing device 100, and the coaching recipient's coach during an active coaching session. The coach sends the coaching recipient content objects while being engaged in an SMS text conversation. As illustrated, the computing device 100 is a smartphone with a messaging portal 102 that includes comments from the coach 104-1 and 104-2 and from the coaching recipient 106 engaged in a conversation. Although embodied on a smartphone, the messaging portal 102 can run on any computing device that allows the user to obtain content objects which, in this example, are linked video snippets 108-1 and 108-2. The messaging portal 102 can be included in an app or be part of an operating system (OS) of a smartphone or other computing device. The messaging portal 102 can receive messages from the coach to prompt the coaching recipient to consume content objects. In this example, the coach shares links to the video snippets 108-1 and 108-2 to help the patient learn to manage a condition. The sharing of content objects can vary by frequency, type, and the amount needed to achieve the desired outcome. FIG. 2 is a block diagram that illustrates a system 200 that can implement a coaching service. The system 200 can dynamically change a content flow of the coaching service. The system 200 includes components such as coaching servers 202 that run a content engine, user devices 204, and data source devices 206 that collect information used to dynamically change the content flow. The components are all interconnected over a network 208 such as the Internet. From the perspective of the user devices 204 (also referred to individually as a user device 204), content flows can be embodied as media playlists that can be played on the user devices 204 by advancing through a respective ordered series of content objects to satisfy a coaching protocol (e.g., objective). A particular user device 204 can play a first content object that was preselected from among multiple content objects to play in a position in the series of content objects. Rather than rendering a next content object that was preselected for the series of content objects, a substitute content object is played. The substitute content object can be selected responsive to information collected dynamically from different sources so as to satisfy a coaching protocol. From the perspective of the coaching servers 202, the coaching service is administered to facilitate coaching through the user devices 204 by one or more coaches. Each user can access the coaching service over a respective user device 204. The coaching servers 202 can cause each user device 204 to play a first portion of the prearranged content in accordance with a respective coaching protocol for each coaching recipient. The coaching service can then dynamically select a respective second portion of content for each coaching recipient. Each respective second portion of content is configured to substitute for a next portion of the prearranged content. The coaching servers 202 can then cause each of the user devices 204 to play the respective second portion of content for each coaching recipient in accordance with the coaching protocol for that coaching recipient. The network 208 may include any combination of private, public, wired, or wireless portions.
The data or information communicated over the network 208 may be encrypted or unencrypted at various locations or along different portions of the network 208. Each component of the system 200 may include combinations of hardware and/or software to process data or information, perform functions, communicate over the network 208, and the like. For example, any component of the system 200 may include a processor, memory or storage, a network transceiver, a display, OS and application software (e.g., for providing a user interface), and the like. Other components, hardware, and/or software included in the system 200 that would be well known to persons skilled in the art are not shown or discussed herein for the sake of brevity. The user devices 204 can be used to interact with the system 200. Examples of user devices 204 include smartphones (e.g., APPLE IPHONE, SAMSUNG GALAXY, NOKIA LUMIA), tablet computers (e.g., APPLE IPAD, MICROSOFT SURFACE), computers (e.g., APPLE MACBOOK, LENOVO THINKPAD), and any other device that is capable of exchanging data with the coaching servers 202 over the network 208. The coaching servers 202 may execute a coaching service on any number of server computers that can operate a content engine. The coaching servers 202 can store algorithms to dynamically change content flows of a coaching segment. For example, an algorithm may include a combination of rules for determining whether a content object of a content flow should change. The data source devices 206 may include any number of servers or other computing resources that can collect, store, and/or provide data or information related to content objects to the coaching servers 202 for use in determining whether to change a content flow. The data source devices 206 may include any source of healthcare-related information. For example, the data source devices 206 may include providers such as medical facilities, private offices, or devices administered by healthcare professionals. In some embodiments, the data or information may include at least portions of medical records. FIG. 3 is a block diagram that illustrates functional components of a coaching service. A coaching platform 300 (“platform 300”) can include components or modules that collectively operate to perform a process for a coaching service. As used herein, a “component” or “module” may refer to a part or independent unit of hardware and/or software that performs one or more distinct functions. In some instances, a module is self-contained, separable, and/or interchangeable relative to other modules. As shown, the platform 300 includes one or more processors 302, a communication module 304, a messaging module 306, a learning module 308, a content engine 310, and storage modules 312. Other embodiments of the platform 300 may include some or all of these modules or components, along with other modules and/or components that are within the scope of the disclosure or known to persons skilled in the art but not shown herein for the sake of brevity. The processor(s) 302 can execute modules from instructions stored in the storage modules 312, which can be any computing device or mechanism capable of storing information. The communication module 304 may manage communications among components of the platform 300 and/or between the platform 300 and another computing device. For example, the communication module 304 can facilitate communication of user inputs or contextual information related to a coaching recipient's coaching experience.
The received inputs or information may be wirelessly uploaded by the user's computing device (e.g., the user device 204) or other devices (e.g., the data source devices 206) over a network (e.g., the network 208) to a server computer (e.g., the coaching servers 202). The communication module 304 facilitates the exchange of communications between a user device and the content engine 310. Further, the communication module 304 may transmit search results to a computing device associated with a coaching recipient or the coaching recipient's coach. The user inputs or contextual information communicated over the communication module 304 can be stored in the storage modules 312, one or more particular storage modules (e.g., storage modules 312-1 through 312-N), a remote storage accessible to the platform 300, or some combination thereof. The messaging module 306 can generate a messaging interface that allows a user (e.g., a coaching recipient) to interact with content objects of a content flow. The content engine 310 includes the underlying logic used to decide when and which content objects of a content flow to change. In some embodiments, the user input, contextual information, and/or values extracted therefrom can be stored in the storage modules 312 along with the information used by the content engine 310. In this way, the content engine 310 can improve the recommended content objects for a coaching segment. In some embodiments, the learning module 308 can utilize the user inputs and/or contextual information to improve the coaching platform 300. For example, the learning module 308 can aggregate collected user inputs and contextual information from numerous users associated with numerous coaching recipients, and process those collected inputs or information in accordance with machine-learning algorithms to train the content engine 310. Examples of machine-learning algorithms/techniques include Naïve Bayes classifier algorithms, K-means clustering algorithms, support vector machine algorithms, linear regression, logistic regression, and artificial neural networks. The coaching platform 300 can also collect contextual information (e.g., real-time health/activity state from mobile/wearable sensors, self-reported health/activity state, environmental state from mobile/wearable sensors, etc.) to help or improve the search algorithm. Although not shown or described for the sake of brevity, the coaching platform 300 includes modules that ensure compliance with privacy settings and data security. FIG. 4 is a block diagram that illustrates an information pipeline 400 that communicates data from various sources to a content engine 402, which can formulate content flows and can dynamically change the content flow based on the collected data. The pipeline 400 obtains data and information from various diverse sources for the content engine 402. The pipeline 400 represents one or more communication channels (e.g., the network 208) and devices (e.g., the user devices 204 or data source devices 206). Examples of the diverse sources (e.g., the data source devices 206) illustrated in FIG. 4 include medical sources 404, a coaching recipient's location information 406, a physiological monitor 408, a motion tracker 410, monitoring devices at the coaching recipient's home 412, and virtually any other computing devices such as internet of things (IoT) devices 414 that can communicate useful information to the content engine 402. For example, the medical sources 404 can include electronic medical records (EMRs) that describe a coaching recipient's medical history and prescriptions.
Examples of the medical sources 404 include hospitals, clinics, pharmacies, or the coaching recipients or medical providers themselves. For example, a coaching recipient can input medical information into an application on a mobile phone when engaged in a discussion with a coach about diabetes management. In addition to the user inputs provided by the user of a computing device, contextual information can be derived from conversations during a coaching session. The medical information can include utilization data that indicates how often a coaching recipient sought medical assistance or experienced an emergency. Other examples of the medical sources 404 include coaching recipient-reported data from surveys or response patterns from other similarly situated coaching recipients. Examples of the location information 406 include a coaching recipient's location, which could be determined by the GPS receiver of the coaching recipient's smartphone. The location information 406 can be used to determine, for example, whether the coaching recipient visited a restaurant or a gym. If so, a coach can engage the coaching recipient to obtain more details about what the coaching recipient ate at the restaurant or the exercise that the coaching recipient participated in while at the gym. This contextual information can be used by the content engine 402 to determine whether a change in a content flow is required and the degree of the change necessary to manage diabetes. An example of the physiological monitor 408 is a continuous glucose monitor (CGM) that can continuously collect BGLs of a coaching recipient. The physiological monitor 408 can be worn by the coaching recipient to monitor a physiological parameter continuously throughout the day. Any physiological monitoring device that collects physiological parameter values that are indicative of a condition or useful for managing a condition could be used by the content engine 402 to determine whether a change in a content flow is required and to determine the degree of the required change. Examples of contextual information from the motion tracker 410 could include data or information about the user's activities such as whether the user is exercising, the duration and rigor of the exercise, and related physiological indicators of the user such as heart rate. This fitness information can be used alone or in combination with other contextual information to influence the outcome of the content engine 402. Examples of contextual information obtained by monitoring the coaching recipient's home 412 can include information from intelligent appliances that monitor the user's activities. For example, a smart refrigerator can detect the frequency with which a coaching recipient opens the refrigerator and alert the content engine 402 to change a content flow in response to this activity. In another example, the home 412 can include a virtual assistant such as the AMAZON ECHO, which uses natural language processing to match user text and voice inputs to commands. Examples of the IoT devices 414 include any computing devices with sensors that can capture contextual information (e.g., environmental sensors) and that can connect over a network to the content engine 402. The examples shown in FIG. 4 are not meant to be limiting. Rather, the content engine 402 can process user inputs or contextual information from any device capable of generating or capturing the inputs or contextual information and communicating it to the content engine 402.
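The disclosure leaves the content engine's decision logic abstract. The following Python sketch shows one hypothetical way a combination of rules over the pipeline data described above (e.g., CGM readings plus location context) could gate a content-flow change; every name and threshold here is an illustrative assumption.

```python
# Hypothetical rule combination for the content engine.

def bgl_rule(context):
    """Flag a change when recent CGM blood-glucose readings trend high."""
    readings = context.get("bgl_readings", [])   # mg/dL from a CGM
    return len(readings) >= 3 and sum(readings[-3:]) / 3 > 180

def location_rule(context):
    """Flag a change when location data shows repeated fast-food visits."""
    return context.get("fast_food_visits_this_week", 0) >= 2

RULES = [bgl_rule, location_rule]

def should_change_content(context):
    """Combine the rules; any satisfied rule triggers a content-flow change."""
    return any(rule(context) for rule in RULES)

def degree_of_change(context):
    """Scale the degree of the change by how many rules fired (0.0 to 1.0)."""
    return sum(rule(context) for rule in RULES) / len(RULES)
```

A fuller engine would presumably draw on many more sources (motion tracker, home devices, IoT sensors) and weight them, but the any/degree split mirrors the "whether a change is required" and "degree of the required change" decisions described above.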
In some embodiments, user inputs or contextual information can be collected by the pipeline 400 continually (e.g., periodically, hourly, daily) or on demand. For example, a virtual coaching service can administer a messaging portal that engages a coaching recipient in simulated conversations periodically to receive inputs. The user inputs or contextual information may indicate an ongoing severity of symptoms experienced by the coaching recipient. In some embodiments, these user inputs and contextual information can be used to update the coaching recipient's profile. In some embodiments, the user inputs or contextual information can be used for compliance monitoring. For example, a mobile app may prompt the user to input whether the coaching recipient is complying with a desired behavior, such as regularly exercising. Tracking a coaching recipient's compliance in combination with data about the coaching recipient's outcomes can be used to determine whether a content flow for diabetes management is effective at managing the coaching recipient's diabetes. FIG. 5 is a flow diagram that illustrates a method for varying a next content object of a content flow according to some embodiments of the present disclosure. In step 502, a coaching service prepares a content flow for a user of a computing device who subscribes to the coaching service. The content flow includes multiple content objects such as video snippets that are ordered in a sequence to provide a coaching experience for the user. In another embodiment, the coaching service can select the content flow from among multiple content flows that are available for users. Each of the content flows can have a different arrangement of different content objects. The content flows can have the same coaching protocol but be customized for different types of users. In step 504, the coaching service can invoke (e.g., start, continue) a content flow as part of a coaching segment. The content flow includes an ordered sequence of content objects (e.g., media objects, media snippets). The content flow is arranged in accordance with a coaching protocol for coaching a user of a computing device. For example, the user can be a coaching recipient of a health-related coaching service that the coaching recipient accesses on a mobile phone. The coaching protocol of a coaching recipient can include weight management as part of a diabetes management coaching service. In step 506, the coaching service can cause the computing device to render a first content object of the ordered sequence of content objects. The first content object is preselected and prepositioned in the ordered sequence of content objects before being caused to render on the computing device. The content flow in its entirety can be preset for a user as part of a coaching segment. For example, the content flow can include a series of video snippets to coach a coaching recipient on eating habits as part of a diabetes management segment. Hence, the content flow is based on information associated with the user such as the user being diabetic. In step 508, the coaching service dynamically selects and presents a second content object to replace a next content object of the ordered sequence of content objects. Like the first content object, the next content object was preselected and prepositioned in the ordered sequence of content objects based on information associated with the user. The second content object is different from the next content object.
In some embodiments, the second content object is selected in response to an indication to continue the content flow. In one example, the user is a patient, the coaching protocol is for managing a condition of the patient, and the second content object is dynamically selected based on a monitored physiological parameter of the patient such as BGLs obtained from a CGM worn by the coaching recipient. For example, the first content object may be a video snippet including an introduction to diabetes management. The next content object could be a video snippet about engaging in exercise to control weight. However, given data indicating that a coaching recipient has recently frequented fast food restaurants, obtained via a location tracking application on the coaching recipient's mobile phone, the next content object may be swapped for a different video snippet about making suitable food choices to control weight. The content flow can vary in different ways to satisfy a coaching protocol. For example, FIG. 6 is a block diagram that illustrates a dynamic modification of a content flow with variable next content objects. The content flow 600 is functionally equivalent to an index of ordered elements that are rendered to deliver a coaching segment. In the illustrated embodiment, the elements of the content flow 600 are content objects that are played on a computing device. The content flow 600 includes an ordered series of content objects 602-1 through 602-5. Each content object and its respective position in the order of content objects relative to each other are preselected based on information about the user such as a medical condition, prescription, activity, and/or demographic information. The coaching service can create the content flow 600 for the user, or an existing content flow can be selected for a user. In one example, the content object 602-2 is dynamically inserted between the content objects 602-1 and 602-3 to become a next content object after the content object 602-1 in the series of content objects of the content flow 600. In this case, the number of content objects of the content flow 600 increases by one to include the content object 602-2. In another example, the content object 602-2 is the next content object and is replaced with a different content object from the available content objects 604. As such, the number of content objects of the content flow 600 does not change. In another example, the content object 602-4 is a next content object that is removed from the content flow 600. As such, the number of content objects of the content flow 600 decreases by one due to the removed next content object 602-4. In another example, the content object 602-3 is moved in the ordered series of content objects to after the content object 602-4. In yet another illustrated example, the content objects 602-3 and 602-5 are swapped such that the number of content objects of the content flow 600 does not change despite the order of the content flow changing. In some embodiments, a substitute content object is selected by a coach of the user. For example, the coaching service can dynamically select a variety of alternative content objects and present those alternatives as recommendations for the coach to select during a coaching exercise. The coaching service then receives an indication that the coach selected the substitute content object for the coaching recipient. Referring back to FIG. 5, in step 510, the coaching service advances the content flow to the second content object in lieu of the next content object and in accordance with the coaching protocol.
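The FIG. 6 modifications amount to ordinary list operations on the playlist index. A minimal Python sketch follows; the content-object identifiers echo the figure's reference numerals, and the substitute identifiers are placeholders rather than anything defined by the disclosure.

```python
# Minimal sketch of the FIG. 6 content-flow mutations on a playlist index.

flow = ["602-1", "602-3", "602-4", "602-5"]

# Insert: 602-2 becomes the next object after 602-1 (flow grows by one).
flow.insert(flow.index("602-1") + 1, "602-2")

# Replace: swap the next object for one drawn from the available objects 604.
available_604 = ["alt-A", "alt-B"]
flow[flow.index("602-2")] = available_604[0]      # length unchanged

# Remove: drop a next content object (flow shrinks by one).
flow.remove("602-4")

# Move: relocate 602-3 later in the ordered series.
flow.append(flow.pop(flow.index("602-3")))

# Swap: exchange two objects; length and membership are unchanged.
i, j = flow.index("alt-A"), flow.index("602-5")
flow[i], flow[j] = flow[j], flow[i]
```

Representing the content flow as a plain ordered index is one design choice consistent with the description; a deployed service would attach media references and per-object metadata to each entry.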
The dynamic selection of the content object not only aids in providing relevant and timely content but can also help avoid premature termination of the content flow by the user of the computing device. For example, if the user repeatedly restarts playback of a content object but fails to complete it, the coaching service can replace that content object with another content object that facilitates satisfying the coaching protocol. A content flow is not limited to a playlist of a series of content objects on a computing device. Instead, any interface that facilitates a coaching process can be utilized. A content flow can be text-based, or be voice-based where audio content is played over an audio channel. In some embodiments, a content flow can include a combination of different forms of content objects that can be played on one or more computing devices. FIG. 7 is a block diagram that illustrates an example computing device (e.g., the computing device 100) in which aspects of the disclosed technology can be embodied. For example, the coaching platform 300 of FIG. 3 may be hosted on the computing device 700. The computing device 700 may include generic components and/or components specifically designed to carry out the disclosed technology. The computing device 700 may be a standalone device or part of a distributed system (e.g., the system 200 of FIG. 2) that spans networks, locations, machines, or combinations thereof. For example, components of the computing device 700 may be included in or coupled to a system-on-chip (SOC), a single-board computer (SBC) system, a desktop or laptop computer, a kiosk, a mainframe, a mesh of computer systems, or combinations thereof. In some embodiments, the computing device 700 can operate as a server device or a client device in a client-server network environment, or as a peer machine in a peer-to-peer system. In some embodiments, the computing device 700 may perform one or more steps of the disclosed embodiments in real-time, near real-time, offline, by batch processing, or combinations thereof. The computing device 700 includes a processing subsystem 702 that includes one or more processors 704 (e.g., central processing units (CPUs), application-specific integrated circuits (ASICs), and/or field-programmable gate arrays (FPGAs)), a memory controller 706, memory 708 that can store software 710, and a peripherals interface 712. The memory 708 may include volatile memory (e.g., random-access memory (RAM)) and/or non-volatile memory (e.g., read-only memory (ROM)). The memory 708 can be local, remote, or distributed. The computing device 700 can also include a clock subsystem 714 that controls a timer for use in some embodiments. The components of the computing device 700 are interconnected over a bus (not shown) operable to transfer data between hardware components. The peripherals interface 712 is coupled to one or more external ports 716 which can connect to an external power source, for example. The peripherals interface 712 is also coupled to an I/O subsystem 718. Other components coupled to the peripherals interface 712 include communications circuitry 720, audio circuitry 722 for a speaker 724 and a microphone 726, an accelerometer 728, a GPS receiver 730 (or global navigation satellite system (GLONASS) or other global navigation system receiver), and other sensors (not shown). The GPS receiver 730 is operable to receive signals concerning the geographic location of the computing device 700.
The accelerometer 728 can be operable to obtain information concerning the orientation (e.g., portrait or landscape) of the computing device 700. The I/O subsystem 718 includes a display controller 732 operative to control a touch-sensitive display system 734, which further includes the touch-sensitive display of the computing device 700. The I/O subsystem 718 also includes an optical sensor(s) controller 736 for one or more optical sensors 738 of the computing device 700. The I/O subsystem 718 includes other components (not shown) to control physical buttons. The communications circuitry 720 can configure the antenna 740 of the computing device 700. In some embodiments, the antenna 740 is structurally integrated with the computing device 700 (e.g., embedded in the housing or display screen) or coupled to the computing device 700 through the external ports 716. The communications circuitry 720 can convert electrical signals to/from electromagnetic signals that are communicated by the antenna 740 to networks 742 (e.g., the network 208 of FIG. 2) or other devices. For example, the communications circuitry 720 can include radio frequency (RF) circuitry that processes RF signals communicated by the antenna 740. The communications circuitry 720 can include circuitry for performing well-known functions such as an RF transceiver, one or more amplifiers, a tuner, oscillators, a digital signal processor, a CODEC chipset, a subscriber identity module (SIM card or eSIM), and so forth. The communications circuitry 720 may communicate wirelessly via the antenna 740 with the networks 742 (e.g., the Internet, an intranet, and/or a wireless network, such as a cellular network, a wireless local area network (LAN), and/or a metropolitan area network (MAN)) or other devices. The software 710 can include an OS software program, application software programs, and/or modules (e.g., the communication module 304, messaging module 306, learning module 308, content engine 310, and storage modules 312 of FIG. 3). For example, a GPS module can determine the location of the computing device 700 based on the GPS signals received by the GPS receiver 730. The GPS module can provide this information to components of the computing device 700 for use in various applications (e.g., to provide location-based contextual information). A software program, when referred to as “implemented in a computer-readable storage medium,” includes computer-readable instructions stored in the memory (e.g., the memory 708). A processor (e.g., the processors 704) is “configured to execute a software program” when at least one value associated with the software program is stored in a register that is readable by the processor. In some embodiments, routines executed to implement the disclosed embodiments may be implemented as part of OS software (e.g., MICROSOFT WINDOWS® and LINUX®) or a specific software application, component, program, object, module, or sequence of instructions referred to as “computer programs.” Computer programs typically comprise one or more instructions set at various times in various memory devices of the computing device 700, which, when read and executed by the processor 704, will cause the computing device 700 to execute functions involving the disclosed embodiments. In some embodiments, a carrier containing the aforementioned computer program product is provided. The carrier is one of an electronic signal, an optical signal, a radio signal, or a non-transitory computer-readable storage medium (e.g., the memory 708).
Operation of the memory 708, such as a change in state from a binary one (1) to a binary zero (0) (or vice versa), may comprise a visually perceptible physical change or transformation. The transformation may comprise a physical transformation of an article to a different state or thing. For example, a change in state may involve accumulation and storage of charge or a release of stored charge. Likewise, a change of state may comprise a physical change or transformation in magnetic orientation or a physical change or transformation in molecular structure, such as a change from crystalline to amorphous or vice versa. Aspects of the disclosed embodiments may be described in terms of algorithms and symbolic representations of operations on data bits stored in memory. These algorithmic descriptions and symbolic representations generally include a sequence of operations leading to a desired result. The operations require physical manipulations of physical quantities. Usually, though not necessarily, these quantities take the form of electric or magnetic signals that are capable of being stored, transferred, combined, compared, and otherwise manipulated. Customarily, and for convenience, these signals are referred to as bits, values, elements, symbols, characters, terms, numbers, or the like. These and similar terms are associated with physical quantities and are merely convenient labels applied to these quantities. The computing device 700 may include other components that are not shown or further discussed herein for the sake of brevity. One having ordinary skill in the art will understand any hardware and software that is included but not shown in FIG. 7. While embodiments have been described in the context of fully functioning computing devices, those skilled in the art will appreciate that the various embodiments are capable of being distributed as a program product in a variety of forms and that the disclosure applies equally regardless of the particular type of machine or computer-readable media used to actually effect the embodiments. Remarks The embodiments set forth above represent the necessary information to enable those skilled in the art to practice the embodiments and illustrate the best mode of practicing the embodiments. Upon reading the foregoing description in light of the accompanying Figures, those skilled in the art will understand the concepts of the disclosure and will recognize applications of these concepts that are not particularly addressed herein. It should be understood that these concepts and applications fall within the scope of the disclosure and the accompanying claims. The purpose of the terminology used herein is only for describing embodiments and is not intended to limit the scope of the disclosure. Reference to “one embodiment” or “an embodiment” means that a particular feature, structure, or parameter described in connection with the embodiment is included in at least one embodiment of the disclosure. The appearances of the phrase “in one embodiment” in this disclosure are not necessarily referring to the same embodiment, nor are separate or alternative embodiments necessarily mutually exclusive of other embodiments. Moreover, various features are described that may be exhibited by some embodiments and not by others. Similarly, various requirements are described that may be requirements for some embodiments and not for other embodiments.
As used herein, unless specifically stated otherwise, terms such as “processing,” “computing,” “calculating,” “determining,” “displaying,” “generating,” or the like refer to actions or processes of an electronic device that manipulates and transforms data, represented as physical (electronic) quantities within the computer's memory or registers, into other data similarly represented as physical quantities within the device's memory, registers, or other such storage, transmission, or display devices. When used in reference to a list of multiple items, the word “or” is intended to cover all of the following interpretations: any of the items in the list, all of the items in the list, and any combination of items in the list. Unless the context clearly requires otherwise, throughout the description and the embodiments, the words “comprise,” “comprising,” and the like are to be construed in an inclusive sense, as opposed to an exclusive or exhaustive sense; that is to say, in the sense of “including, but not limited to.” As used herein, the terms “connected,” “coupled,” or any variant thereof mean any connection or coupling, either direct or indirect, between two or more elements; the coupling of or connection between the elements can be physical, logical, or a combination thereof. For example, two components may be coupled directly to one another or via one or more intermediary channels or components. As another example, devices may be coupled in such a way that information can be passed therebetween, while not sharing any physical connection with one another. Where context permits, words in the Detailed Description using the singular or plural form may also include the plural or singular form, respectively. The foregoing description of various embodiments of the described subject matter has been provided for the purposes of illustration and description. It is not intended to be exhaustive or to limit the described subject matter to the precise forms disclosed. Many modifications and variations will be apparent to one skilled in the art. Embodiments were chosen and described in order to best describe the principles of the invention and its practical applications, thereby enabling those skilled in the relevant art to understand the described subject matter, the various embodiments, and the various modifications that are suited to the particular uses contemplated. Although the Detailed Description describes certain embodiments and the best mode contemplated, the technology can be practiced in many ways no matter how detailed the Detailed Description appears. Embodiments may vary considerably in their implementation details while still being encompassed by this disclosure. Particular terminology used when describing certain features or aspects of various embodiments should not be taken to imply that the terminology is being redefined herein to be restricted to any specific parameters, features, or aspects of the technology with which that terminology is associated. In general, the terms used in the following description should not be construed to limit the technology to the specific embodiments disclosed in the specification, unless those terms are explicitly defined herein. Accordingly, the actual scope of the technology encompasses not only the disclosed embodiments but also all equivalent ways of practicing or implementing the embodiments. | 48,508
11862035 | DETAILED DESCRIPTION Disclosed example methods to record a weld include capturing first images of a welding operation with an optical sensor, storing the first images in a memory in communication with the optical sensor, and identifying a transmission event corresponding to the welding operation. Disclosed example methods further include, in response to the identifying of the transmission event: generating a record of the welding operation using the first images; and transmitting the record of the welding operation to a weld system controller and data storage. As used herein, the term welding operation includes both actual welding operations (e.g., operations resulting in a joining, such as welding or brazing, of two or more physical objects, an overlaying, texturing, and/or heat-treating of a physical object, and/or a cut of a physical object) and simulated or virtual welds (e.g., a visualization of a weld without a physical weld occurring). As used herein, the term “wearable device” includes any form factor that is designed or intended to be worn by a person (e.g., personal protective equipment such as helmets, face guards, apparel, or the like; personal devices such as head-mounted electronic devices, wrist-mounted devices, body-mounted devices, devices worn around the neck, or the like) and any form factor that, while not necessarily designed or intended to be worn by a person, may be adapted to be worn by a person (e.g., smartphones, tablet computers, and/or other digital processing devices). Motivations for aspects of this disclosure include production weld data monitoring so that a production supervisor can monitor the productivity and quality of both automated and manual welding operations. Data monitoring may comprise collecting data in welding equipment, sending it to the cloud, and retrieving it by web browser. Such data may include, for example, a video recording of a welding operation, which may be useful to fabrication shop supervisors, quality assurance (QA) or quality control (QC) personnel, maintenance personnel, training personnel, and/or the like. In manual welding quality control, video recordings may, for example, be valuable in Failure Mode and Effects Analysis (FMEA) in lean manufacturing. Motivations for aspects of this disclosure also include providing weld operator training using vision equipment to observe how a student positions and moves the torch while welding. Motivations for aspects of this disclosure also include providing support and service. If, for example, a welding expert is asked to help a human operator, it is a challenge, due to physical constraints, to squeeze in two welding helmets for an observer to view the arc together with the operator. It is also a challenge to view the arc at a remote location. In some example methods, capturing the first images includes at least one of: recording high dynamic range images, recording high dynamic range video, recording wide dynamic resolution images, recording wide dynamic resolution video, recording time-of-flight images, recording three-dimensional images with a structured light camera having three-dimensional depth perception, or recording images at a high frame rate between 500 frames per second and 10,000 frames per second. In some such examples, capturing the first images includes using the optical sensor with a logarithmic response.
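The capture-store-transmit sequence just described can be arranged in several ways; the following Python sketch shows one hypothetical arrangement. The class and method names, and the representation of a transmission event as a callback, are assumptions for illustration rather than the disclosure's implementation.

```python
# Hypothetical sketch of the capture/record/transmit flow described above.
import time

class WeldRecorder:
    def __init__(self, sensor, transport):
        self.sensor = sensor        # assumed: exposes capture() -> image
        self.transport = transport  # assumed: exposes send(record)
        self.images = []

    def capture_loop(self, welding_active):
        """Capture and store timestamped images while the weld runs."""
        while welding_active():
            self.images.append((time.time(), self.sensor.capture()))

    def on_transmission_event(self):
        """Invoked on, e.g., a transmission request from the weld
        controller or a detected end of the welding operation."""
        record = {"images": self.images, "ended": time.time()}
        self.transport.send(record)  # to weld system controller/storage
        self.images = []
```

The sensor and transport objects are deliberately duck-typed here; in practice they would wrap the helmet's optical sensor and its wireless communications interface.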
In some disclosed example methods, identifying the transmission event includes receiving a transmission request from the weld controller and/or detecting an end of the welding operation via the optical sensor or a second sensor. In some example methods, generating the record includes generating a video or set of images from the first images. Some example methods include receiving a synchronization signal, determining a time stamp based on the synchronization signal, and associating the time stamp with the record of the welding operation. In some examples, capturing the first images is in response to the synchronization signal. Some example methods further include detecting a beginning of the welding operation using the optical sensor or a second sensor, where the capturing of the first images is in response to the detecting. In some examples, the method is implemented in at least one of a mobile communications device or a welding helmet. Some disclosed example methods to record a weld include capturing first images of a welding operation with an optical sensor on a head mounted device, storing the first images in a memory, and monitoring a measurement of a welding parameter associated with the welding operation. Example methods also include, in response to identifying that the measurement of the welding parameter has satisfied a threshold parameter value: recording subsequent second images with the optical sensor, storing the second images obtained via the optical sensor in the memory, and storing second measurements of the welding parameter in the memory, where the second measurements correspond to the second images; and generating a record of the welding operation by appending the first images to the second images and appending the second measurements to the second images. In some examples, the recording of the first images includes at least one of: recording high dynamic range images, recording high dynamic range video, recording wide dynamic resolution images, recording wide dynamic resolution video, recording three-dimensional (3D) depth map images (e.g., by a time-of-flight (ToF) camera or structured light 3D scanner), recording three-dimensional images with a structured light camera having three-dimensional depth perception, or recording images at a frame rate between 500 frames per second and 10,000 frames per second. In some example methods, the recording of the first images includes using the optical sensor with a logarithmic response. In some examples, the memory includes a circular buffer (e.g., in a linked list), and storing the first images in the memory includes replacing a first one of the first images with a second one of the first images in a first-in, first-out scheme. Some example methods further include receiving the measurement of the welding parameter via a communications interface. Some examples further include transmitting the record of the welding operation to a server. Some example methods include identifying that the welding operation has been initiated, the recording of the first images being in response to the identifying that the welding operation has been initiated. In some such examples, identifying that the welding operation has been initiated includes at least one of receiving a synchronization signal or identifying an arc via the optical sensor or a second sensor.
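A minimal sketch of the circular, first-in/first-out image buffer described above, using Python's collections.deque; the capacity value is an illustrative assumption, not a figure from the disclosure.

```python
# Circular first-in, first-out buffer for pre-trigger images.
from collections import deque

BUFFER_CAPACITY = 256  # assumed capacity; tune to memory and frame rate

first_images = deque(maxlen=BUFFER_CAPACITY)

def store_image(image):
    """Appending beyond capacity silently evicts the oldest image,
    which implements the first-in, first-out replacement scheme."""
    first_images.append(image)

def snapshot_pre_trigger_images():
    """Freeze the buffered images when a welding parameter crosses its
    threshold, so they can be prepended to the weld record."""
    return list(first_images)
```

A deque with maxlen gives constant-time appends and automatic eviction, which matches the described behavior of continuously overwriting the oldest pre-trigger images.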
Some example methods further include performing digital image processing to extract an image feature representing a characteristic of a weld made during the welding operation, comparing the characteristic to a threshold and, when the characteristic satisfies the threshold, displaying an alert on a display device indicating that the characteristic satisfies the threshold. In some example methods, the head mounted device is a welding helmet including a wireless communications device. The wireless communications device, such as a smartphone or tablet computer, may be detachably mounted to the welding helmet. Disclosed example methods to direct a weld operator using weld operator personal protective equipment (PPE) include receiving instruction information associated with a welding operation and displaying the instruction information via a display device of the PPE. Some example methods also include, after displaying the instruction information, receiving weld parameter measurements, displaying the weld parameter measurements via the display device during the welding operation, detecting that the welding operation is completed and, in response to the detecting that the welding operation is completed, presenting performance information describing a characteristic of the welding operation via the display device. Some example methods also include performing digital image processing to extract an image feature representing a characteristic of a weld made during the welding operation and displaying information representative of the characteristic via the display device. In some examples, performing the digital image processing and displaying the information representative of the characteristic is at least one of during the welding operation or after the welding operation. Some example methods further include requesting second instructions corresponding to a second welding operation in response to receiving a third instruction via a user interface of the PPE, and displaying the second instructions via the display device. In some example methods, receiving the instruction information includes receiving the instruction information from at least one of a wireless communication device or a system controller in response to transmitting a request for the instruction information via a communications interface of the PPE. Disclosed example apparatus to record a weld include an optical sensor, a storage device, a controller, and a processor. The optical sensor captures first images of a welding operation. The storage device stores the first images. The processor identifies a transmission event corresponding to the welding operation, generates a record of the welding operation using the first images in response to the identifying of the transmission event, and transmits the record of the welding operation to a server. In some example apparatus, the optical sensor is at least one of a high dynamic range image sensor, a wide dynamic range image sensor, a time-of-flight sensor, a structured light sensor, or an image sensor having a frame rate between 500 and 10,000 frames per second. Some example apparatus further include a communications interface, where the processor identifies the transmission event based on at least one of receiving a transmission request from the server or detecting an end of the welding operation via the optical sensor or a second sensor.
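The disclosure does not fix a particular image feature for the characteristic-threshold alert described above. The following sketch, using OpenCV and NumPy as an assumed toolchain, shows one plausible feature (apparent weld-pool width) and the threshold comparison; the feature choice, the threshold, and the display interface are all assumptions.

```python
# Hypothetical weld-characteristic extraction with OpenCV/NumPy.
import cv2
import numpy as np

POOL_WIDTH_LIMIT_PX = 120  # assumed threshold, in pixels

def weld_pool_width(frame_bgr):
    """Estimate the weld-pool width as the widest bright run per row."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    _, bright = cv2.threshold(gray, 200, 255, cv2.THRESH_BINARY)
    widths = (bright > 0).sum(axis=1)   # bright pixels in each row
    return int(widths.max()) if widths.size else 0

def check_frame(frame_bgr, display):
    """Compare the characteristic to the threshold and alert if exceeded.
    'display' is assumed to expose a show_alert() method."""
    width = weld_pool_width(frame_bgr)
    if width > POOL_WIDTH_LIMIT_PX:
        display.show_alert(f"Weld pool width {width}px exceeds limit")
    return width
```

Production feature extraction would be considerably more involved (segmentation, calibration from pixels to millimeters), but the structure of extract, compare, alert follows the method as described.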
Some example apparatus include a communications interface to receive a synchronization signal, where the processor captures the first images in response to the synchronization signal, and the processor further determines a time stamp based on the synchronization signal and associates the time stamp with the record of the welding operation. Disclosed head mounted devices include an optical sensor, a storage device, and a processor. The optical sensor captures first images of a welding operation. The storage device stores the first images. The processor is in communication with the storage device and executes instructions to monitor a measurement of a welding parameter associated with the welding operation and, in response to identifying that the measurement of the welding parameter has satisfied a threshold parameter value, record subsequent second images with the optical sensor, store the second images obtained via the optical sensor in the storage device, and store second measurements of the welding parameter in the storage device, the second measurements corresponding to the second images. The processor also generates a record of the welding operation by appending the first images to the second images and appending the second measurements to the second images. In some example head mounted devices, the optical sensor is at least one of a high dynamic range image sensor, a wide dynamic range image sensor, a time-of-flight sensor, a structured light sensor, or an image sensor having a frame rate between 500 and 10,000 frames per second. In some examples, the storage device includes a circular buffer, where the storage device stores the first images in the circular buffer by replacing a first one of the first images with a second one of the first images in a first-in, first-out scheme. Some example head mounted devices further include a communications interface to receive the measurement of the welding parameter. Some example head mounted devices further include a communications interface to transmit the record of the welding operation to a server. In some examples, the processor identifies that the welding operation has been initiated, and the optical sensor records the first images in response to the identifying that the welding operation has been initiated. In some examples, the processor identifies that the welding operation has been initiated by receiving a synchronization signal and/or identifying an arc via the optical sensor or a second sensor. In some examples, the head mounted device further includes a display device, and the processor performs digital image processing to extract an image feature representing a characteristic of a weld made during the welding operation, compares the characteristic to a threshold, and displays an alert on the display device when the characteristic satisfies the threshold. Disclosed example PPEs include a display device, a communications interface, and a processor. The display device displays instruction information prior to a welding operation, displays weld parameter measurements during the welding operation, and displays performance information describing a characteristic of the welding operation. The communications interface receives the instruction information and receives the weld parameter measurements. The processor executes instructions to detect a start of the welding operation, detect that the welding operation is completed, and calculate the performance information.
In such examples, the display device displays the weld parameter measurements after the start of the welding operation. In some example PPEs, the processor performs digital image processing to extract an image feature representing a characteristic of a weld made during the welding operation, where the display device displays information representative of the characteristic. Some examples further include a user interface to receive a third instruction, where the processor requests second instructions corresponding to a second welding operation in response to receiving the third instruction, and the display device displays the second instructions. To conserve power and/or reduce power consumption, disclosed examples place a video capture device in a sleep mode while the video capture device is not actively taking video. A photodiode sensitive to arc light or a low-power wireless protocol such as Zigbee may be used to signal the video capture device to wake up and begin capturing video, such as in response to a stimulus. For example, when in the sleep or low power mode, the video capture device ceases operations except for monitoring the photodiode or the Zigbee or other wireless radio to check for an incoming signal (e.g., from the welding equipment in communication with the video capture device). If a signal to start recording video is received, the wireless radio monitor generates an interrupt and/or otherwise wakes up the main control circuit. Example signals may indicate a trigger pull and/or a suspected weld anomaly (e.g., a suspected weld defect that is being formed). In some examples, a wireless (e.g., Zigbee) coordinator inside the welding equipment receives a notification of a trigger pull event and sends the signal to a wireless (e.g., Zigbee) node in a radio module of the helmet. In response, the wireless node activates a WiFi radio to enable transmission of media (e.g., video and/or audio) via higher-bandwidth protocols such as UDP, TFTP, lwIP, HTTP, and/or any other protocol. In some examples, the helmet provides the media to one or more cloud servers to store and/or process the media. In some examples, the helmet accesses a fog network to store, process, measure, and control the image data. The fog network may be implemented by one or more devices external to the helmet via edge and/or peer-to-peer networking. In some examples, the helmet stores the media (e.g., video and/or audio) in a local flash memory and/or other nonvolatile memory inside the helmet. The helmet further implements HTTP and/or FTP servers. In some examples, a smart phone within wireless communication proximity serves as an edge resource of the fog network by executing an application, an HTTP client, and/or an FTP client. The example smart phone accesses the media stored in the storage device of the helmet. In some examples, the smart phone provides storage, processing, and/or analysis capabilities. The weld equipment and/or the smart phone can be edge resources for configuration, pooling, caching, and security of videos and audios captured by the helmet. In some examples, the helmet transmits live video captured by a recording device on the helmet to a smart phone and/or computing device within wireless communication proximity using peer-to-peer networking (also referred to as point-to-point networking). The transmission of video enables others to view the welding scene even when those people do not have the ability to directly view the weld scene (e.g., the weld arc) due to physical constraints in and/or surrounding the weld scene.
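A minimal sketch of the sleep/wake behavior described above. The wake sources (photodiode level, wireless wake packet) are modeled as simple callables, and the polling details are assumptions rather than the disclosure's implementation, which describes hardware interrupts.

```python
# Hypothetical sleep/wake loop for the helmet's video capture device.
import time

def run_power_manager(photodiode_lit, radio_has_wake_packet, start_recording):
    """Stay asleep until arc light or a wireless wake signal arrives."""
    while True:
        # In sleep mode, only the wake sources are monitored.
        if photodiode_lit() or radio_has_wake_packet():
            # E.g., a trigger pull or suspected weld anomaly was signaled.
            start_recording()
            return
        time.sleep(0.01)  # low-duty-cycle polling to conserve power
```

On an actual low-power microcontroller, the polling loop would be replaced by an interrupt from the photodiode comparator or the wireless radio, as the description indicates; the sketch only illustrates the control flow.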
In some examples, the helmet includes an RTSP server, and a smart phone app and/or computing device in communication with the helmet includes an RTSP client. The helmet RTSP server uses the Real-time Transport Protocol (RTP) in conjunction with the Real-time Control Protocol (RTCP) for media stream delivery. In some examples, the helmet includes an EMI shield between a wireless antenna and the helmet wearer's head to reduce exposure of the wearer's head to RF radiation.

In some examples, the helmet includes a camera to capture images and an image recognition processor to perform operator identification and/or authorization. In some examples, the operator faces the helmet camera, and the welding system logs in the operator. For example, the welding system may execute a facial recognition process to analyze the facial features of the operator and compare the features with a database of authorized operators. In some examples, the database includes credentials for each operator to identify whether the operator is authorized (e.g., qualified according to a current welder qualification test record (WQTR), and/or approved by a supervisor of the work) to operate the corresponding weld equipment and/or to perform a specified weld task and/or type of weld task. Additionally or alternatively, the welding system may include image recognition features that recognize a code on an identification card belonging to the welder. In response to identifying a welder in the database, the welding system checks the qualification record of the identified welder for presence and/or expiration information. In some examples, while wearing the camera-equipped helmet, the operator may look at welding consumables, such as gas and wire, that bear a marker such as a QR code printed in a very large font for computer viewing at a distance (e.g., by positioning the helmet so that the item to be viewed by the camera falls within the field of view of the helmet lens). The welding system may perform image processing to identify and log in the consumables for the weld job and/or check the identified consumables against a weld procedure specification (WPS) for inconsistencies that could lead to weld defects. If such inconsistencies are identified, the welding system may alert the operator and/or other people, and/or disable the trigger on the weld torch.

In some examples, the camera on the helmet auto-focuses on an active weld operation. The camera may auto-focus by identifying locations of features representative of an arc (e.g., a brightest area in the scene) and focusing on the area(s) immediately surrounding and/or adjacent the features, which most likely include the joint and/or the electrode. In some examples, the camera also may have optics providing a large depth of field so that the camera easily achieves focus on the desired area(s). In some examples, the camera performs optical and/or digital image stabilization. The helmet may include one or more inertial measurement units (IMUs) such as multi-axis gyroscopes, multi-axis accelerometers, and/or multi-axis magnetometers to detect, encode, and/or measure movement of the helmet (e.g., turning, vibration, traveling, and shaking of the helmet as the wearer's head moves to follow the arc). Based on the measured movement, the welding system compensates for the motion by moving the lens and/or the imager using, for example, micro actuators and/or microelectromechanical systems (MEMS) such as piezoelectric crystals.
Additionally or alternatively, the welding system may implement electronic image stabilization (EIS). By using image stabilization techniques, a welder training system, such as LiveArc® sold by Miller Electric™, can use helmet mounted cameras instead of or in addition to fixed-location cameras to extract torch motion data and/or torch angularity data with respect to a welded joint. Such data is potentially beneficial for subsequent training of welders to weld on joints that are difficult or impossible to capture with cameras at a fixed location, such as 360 degree 5G position and/or 6G position pipe welding. Additionally or alternatively, a welding helmet may include sensors for a fixed-mount camera to track the motion of the helmet and use the helmet position and/or orientation to transform the images captured by the camera in the helmet.

Some example welding systems include a high dynamic range imager or image sensor array (e.g., at least 120 dB of dynamic range) and/or a native wide dynamic range imager (e.g., at least 140 dB of dynamic range) on the welding helmet. In other examples, a welding system includes a medium dynamic range (MDR) imager with at least 100 dB of dynamic range to decrease the component costs of the helmet. One example MDR imager that may be used is model MT9V024, sold by ON Semiconductor®.

In some examples, a welding helmet further includes a light source oriented to illuminate the weld scene. The lighting can be an active light source such as an LED array. To conserve battery power of the helmet, the light source can be activated automatically when the camera is taking images and determines that additional lighting is beneficial (e.g., the luminance received at the camera is less than a threshold). Additionally or alternatively, the active light source can be activated and/or deactivated by an operator interface, such as a voice command. Additionally or alternatively, the helmet may be provided with passive light sources such as a reflective exterior surface. Such a passive light source may reflect light from the arc to illuminate the welding scene. Some example helmets further include an energy harvester, such as solar cells, that captures arc light photon energy. The energy harvester may charge a battery for the control circuit to operate the camera circuitry, image processing, wireless devices, and IMUs.

In some examples, the processor uses automatic gain control (AGC) to control brightness based on the arc signals when processing captured images. AGC is also referred to as automatic exposure control or automatic brightness control. When viewing a welding arc, sudden changes in scene brightness can create difficult viewing conditions. An AGC algorithm chooses a brightness or exposure value between the brightest and darkest areas (e.g., approximately splitting the difference in brightness) to attempt to enable visualization of the entire scene. However, AGC may not provide appropriate results when viewing a welding scene, where overexposure of the arc area may be tolerable but underexposure of the joint and wire is not acceptable. Another problem with AGC in welding is that the brightness changes rapidly, for example, from little light to extremely bright light during arc start and during the transition from short circuit to breaking out an arc. While conventional algorithms use an averaging scheme and/or gradual changes in the gain over dozens of frames, such algorithms result in latency between the digitally rendered images and the actual event of arc ignition and re-ignition.
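To make the contrast concrete, the following minimal sketch shows a conventional midpoint-style AGC alongside the feed-forward alternative described next, which selects exposure directly from the sensed arc voltage. The 14V threshold is taken from the following paragraph; the function names, exposure values, and frame layout are illustrative assumptions only.

```python
def agc_midpoint_exposure(frame):
    """Conventional AGC sketch: pick an exposure target that roughly splits
    the difference between the brightest and darkest areas of the frame.
    `frame` is assumed to be a 2-D list of luminance values."""
    brightest = max(max(row) for row in frame)
    darkest = min(min(row) for row in frame)
    return (brightest + darkest) / 2.0

def feedforward_exposure(arc_voltage, arc_threshold_volts=14.0,
                         short_exposure_ms=0.05, long_exposure_ms=2.0):
    """Feed-forward sketch: select exposure immediately from the sensed arc
    voltage, with none of the multi-frame averaging latency noted above."""
    if arc_voltage > arc_threshold_volts:
        return short_exposure_ms  # arc present: short exposure reveals dark joint/wire detail
    return long_exposure_ms       # arc absent: dark scene, use a longer exposure
```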
Some disclosed example exposure controllers use arc signals from a power supply or wire feeder (e.g., via a wired or wireless data connection) as a feed-forward signal to adapt the exposure time for an optical sensor and/or image processing. Specifically, some examples use arc voltage to determine the presence or absence of an arc in the scene. If the sensed arc voltage (e.g., excluding the welding cable voltage, the electrode stickout voltage, the contact voltage between wire and contact tip, etc.) is greater than 14V, the exposure controller determines that an arc is present and, in response, reduces the exposure to reveal the details of the dark areas such as the joint and wire extension. The exposure controller may also use more aggressive image compression ratios and/or digital image filters for the comparatively brighter scenes. In contrast, when the sensed arc voltage is less than 14V, the exposure controller determines that the arc is absent and the scene is dark. In response to determining that the arc is not present, the exposure controller uses longer exposures and less aggressive image compression ratios and/or digital image filters. In some examples, the exposure controller uses arc power in addition to or instead of the arc signal as a proxy for the brightness of the arc. For example, the exposure controller may use the level of arc voltage or arc current (or the product of voltage and current, which is the arc power) to predict the brightness of the scene, thus adjusting exposure and selecting corresponding image processing algorithms and their parameters. Thus, example exposure controllers more effectively adapt to arc starts and/or stops, and/or to welding processes where the arc brightness changes quickly (e.g., at frequencies of 20 Hz to 250 Hz), such as pulse welding and short circuiting welding.

Disclosed example weld training systems include a display, a camera, a communications device to communicate with welding equipment, and a welding helmet having a view port. In disclosed example weld training systems, the welding helmet holds the camera, the communications device, and the display such that, when the welding helmet is worn by a wearer, the display is viewable by the wearer and the camera has a view through the view port, such that the display displays to the wearer images taken by the camera through the view port and displays a simulated object generated based on information received from the welding equipment via the communications device. In some examples, the communications device transmits a command to the welding equipment to cause the welding equipment to operate in a training or simulation mode. In some examples, the communications device receives a trigger signal identifying a start of a simulated weld, the display to display the simulated object in response to receiving the trigger signal. In some example weld training systems, the display, the camera, and the communications device are in a smartphone or tablet computer integral to the welding helmet. In some examples, the smartphone or tablet computer comprises a microphone and a processor. The processor recognizes a first audio command received via the microphone and begins a weld training operation in response to receiving the first audio command, including displaying the images and the simulated object to the wearer via the display.
The processor recognizes a second audio command received via the microphone and ends the weld training operation in response to the second audio command. Some example weld training systems further include a processor to execute software to provide weld training to the wearer. In some examples, the processor renders at least one of a simulated weld arc, a simulated weld bead, or a simulated weld puddle as the simulated object. In some example weld training systems, the communications device receives welding parameters from the welding equipment. Some examples further include a processor to process the images to extract a plurality of welding conditions and render the simulated object based on the welding parameters and based on the plurality of welding conditions, where the display superimposes the simulated object on the images with a position and a perspective based on the images. In some such examples, the welding conditions include at least one of a contact-tip-to-work distance, a workpiece gauge thickness, a workpiece fit-up, a torch aim with respect to a joint seam, a torch travel angle, a torch work angle, or a torch travel speed. In some such examples, the simulated object includes at least one of a simulated weld arc, a simulated weld puddle, simulated spatter, simulated fumes, or a simulated weld bead. Some examples further include a speaker to output at least one of a simulated arc sound or a simulated gas flow sound. In some examples, the weld parameters comprise at least one of a voltage setpoint, an arc length setpoint, a current setpoint, a wire feed speed setpoint, or a weld program preset. In some example weld training systems, the processor processes the images to extract a characteristic of a weld scene and renders the simulated object based further on the characteristic, where the characteristic includes at least one of a welding process type, a torch type, a torch condition, a welding consumable type, a weld joint type, a tack weld presence, a workpiece surface cleanliness, a weld fixture state, or a weld clamp state. In some example weld training systems, the communications device is configured to communicate with the weld equipment via wireless communications. Some example weld training systems include a processor to measure a first characteristic of a weld scene by extracting and analyzing features of the images, determine whether a difference between the first characteristic and a second characteristic corresponds to an unacceptable weld condition and, when the difference corresponds to the unacceptable weld condition, output an alert via the display indicating that the weld scene has the unacceptable weld condition. Some example weld training systems include a processor to analyze the images to identify objects in the images and spatial relationships between the objects, render a graphic representative of the spatial relationships, and superimpose the graphic over the images on the display. In some examples, the communications device communicates with the welding equipment to detect a start of a simulated weld operation or an end of the simulated weld operation, and the display presents the simulated object in response to the start of the simulated welding operation or removes the simulated object in response to the end of the simulated welding operation. In some examples, the camera is a high dynamic range camera and the images are high dynamic range images.
In some examples, the camera is a medium dynamic range camera and the images are medium dynamic range images. In some examples, the camera is a wide dynamic range camera and the images are wide dynamic range images. In some examples, the images are video and/or still images. Some example weld training systems further include a processor to calibrate distance measurements for the images using a distance reference and measure a physical characteristic of an object present in the images using the calibrated distance measurements. In some examples, the communications device transmits the images to an external computing device. In some examples, the display is to display weld instruction information overlaid on the images.

Referring toFIG.1, there is shown an example welding system10in which an operator18is wearing welding headwear20and welding a workpiece24using a torch22to which power or fuel is delivered by equipment12via a conduit14. The equipment12may comprise a power or fuel source, optionally a source of a shield gas and, where wire/filler material is to be provided automatically, a wire feeder. The welding or cutting system10ofFIG.1may be configured to form a weld joint by any known technique, including flame welding techniques such as oxy-fuel welding and electric welding techniques such as shielded metal arc welding (i.e., stick welding), metal inert gas welding (MIG), tungsten inert gas welding (TIG), and plasma cutting. Optionally in any embodiment, the welding equipment12may be arc welding equipment that provides a direct current (DC) or alternating current (AC) to a consumable or non-consumable electrode16(better shown, for example, inFIG.5C) of the torch22. The electrode16delivers the current to the point of welding on the workpiece24. In the welding system10, the operator18controls the location and operation of the electrode16by manipulating the torch22and triggering the starting and stopping of the current flow. When current is flowing, an arc26is developed between the electrode and the workpiece24. The conduit14and the electrode16thus deliver current and voltage sufficient to create the electric arc26between the electrode16and the workpiece. The arc26locally melts the workpiece24and the welding wire or rod supplied to the weld joint512(the electrode16in the case of a consumable electrode or an optionally separate wire or rod in the case of a non-consumable electrode) at the point of welding between the electrode16and the workpiece24, thereby forming a weld joint512when the metal cools.

As shown, and described more fully below, the equipment12and headwear20may communicate via a link25. Such communications may enable the headwear20to control settings of the equipment12and/or the equipment12to provide information about its settings to the headwear20. Although a wireless link is shown, the link may be wireless, wired, or optical. The server30and headwear20may communicate directly or indirectly. For the former, the server30and headwear20may communicate via a link27. Indirect communications may comprise, for example, the headwear20sending time-stamped images and/or other data to the equipment12via link25, where the equipment12combines the images and/or data with data of its own and then relays the combined data to server30via link29. Similarly, the server30and equipment12may communicate directly or indirectly. For the former, the server30and equipment12may communicate via a link29.
Indirect communications may comprise, for example, the equipment12sending time-stamped data to the headwear20via link25, and the headwear20combining the data with images and/or data it captures and then relaying the combined data to server30via link27. Another example is to reduce the real-time data traffic on link25during welding while maintaining the synchronization of video captured by the headwear20and the equipment12. For example, upon a trigger pull by the operator18on the torch22, the equipment12sends a start sync command to the headwear20via link25. Thereafter, the headwear20records video or images with timestamps initiated by the start sync command, and the equipment12also records welding data initiated by the same start sync command independently of the headwear20. Upon trigger release or completion of welding, the headwear20uploads the time-stamped video or images to the server30via the communication link27, and the equipment uploads the time-stamped weld data to the server30via the communication link29. The server30combines the video data and weld data with a common timestamp that allows playback of both data sets in synchronization. The links25,27, and29may use any suitable protocols such as Bluetooth, Bluetooth Low Energy, WiFi, Zigbee, and/or the like.

The server30may be, for example, a local or remote/cloud workstation(s) or server(s) in a data center. For example, the headwear20may transmit images and/or other data (e.g., arc length, temperature, etc.) captured by the headwear20to the server30for real-time interaction (e.g., viewing, annotating, etc.) and/or analysis (e.g., parameters of the torch, workpiece, and/or arc). As another example, the headwear20may transmit images and/or other data captured by the headwear20to the server30for recording/storing for later interaction and/or analysis. As another example, the server30may transmit information (e.g., visual and/or audio instructions to adjust various parameters) to the headwear20based on analysis of the image and/or other data received from the headwear20. In an example implementation, the server30is a component of a welder training system in which the motion of the welding operator18is tracked by one or more externally-mounted cameras32. During a training exercise, the motion of the operator18can be captured together with the video captured by camera(s) of the headwear20(e.g., camera(s)414ofFIG.4) for synchronized playback at the server30. One example use of the server30is for the purpose of quality control. During production welding, the equipment12captures the welding signal data while the headwear20captures video data representative of what the operator sees. Both data sets are transmitted to the server30. Due to the large storage demand of video data, the server30may be a remote server, which may provide large amounts of storage more conveniently than a local server. When a defect is found, both the welding signal data and the video data are retrieved from the remote server30for playback and failure analysis. Although a wireless link is shown, the link may be wireless, wired, or optical.

FIG.2shows example welding equipment in accordance with aspects of this disclosure. The equipment12ofFIG.2comprises an antenna202, a communication port204, communication interface circuitry206, user interface module208, control circuitry210, power supply circuitry212, wire feeder module214, and gas supply module216. The antenna202may be any type of antenna suited for the radio frequencies, power levels, etc. used by the communication link25.
The communication port204may comprise, for example, an Ethernet port, a USB port, an HDMI port, a fiber-optic communications port, and/or any other suitable port for interfacing with a wired or optical cable. The communication interface circuitry206is operable to interface the control circuitry210to the antenna202and/or port204for transmit and receive operations. For transmit, the communication interface206may receive data from the control circuitry210, packetize the data, and convert the data to physical layer signals in accordance with protocols in use on the communication link25. For receive, the communication interface may receive physical layer signals via the antenna202or port204, recover data from the received physical layer signals (demodulate, decode, etc.), and provide the data to the control circuitry210.

The user interface module208may comprise electromechanical interface components (e.g., screen, speakers, microphone, buttons, touchscreen, gesture recognition, etc.) and associated drive circuitry. The user interface208may generate electrical signals in response to user input (e.g., screen touches, button presses, voice commands, gesture recognition, etc.). Driver circuitry of the user interface module208may condition (e.g., amplify, digitize, etc.) the signals and provide them to the control circuitry210. The user interface208may generate audible, visual, and/or tactile output (e.g., via speakers, a display, and/or motors/actuators/servos/etc.) in response to signals from the control circuitry210.

The control circuitry210comprises circuitry (e.g., a microcontroller and memory) operable to process data from the communication interface206, from the user interface208, from the power supply212, from the wire feeder214, and/or from the gas supply216, and operable to output data and/or control signals to the communication interface206, to the user interface208, to the power supply212, to the wire feeder214, and/or to the gas supply216.

The power supply circuitry212comprises circuitry for generating power to be delivered to a welding electrode via conduit14. The power supply circuitry212may comprise, for example, one or more switch mode power supplies, buck converters, inverters, and/or the like. The voltage and/or current output by the power supply circuitry212may be controlled by a control signal from the control circuitry210. The power supply circuitry212may also comprise circuitry for sensing and reporting the actual current and/or voltage feedback to the control circuitry210. In an example implementation, the power supply circuitry212may comprise circuitry for measuring the voltage and/or current on the conduit14(at either or both ends of the conduit14) such that the reported voltage and/or current is actual and not simply an expected value based on calibration.

The wire feeder module214is configured to deliver a consumable wire electrode16to the weld joint512. The wire feeder214may comprise, for example, a spool for holding the wire, a feed mechanism for pulling wire off the spool to deliver to the weld joint512, and circuitry for controlling the rate at which the wire feeder delivers the wire. The wire feeder may be controlled based on a control signal from the control circuitry210. The wire feeder module214may also comprise circuitry for reporting the actual wire speed and/or the amount of wire remaining to the control circuitry210.
In an example implementation, the wire feeder module214may comprise circuitry and/or mechanical components for measuring the wire speed, such that the reported speed is actual speed and not simply an expected value based on calibration. The gas supply module216is configured to provide shielding gas via conduit14for use during the welding process. The gas supply module216may comprise an electrically controlled valve for controlling the gas on/off. The valve may be controlled by a control signal from control circuitry210(which may be routed through the wire feeder214or come directly from the control circuitry210as indicated by the dashed line). The gas supply module216may also comprise circuitry for reporting the present gas flow rate to the control circuitry210. In an example implementation, the gas supply module216may comprise circuitry and/or mechanical components for measuring the gas flow rate such that the reported flow rate is actual and not simply an expected value based on calibration.

FIGS.3A,3B,3C,4A,4B, and4Cshow example welding headwear20in accordance with aspects of this disclosure. The example headwear20is a helmet comprising a shell306in or to which are mounted: one or more cameras414comprising optical components302, one or more display(s)304,305, electromechanical user interface components308, an antenna402, a communication port404, a communication interface406, user interface driver circuitry408, a central processing unit (CPU)410, speaker driver circuitry412, an image processor416, a graphics processing unit (GPU)418, display driver circuitry420, sensor(s)422, a power source424, and a memory426. The example memory426ofFIG.4stores machine-readable instructions428which may be executed by the processor410to implement the examples disclosed herein. In other embodiments, rather than a helmet, the headwear may be, for example, a mask, glasses, goggles, an attachment for a mask, an attachment for glasses, or an attachment for goggles, etc. In other example implementations, the camera(s)414may be mounted to a welding fixture, to a robot (e.g., a drone), to a welding torch (possibly with fiber-optic delivered images), and/or to any other place suited for capturing images and/or data information about a welding operation. The components of the headwear20may reside on one or more printed circuit boards (PCBs) or flex circuits. In the example shown, merely as one illustration, the power source424, camera(s)414, antenna402, port404, display304, and controls308are realized as subsystems (possibly comprising their own PCBs) apart from, but coupled to, the PCB430, while the communications interface406, the user interface driver408, the processor410, the speaker driver412, the GPU418, the display driver420, and/or the memory426reside on the PCB430.

Each set of optics302may comprise, for example, one or more lenses, filters, and/or other optical components for capturing electromagnetic waves in the spectrum ranging from, for example, infrared to ultraviolet. In an example implementation, optics302aand302bfor two cameras may be positioned approximately centered with the eyes of a wearer of the headwear20to capture images (at any suitable frame rate ranging from still photos to video at 30 fps, 100 fps, or higher) of the field of view that a wearer of the headwear20would have if looking through a lens. In some examples, multiple cameras capture stereoscopic images. Stereoscopic systems calculate the dimensions of the field of view based on the four corners of the image.
For example, a stereoscopic system calculates the real-world coordinates of the image points based on a pre-determined spacing between the cameras or optical sensors, and calculates the real-world distance between the points. In one example, the optical sensor414has a high dynamic range (HDR), medium dynamic range, or wide dynamic range (WDR) imaging array that has a logarithmic response at each pixel in a single frame time, with a dynamic range exceeding 120 dB and in some cases exceeding 140 dB. Example techniques to capture images of a weld scene using high dynamic range, wide dynamic range, and the like, are disclosed in U.S. patent application Ser. No. 14/978,141, filed Dec. 22, 2015, and entitled “Automated Welding Translation Platform.” The entirety of U.S. patent application Ser. No. 14/978,141 is incorporated herein by reference. The log response imager allows viewing a typical high-contrast arc welding scene, with a mix of high-intensity arc light and low-light surroundings such as the joint, weld puddle, electrode extension, etc., without saturating the sensor, and suppresses the spatial-temporal light accommodation. The log response imager is effective to auto-balance the exposure and view details such as the weld pool surface and a joint seam near the bright arc. The sensors can be CMOS for visible wavelengths (for example, light reflected by the joint, the contact tip, the electrode, etc.) or InGaAs for short-wave infrared wavelengths (for example, light emitted by the solidifying weld pool). The imager can be monochrome or color.

In yet another example, the optical sensor414can have an imaging array that has multiple responses or exposure times at each pixel in a single frame time to extend the dynamic range for the high-contrast problem of viewing a welding scene. For example, the pixels associated with the bright arc could have a fraction of the exposure time of the pixels in the surrounding scene so that the charging of those pixels is slowed down to avoid saturation.

In yet another example, the optical sensor414is a high-speed camera with a frame rate exceeding 500 to 1000 frames per second, or substantially faster than the metal transfer and weld pool oscillation dynamics, to avoid aliasing. In a preferred implementation, the camera has a CMOS pixel array with high photoresponsivity achieved by short (e.g., picosecond) integration times, synchronous exposure, high-speed parallel readout, and other techniques. The preferred frame rate is at least 10× the frequency of the weld physics dynamics, which is typically between 50 Hz and 250 Hz. To reduce video file size, high frame rate image acquisition (such as 2 kHz, 10 kHz, or higher) can be done in burst mode at fixed intervals or upon a sync trigger from the equipment12to capture a specific metal droplet transfer or weld pool oscillation event.

In yet another example, the optical sensor414is a time-of-flight (ToF) ranging depth camera providing 3D depth perception and overcoming the light intensity contrast between bright arc light and dark surroundings. In a preferred implementation, the pulse-modulated illumination has a near-infrared wavelength that is out of phase with the arc spectrum or chosen to avoid the spectral peaks of the arc light. In yet another example, the optical sensor414is a structured-light 3D scanning camera providing 3D depth perception and overcoming the light intensity contrast between bright arc light and dark surroundings.
Typically, the frame rate of such a camera is slow but can be sufficient for tasks such as seam tracking when the operator's head is relatively still and/or when motion sensors track and account for head movement. In yet another example, the optical sensor414combines the technologies described above, for example, combined high dynamic range and high frame rate imaging, stereo vision with two HDR imagers, or combined HDR and ToF imaging.

The display304may comprise, for example, an LCD, LED, OLED, E-ink, near-eye light field display, and/or any other suitable type of display operable to convert electrical signals into optical signals viewable by a wearer of the headwear20, in some cases producing mediated reality including virtual reality and augmented reality. In the example ofFIG.3A, the display304is integral to the headwear20. In the example ofFIG.3B, the display304is part of a mobile device (e.g., a smartphone, a tablet computer, etc.) that is mounted to an exterior or interior of the headwear20, such as outside or inside of a lens432of the headwear20. In the example ofFIG.4A, the display304is a separate device from the headwear20, and is worn underneath the headwear20such that the display304is within the field of view of the headwear20(e.g., the field of view of the lens432of the headwear20).

The electromechanical user interface components308may comprise, for example, one or more touchscreen elements, speakers, microphones, physical buttons, gesture control, EEG mind control, etc. that generate electric signals in response to user input. For example, the electromechanical user interface components308may comprise capacitive, inductive, or resistive touchscreen sensors mounted on the back of the display304(i.e., on the outside of the headwear20) that enable a wearer of the headwear20to interact with user graphics displayed on the front of the display304(i.e., on the inside of the headwear20).

The antenna402may be any type of antenna suited for the radio frequencies, power levels, etc. used by the communication link25. The communication port404may comprise, for example, an Ethernet port, a USB port, an HDMI port, a fiber-optic communications port, and/or any other suitable port for interfacing with a wired or optical cable. The communication interface circuitry406is operable to interface the processor410to the antenna402and port404for transmit and receive operations. For transmit operations, the communication interface406may receive data from the processor410, packetize the data, and convert the data to physical layer signals in accordance with protocols in use on the communication link25. The data to be transmitted may comprise, for example, control signals for controlling the equipment12. For receive operations, the communication interface may receive physical layer signals via the antenna402or port404, recover data from the received physical layer signals (demodulate, decode, etc.), and provide the data to the processor410. The received data may comprise, for example, indications of the present settings and/or actual measured output of the equipment12. For electric welding this may comprise, for example, voltage, amperage, and/or wire speed settings and/or measurements. For flame welding this may comprise, for example, gas flow rate and/or gas mixture ratio settings and/or measurements.
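As one hedged illustration of handling such received indications, the sketch below parses a hypothetical JSON payload into voltage, amperage, and wire speed fields. The actual framing, field names, and units used on the communication link25are not specified by this disclosure; everything shown is an assumption for illustration.

```python
import json
from dataclasses import dataclass

@dataclass
class EquipmentIndication:
    """Hypothetical container for indications received from the equipment."""
    voltage: float      # volts
    amperage: float     # amps
    wire_speed: float   # e.g., inches per minute (assumed unit)

def parse_indication(payload: bytes) -> EquipmentIndication:
    # Assumes a JSON payload; real link framing is unspecified here.
    fields = json.loads(payload)
    return EquipmentIndication(
        voltage=float(fields["voltage"]),
        amperage=float(fields["amperage"]),
        wire_speed=float(fields["wire_speed"]),
    )

# Example: parse_indication(b'{"voltage": 22.5, "amperage": 180, "wire_speed": 300}')
```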
In some examples, the communications interface406includes a wireless (e.g., Zigbee) coordinator that receives a notification of a trigger pull event and sends the signal to the processor410(e.g., a wireless node). In response, the processor410enables a WiFi radio of the communications interface406to enable transmission of media (e.g., video and/or audio) via higher-bandwidth protocols such as FTP, HTTP, and/or any other protocol. In some examples, the headwear20(e.g., via the processor410and the communications interface406) provides media (e.g., video, audio, welding data) to one or more cloud servers to store and/or process the media. In some examples, the headwear20accesses a fog network to store, process, measure, and control the image data. The fog network may be implemented by one or more devices external to the headwear20via edge and/or peer-to-peer networking. In some examples, the headwear20stores the media in a local flash memory and/or other nonvolatile memory inside the helmet (e.g., in the memory426). The headwear20may implement HTTP and/or FTP servers to enable data transfer. In some examples, a smart phone within wireless communication proximity serves as an edge resource fog network by executing an application, an HTTP client, and/or an FTP client. The example smart phone accesses the media stored in the storage device of the headwear20. In some examples, the smart phone provides storage, processing, and/or analysis capacities. The weld equipment and/or the smart phone can be edge resources for configuration, pooling, caching, and security of video and audio captured by the headwear20. In some examples, the headwear20transmits live video captured by the camera414on the headwear20to a smart phone and/or computing device within wireless communication proximity using peer-to-peer networking (also referred to as point-to-point networking). The transmission of video enables others to view the welding scene even when those people do not have the ability to directly view the weld scene (e.g., the weld arc) due to physical constraints in and/or surrounding the weld scene. In some examples, the headwear20includes an RTSP server, and a smart phone app and/or computing device in communication with the helmet includes an RTSP client. The headwear20RTSP server uses the Real-time Transport Protocol (RTP) in conjunction with the Real-time Control Protocol (RTCP) for media stream delivery.

The user interface driver circuitry408is operable to condition (e.g., amplify, digitize, etc.) signals from the user interface component(s)308.

The processor410is operable to process data from the communication interface406, the user interface driver408, the image processor416, and the GPU418, and to generate control and/or data signals to be output to the speaker driver circuitry412, the GPU418, and the communication interface406. Signals output to the communication interface406may comprise, for example, signals to control the settings of the equipment12. Such signals may be generated based on signals from the GPU418and/or the user interface driver408. Signals from the communication interface406may comprise, for example, indications (received via link25) of the present settings and/or actual measured output of the equipment12. Signals to the GPU418may comprise, for example, signals to control graphical elements of a user interface presented on the display304. Signals from the GPU418may comprise, for example, information determined based on analysis of pixel data captured by the cameras414.
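A minimal sketch of the trigger-pull wake-up flow described above, in which a low-power radio listens while the rest of the device sleeps and a WiFi path is enabled only when media must be transmitted. Every device API shown is a hypothetical placeholder rather than an actual driver interface.

```python
class WakeOnTriggerNode:
    """Sketch of a wake-on-radio node: sleep until a low-power radio message
    arrives, then record and upload over a higher-bandwidth WiFi path.
    All constructor arguments are hypothetical device-driver placeholders."""

    def __init__(self, low_power_radio, wifi_radio, camera):
        self.low_power_radio = low_power_radio  # assumed: wait_for_message() blocks in low power
        self.wifi_radio = wifi_radio            # assumed: enable() and upload(data)
        self.camera = camera                    # assumed: record_clip() returns media bytes

    def run(self):
        while True:
            # Everything except the low-power radio is idle here (sleep mode).
            msg = self.low_power_radio.wait_for_message()
            if msg == "TRIGGER_PULL":           # coordinator's trigger-pull notification
                clip = self.camera.record_clip()
                self.wifi_radio.enable()        # activate WiFi only when needed
                self.wifi_radio.upload(clip)    # e.g., HTTP/FTP to a server or fog node
```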
The speaker driver circuitry412is operable to condition (e.g., convert to analog, amplify, etc.) signals from the processor410for output to one or more speakers of the user interface components308. Such signals may, for example, carry audio to alert a wearer of the headwear20that a welding parameter is out of tolerance or that a weld is being performed out of sequence, to provide audio instructions to the wearer of the headwear20, etc.

The one or more cameras414are operable to capture images of the physical environment surrounding the headwear20. The camera(s)414may be operable to capture electromagnetic waves of any suitable wavelength(s) from, for example, infrared to ultraviolet. In an example implementation, there may be two cameras414for capturing stereoscopic images from which 3D positioning information can be obtained through processing of the captured images. In an example implementation, the camera(s)414may each comprise one or more high dynamic range image sensors (e.g., ˜140 dB or more of dynamic range) such that a viewer of the image can simultaneously see the weld arc and the workpiece. In another example implementation, images from multiple image sensors may be combined (e.g., by the GPU418as discussed below) to generate a composite image having a higher dynamic range than is supported by any of the image sensors alone. In one example, the optical sensor414and optics302assembly is mounted behind the display304. In another example, the optical sensor414and optical components302assembly is mounted outside the display304.

In some examples, the image processor416includes an image recognition processor to perform operator identification and/or authorization. To perform operator identification/authorization, the operator faces the helmet camera, and the image processor416executes a facial recognition process to analyze the facial features of the operator and compare the features with a database of authorized operators. In some examples, the database includes credentials for each operator to identify whether the operator is authorized (e.g., qualified, approved) to operate the corresponding weld equipment and/or to perform a specified weld task and/or type of weld task. Additionally or alternatively, the image processor416may include image recognition features that recognize a code on an identification card belonging to the welder. In response to identifying a welder in the database, the welding system checks the qualification record of the identified welder for presence and/or expiration information. In some examples, while wearing the camera-equipped helmet, the operator may look at the welding consumables such as gas and wire (e.g., by positioning the helmet so that the item to be viewed by the camera falls within the field of view of the helmet lens). The image processor416may perform image processing to identify and log in the consumables for the weld job and/or check the identified consumables against a WPS for inconsistencies that could lead to weld defects. If such inconsistencies are identified, the headwear20alerts the operator (e.g., via the display304and/or the speaker driver412) and/or other people (e.g., via the communications interface406), and/or disables the trigger on the weld torch. In some examples, the image processor416causes the camera(s)414to auto-focus on an active weld operation.
The image processor416may control the auto-focus by identifying locations of features representative of an arc (e.g., a brightest area in the scene) and instructing the camera414to focus on the area(s) immediately surrounding and/or adjacent the features, which most likely include the joint and/or the electrode. In some examples, the camera414also may have optics providing a large depth of field so that the camera easily achieves focus on the desired area(s).

In some examples, the image processor416controls the camera414to perform optical and/or digital image stabilization. The sensors422may include one or more inertial measurement units (IMUs) such as multi-axis gyroscopes, multi-axis accelerometers, and/or multi-axis magnetometers to detect, encode, and/or measure movement of the helmet (e.g., turning, vibration, traveling, and shaking of the helmet as the wearer's head moves to follow the arc). Based on the measured movement, the image processor416compensates for the motion by moving the lens and/or the imager using, for example, micro actuators and/or microelectromechanical systems (MEMS) such as piezoelectric crystals. Additionally or alternatively, the image processor416may implement electronic image stabilization (EIS). By using image stabilization techniques, a welder training system, such as LiveArc® sold by Miller Electric™, can use helmet mounted cameras instead of or in addition to fixed-location cameras to extract torch motion data and/or torch angularity data with respect to a welded joint. Such data is potentially beneficial for subsequent training of welders to weld on joints that are difficult or impossible to capture with cameras at a fixed location, such as 360 degree 5G position and/or 6G position pipe welding. Additionally or alternatively, the sensors422may include sensors for a fixed-mount camera to track the motion of the helmet and use the helmet position and/or orientation to transform the images captured by the camera414in the helmet.

Some example cameras414include a high dynamic range imager or image sensor array (e.g., at least 120 dB of dynamic range) and/or a native wide dynamic range imager (e.g., at least 140 dB of dynamic range) on the headwear20. In other examples, a welding system includes a medium dynamic range (MDR) imager with at least 100 dB of dynamic range to decrease the component costs of the helmet. One example MDR imager that may be used is model MT9V024, sold by ON Semiconductor®.

In some examples, the headwear20further includes a light source oriented to illuminate the weld scene. The lighting can be an active light source such as an LED array. To conserve battery power of the headwear20, the light source can be activated automatically when the camera414is taking images and determines that additional lighting is beneficial (e.g., the luminance received at the camera414is less than a threshold). Additionally or alternatively, the active light source can be activated and/or deactivated by an operator interface, such as a voice command. Additionally or alternatively, the headwear20may be provided with passive light sources such as a reflective exterior surface. Such a passive light source may reflect energy from the arc to illuminate the welding scene.
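The brightest-region auto-focus heuristic described above can be sketched as follows: find the brightest pixel (assumed to be the arc) and return a focus window around it, which typically covers the joint and electrode. The window size, frame layout, and function name are illustrative assumptions, not specifics from this disclosure.

```python
def arc_focus_window(frame, window=40):
    """Sketch: locate the brightest pixel in a 2-D list of luminance values
    and return a clamped (top, left, bottom, right) focus region around it,
    which the lens driver would then be instructed to focus on."""
    best_row, best_col, best_val = 0, 0, -1
    for r, row in enumerate(frame):
        for c, val in enumerate(row):
            if val > best_val:
                best_row, best_col, best_val = r, c, val
    top = max(0, best_row - window)
    left = max(0, best_col - window)
    bottom = min(len(frame) - 1, best_row + window)
    right = min(len(frame[0]) - 1, best_col + window)
    return (top, left, bottom, right)
```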
The image processor416includes an exposure controller that receives arc signals from a power supply or wire feeder (e.g., via a wired or wireless data connection such as the communications interface406) as a feed-forward signal to adapt the exposure time for an optical sensor and/or image processing. Specifically, the image processor416may use arc voltage to determine the presence or absence of an arc in the scene. If the sensed arc voltage (e.g., excluding the welding cable voltage and/or electrode stickout voltage) is greater than 14V, the image processor416determines that an arc is present and, in response, reduces the exposure to reveal the details of the dark areas such as the joint and wire extension. The image processor416may also use more aggressive image compression ratios and/or digital image filters for the comparatively brighter scenes. In contrast, when the sensed arc voltage is less than 14V, the image processor416determines that the arc is absent and the scene is dark. In response to determining that the arc is not present, the image processor416uses longer exposures and less aggressive image compression ratios and/or digital image filters. In some examples, the image processor416uses arc power in addition to or instead of the arc signal as a proxy for the brightness of the arc. For example, the image processor416may use the level of arc voltage or arc current (or the product of voltage and current, which is the arc power) to predict the brightness of the scene, thus adjusting exposure and selecting corresponding image processing algorithms and their parameters. Thus, the image processor416more effectively adapts to arc starts and/or stops, and/or to welding processes where the arc brightness changes quickly (e.g., at frequencies of 20 Hz to 250 Hz), such as pulse welding and short circuiting welding.

The graphics processing unit (GPU)418is operable to receive and process pixel data (e.g., of stereoscopic or two-dimensional images) from the camera(s)414, to output one or more signals to the processor410, and to output pixel data to the display304. As mentioned above, processing of the pixel data from the camera(s)414may comprise combining an image from a first optical sensor414or image sensor with an image from a second optical sensor414or image sensor to obtain a resulting composite image which has a higher dynamic range than either of the first and second images alone. The processing performed by the GPU418may comprise compressing images to reduce the necessary bandwidth for transmitting them and/or the necessary memory for storing them. The processing of pixel data by the GPU418may comprise, for example, analyzing the pixel data to determine, in real-time (e.g., with latency less than 100 milliseconds or, more preferably, less than 20 milliseconds, or more preferably still, less than 5 milliseconds), one or more of the following: name, size, part number, type of metal, or other characteristics of the workpiece24; name, size, part number, type of metal, or other characteristics of the electrode16and/or filler material; type or geometry of the joint512to be welded; 2-D or 3-D position of items (e.g., electrode, workpiece, etc.)
in the captured field of view; one or more weld parameters (e.g., such as those described below with reference toFIG.5) for an in-progress weld in the field of view; measurements of one or more items in the field of view (e.g., size of a joint or workpiece being welded, size of a bead formed during the weld, size of a weld puddle formed during the weld, and/or the like); and/or any other information which may be gleaned from the pixel data and which may be helpful in achieving a better weld, training the operator, calibrating the system10, etc.

In one example, the components inFIG.4Aare contained in a smartphone, such as an iPhone or Android phone, including the optical sensor414. In such an example, the headwear20has a holder to secure and house a smartphone or a tablet with a camera and WiFi, with (for example) one smartphone camera facing the same direction as the wearer of the helmet and a transparent opening in the helmet to allow the smartphone camera to view the welding scene. The phone may be positioned such that the lens432is in front of the smartphone camera (which could be the same lens used for the wearer's eyes). In some examples, the lens432may be omitted because the smartphone protects the wearer's eyes from the arc.

In an example implementation, the processor410receives synchronizing signal(s) which trigger the optical sensor414to start and/or stop video recording. In some examples, the optical sensor414is in a smartphone, and an “app” (application) may be running on the smartphone to receive the synchronizing signal and control the optical sensor414. The synchronizing signal may be generated by circuitry of the headwear20or by circuitry external to the headwear20. The synchronizing signal may, for example, be: generated by circuitry of the equipment12and received via the antenna402; generated by sensor(s)422(e.g., a passive IR sensor or photodiode) and communicated to the optical sensor414via a wired or wireless interface between the optical sensor414and the sensor(s)422; generated by a smartphone (which may be mounted to/within the helmet); or the like. The synchronizing signal may, for example, be generated in response to: the pull of the gun trigger; a change in the output of a photodiode which captures the light intensity of the environment; detection, using image processing algorithms, of a welding arc in an image captured by the optical sensor414; and/or any other suitable stimulus. The synchronizing signal may also carry, for example, arc data (volts, amps, wire speed, etc.) associated with the welding video, which can be superimposed/overlaid textually or graphically on the video recorded by the app. The welding equipment12can be a welding power source, a welding torch, a wire feeder, a communications module in the welding cell, a robot, a user interface module, etc. The video can be automatically uploaded to the cloud after the weld is complete. Alternatively, the video may be transmitted live using a peer-to-peer video service, a hosted video service, and/or a livestream service to be viewed at a remote location and/or locally via a tablet/smartphone connected to the same service as a viewer. The signal can also carry instructions from another person viewing the streaming video, either audio commands or visual instructions. The remote server or app performs digital image processing and makes measurements of the welding process (arc length, wire placement, weld pool size, etc.) and/or weld results (weld size, weld pattern, weld length, etc.).
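As a hedged sketch of the synchronizing-signal flow described above: recording starts when the sync state is asserted (e.g., on a trigger pull), each frame is tagged with a timestamp and the latest arc data for textual overlay, and recording stops when the sync state is deasserted. The device objects, polling approach, and frame rate are illustrative assumptions.

```python
import time

def record_with_sync(get_sync_state, camera, weld_data_source, frame_period=1/30):
    """Sketch: poll a sync state, capture frames while asserted, and tag each
    frame with a timestamp plus the latest arc data (volts, amps, wire speed)
    so the server can play video and weld data back in synchronization.
    `camera.capture()` and `weld_data_source.latest()` are placeholders."""
    frames = []
    while not get_sync_state():            # wait for start sync (e.g., trigger pull)
        time.sleep(frame_period)
    while get_sync_state():                # record until end sync (trigger release)
        frame = camera.capture()
        arc = weld_data_source.latest()    # e.g., {"volts": ..., "amps": ..., "wire_speed": ...}
        frames.append({"t": time.time(), "image": frame, "overlay": arc})
        time.sleep(frame_period)
    return frames                          # time-stamped frames for synchronized playback
```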
The smartphone could have a split screen, such as one screen positioned in front of each of the operator's eyes, to create a stereoscopic vision effect. The smartphone could also have near-eye augmented reality display technology such as that disclosed in U.S. Pat. No. 8,957,835. The app can automatically detect the brightness and adjust accordingly when using the filter. The app can have variable set points for darkness, recording, still shots, and audible feedback. The app can receive inputs, such as from an accelerometer and/or a gyroscope to sense and compensate for head motion, audible input, or other remote input. The app and filter can be used independently of a helmet or headset (hand-held, bench mount, stands, etc.). In virtual reality mode, the smartphone app can simulate the arc and the weld. When used with other sensors, the virtual reality app may be used for operator training as described below.

In an example implementation, a VR training app executes on a smartphone or a tablet housed inside or outside a helmet, providing a software operator training system using existing welding equipment and a smartphone or tablet device. As a result, specialized weld training equipment is not needed, and any welding equipment can be converted into a training tool. In some examples, the app can enable an operator to practice welding on a real workpiece to be welded instead of in a simulated or laboratory setting. The operator puts down a calibration or marker tape or strip with computer-readable glyph symbols onto the weld scene (e.g., on the workpiece or weldment and/or torch body). The markers could be high-contrast glyph symbols of a localization code or pattern. Alternatively, speckle patterns can be etched into the workpiece for localization. The app identifies the localization codes with a camera equipped on the device executing the app to calibrate the scene objects in the images/videos against a real-world unit of distance. Using the calibrated distance determined from the markers, the app measures the weld tool (torch) movement, such as travel speed.

In an example sequence, the operator configures real welding equipment (e.g., sets parameters) and prepares the welding equipment for actual welding, without configuring the welding equipment in simulation mode. The operator pulls the trigger. The VR app in the smartphone or tablet takes real-time images, performs image processing including object recognition, renders reconstructed scene images based on the captured images, and superimposes virtual objects into the scene images. Example virtual objects include a virtual arc, a virtual weld pool, virtual spatter and/or splatter, a virtual wire feed, and/or a virtual weld bead. As the weld parameters, the torch manipulation, the head pose, the helmet position, and/or the helmet orientation change, the corresponding reconstructed objects in the real scene, together with the virtual arc, virtual pool, and/or virtual weld bead animation, also change accordingly based on models of the arc physics and thermodynamics. In some cases, the app is equipped with simpler versions of such models to enable adequate performance, such as response time. Additionally or alternatively, the app transmits data to a remote server for execution of the models and receives the results via the communications interface.
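A minimal sketch of the marker-based calibration and travel-speed measurement described above, assuming a marker of known physical width (the 25 mm default is an arbitrary example) and torch tip positions already extracted from the images; all names are illustrative.

```python
def pixels_per_mm(marker_px_width, marker_real_width_mm=25.0):
    """Calibrate image distances against a marker of known physical size."""
    return marker_px_width / marker_real_width_mm

def travel_speed_mm_s(torch_positions_px, timestamps_s, scale_px_per_mm):
    """Estimate torch travel speed from the first and last (x, y) image
    positions of the torch tip, using the marker-derived calibration."""
    (x0, y0), (x1, y1) = torch_positions_px[0], torch_positions_px[-1]
    dist_px = ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5
    dt = timestamps_s[-1] - timestamps_s[0]
    return (dist_px / scale_px_per_mm) / dt

# Example: a marker 50 px wide gives 2 px/mm; a torch moving 100 px in 10 s
# then corresponds to a travel speed of 5 mm/s.
```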
Instead of using localization markers, in some examples an IMU inside the torch provides position data, which the app uses to determine the torch travel speed and to render the virtual pool and weld bead shape. Example techniques to determine the torch travel speed using an IMU are described in U.S. patent application Ser. No. 15/004,801, filed Jan. 22, 2016, entitled “Manual Tool Tracking and Guidance with Inertial Measurement Unit.” The entirety of U.S. patent application Ser. No. 15/004,801 is incorporated herein by reference. In some other examples, the smartphone may be equipped with an infrared accessory, such as an infrared (IR) sensing camera, to measure torch travel speed. The IR sensing camera may receive IR light from an IR illuminator and/or IR reflectors arranged on the torch body to capture torch motion. To compensate for head movement, IR reflectors may be placed on one or more stationary objects, such as weld fixtures, to calibrate and/or transform the images captured from a moving camera. The stationary IR reflectors may have a different shape than the reflectors on the torch, or a different wavelength may be used, to distinguish the torch from stationary markers or scene anchors. While some examples use a smartphone/tablet to identify torch speed, in some other examples IR detection and processing circuit(s) are separately packaged (e.g., on a circuit board containing IR cameras, optics, sensors, and computing hardware and software) to track torch movement, orientation, and/or speed. The example IR detection and processing circuit(s) provide the movement, orientation, and/or speed information to the smartphone or tablet for use in generating weld records and/or displaying data to the operator. Example markers (e.g., IR reflectors) are described below with reference toFIG.6F.

The smartphone or tablet may receive wireless (e.g., Bluetooth) synchronization signals from the welding equipment to start and/or end the VR simulation, as well as the welding parameters set by operators on the physical weld equipment. Additionally or alternatively, the smartphone may receive and process voice commands from the operator to perform operations while the smartphone is mounted inside a helmet or otherwise unreachable by finger touch. The smartphone/tablet may display welding results or a summary of torch movement (e.g., heat input, bead width, penetration, travel speed, torch angles, etc.) after the welding is complete.

Parameters determined from the image processing may be compared against the weld procedure specification (WPS) for the weld being performed. If there is a deviation from the WPS beyond a determined tolerance window, an alert (e.g., visual, audible, and/or tactile) may be generated. For example, the image processing may measure the weld width and length, which the processor410may then compare with the WPS. As another example, the image processing may perform seam tracking to track the joint and measure wire placement relative to the joint, and the processor410may compare this measurement to the WPS and alert the operator if the wire departs from the joint by more than a determined tolerance. The image processing to determine the various parameters may take into account, and be aided by, a priori knowledge of the welding job such as the dimensions of the workpiece, wire size, type of gas, etc. The information output from the GPU418to the processor410may comprise the information determined from the pixel analysis.
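The WPS tolerance comparison described above reduces to a small check. The parameter values below are made-up examples; a real system would map each measured parameter (weld width, wire-to-joint offset, etc.) to its own WPS target and tolerance window.

```python
def check_against_wps(measured, wps_target, tolerance):
    """Sketch: return (within_tolerance, deviation) for one measured weld
    parameter compared against its WPS target and tolerance window."""
    deviation = abs(measured - wps_target)
    return deviation <= tolerance, deviation

# Example: a measured weld width of 5.9 mm against a 5.0 mm WPS target with
# a 0.5 mm tolerance returns (False, 0.9), so an alert would be generated.
ok, dev = check_against_wps(5.9, 5.0, 0.5)
```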
The pixel data output from the GPU418to the display304may provide a mediated reality view for the wearer of the headwear20. In such a view, the wearer experiences the video presented on the display304as if s/he is looking through a lens, but with the image enhanced and/or supplemented by an on-screen display. The enhancements (e.g., adjusted contrast, brightness, saturation, sharpness, etc.) may enable the wearer of the headwear20to see things s/he could not see with simply a lens. The on-screen display may comprise text, graphics, etc. overlaid on the video to provide visualizations of equipment settings received from the processor410and/or visualizations of information determined from the analysis of the pixel data. The display driver circuitry420is operable to generate control signals (e.g., bias and timing signals) for the display304and to condition (e.g., level control, synchronize, packetize, format, etc.) pixel data from the GPU418for conveyance to the display304. The sensor(s)422may comprise, for example, infrared and/or ultrasonic sensors, accelerometers, gyroscopes, and/or the like. The sensor(s)422may, for example, be operable to track head movement of the weld operator. The power source424may comprise, for example, a battery (e.g., a lithium-ion, sodium-ion, lithium-polymer, or dual-carbon battery), circuitry for charging the battery from an AC and/or DC power source, and circuitry for conditioning/delivering energy from the battery to the other circuitry of the headwear20. FIG.4Cis another example perspective of the headwear20. The perspective illustrated inFIG.4Cshows a viewpoint from inside the shell306(e.g., from a wearer's perspective). As shown inFIG.4C, the display304(e.g., a smartphone) is mounted in a field of view of the shell306such that a camera on the rear of the smartphone has a view of a weld scene434. The example weld scene434ofFIG.4Cincludes the workpiece24and the torch22. In the weld scene434, the torch22is not operating (e.g., no weld is occurring in the weld scene434). As described in more detail below, the display304may be controlled to display the weld scene434with one or more simulated objects overlaid on the scene observed by the camera414. As illustrated inFIG.4C, the display304may show a simulated weld bead436, a simulated weld puddle438, and/or a simulated arc440in addition to the workpiece24and the torch22that are actually present in the weld scene434. The field of view is illustrated by view lines442that show the outside of the field of view of the camera414of the smartphone mounted in the shell306. The example smartphone ofFIG.4Cis mounted inside the welding helmet (e.g., the shell306) using clips444that hold the smartphone such that, when the mobile device is held by the welding helmet and the welding helmet is worn by a wearer, the display304of the smartphone is viewable by the wearer and the camera of the smartphone has a view through the view port (e.g., through the lens432), such that the display304provides the wearer with a field of view that corresponds to the wearer's field of view through the view port. While four clips444are shown inFIG.4C, more or fewer clips may be used.
Additionally or alternatively, any other structure may be used to detachably mount the smartphone inside of the welding helmet, such as one or more of: arm(s), band(s), belt(s), case(s), slot(s), container(s), cover(s), enclosure(s), frame(s), jacket(s), member(s), platform(s), rib(s), ring(s), compression device(s), friction device(s), cable(s), hook(s), nut/bolt(s), adhesive(s), bracket(s), and/or any other type of structure, fastener, and/or mounting device. While example implementations of the headwear20are described with reference toFIGS.3A,3B,3C,4A,4B, and4C, other implementations may be used. For example, any of the example antenna402, the example port404, the example communications interface406, the example user interface driver408, the example processor410, the example speaker driver412, the example camera(s)414, the example image processor416, the example GPU418, the example display driver420, the example sensor(s)422, the example power source424, the example memory426, and/or the example instructions428may be implemented using hardware, software, firmware, and/or any combination of hardware, software, and/or firmware. For example, the example antenna402, the example port404, the example communications interface406, the example user interface driver408, the example processor410, the example speaker driver412, the example camera(s)414, the example image processor416, the example GPU418, the example display driver420, the example sensor(s)422, the example power source424, the example memory426, and/or the example instructions428may be implemented using one or more integrated circuits and/or discrete circuits, such as general purpose processors, special purpose processors (e.g., digital signal processors), and/or programmable logic devices. Furthermore, implementations may include combinations of components and/or functions into single integrated circuit packages and/or divisions of components and/or functions into multiple integrated circuit packages. FIGS.5A-5Cillustrate various parameters which may be determined from images of a weld in progress. Coordinate axes are shown for reference. InFIG.5A, the Z axis points to the top of the paper, the X axis points to the right, and the Y axis points into the paper. InFIGS.5B and5C, the Z axis points to the top of the paper, the Y axis points to the right, and the X axis points into the paper. InFIGS.5A-5C, the equipment12comprises a MIG gun504(e.g., an implementation of the torch22ofFIG.1) that feeds a consumable electrode16to a weld joint512of the workpiece24. During the welding operation, a position of the MIG gun504may be defined by parameters including: contact tip-to-work distance506or507, a travel angle502, a work angle508, a travel speed510, and aim. Contact tip-to-work distance may include the vertical distance506from a tip of the torch22to the workpiece24as illustrated inFIG.5A. In other embodiments, the contact tip-to-work distance may be the distance507from the tip of the torch22to the workpiece24, measured at the angle of the torch22to the workpiece24. The travel angle502is the angle of the gun504and/or electrode16along the axis of travel (X axis in the example shown inFIGS.5A-5C). The work angle508is the angle of the gun504and/or electrode16perpendicular to the axis of travel (Y axis in the example shown inFIGS.5A-5C). The travel speed is the speed at which the gun504and/or electrode16moves along the joint512being welded. In an example implementation, image processing may be used to determine travel speed.
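One possible realization of such image-based travel-speed measurement is sketched below. It assumes the torch tip pixel location is available from object recognition and that a stationary workpiece patch is tracked (e.g., via OpenCV's phase correlation) to cancel head/camera motion; the scale and frame rate are placeholders calibrated elsewhere.

```python
# Minimal sketch: travel speed from image shifts, with head-motion
# compensation using a stationary workpiece patch.
import cv2
import numpy as np

MM_PER_PX = 0.15   # from prior calibration (assumed)
FPS = 30.0         # camera frame rate (assumed)

def patch_shift_px(prev_patch, patch):
    """Sub-pixel shift of a stationary workpiece patch (camera motion).
    Patches must be equal-size, single-channel grayscale arrays."""
    (dx, dy), _response = cv2.phaseCorrelate(
        np.float32(prev_patch), np.float32(patch))
    return np.array([dx, dy])

def travel_speed_mm_s(prev_tip_px, tip_px, prev_patch, patch):
    """Torch motion = apparent tip motion minus camera motion."""
    camera_motion = patch_shift_px(prev_patch, patch)
    torch_motion = np.asarray(tip_px, float) - np.asarray(prev_tip_px, float)
    relative = torch_motion - camera_motion
    return float(np.linalg.norm(relative)) * MM_PER_PX * FPS
```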
Weld pool size and shape (e.g., tear, oval, etc.) and/or other stationary features in the welding scene (e.g., a bump or scratch on the workpiece) may, for example, be used in image processing algorithms to infer the travel speed (similar to an optical mouse). In an example implementation, the weld bead striation of Miller Electric's Profile Pulse (e.g., altering wire speed and/or power and heat input) may be used along with image processing algorithms to infer travel speed in the welding scene. The aim is a measure of the position of the electrode16with respect to the joint512to be welded. Aim may be measured, for example, as distance from the center of the joint512in a direction perpendicular to the direction of travel.FIG.5C, for example, depicts an example aim measurement516. FIGS.6A-6Eillustrate an example welding process using headwear embodying aspects of this disclosure. The process begins with block652, in which one or more welds to be performed are determined by the headwear20. The determination may be based on an identifier (e.g., a work order number, a part number, etc.) entered by a wearer of the headwear20through, for example, voice recognition and/or tactile input. Alternatively, or additionally, the wearer of the headwear20may view the workpiece to be welded from a distance and/or angle that permit(s) the camera(s)302to capture an image of the workpiece from which an image processing algorithm can detect welds to be performed. For example, unique shapes, markings, and/or other features of a workpiece in the captured image view may be detected and used to retrieve an identifier associated with the workpiece. In block654, instructions for the weld(s) to be performed are retrieved from memory (e.g., local memory in the headwear20and/or network-based memory). For example, the identifier determined in block652may be used as an index to retrieve a corresponding entry in a database residing in server30(FIG.1). The retrieved instructions may comprise, for example, text and/or images (still images, video, and/or CAD drawings) of any format suitable for presentation on the display304. Information contained in the instructions may include, for example: the number of welds to be performed on the workpiece, the sequence in which a plurality of welds are to be performed, target welding parameters for each weld to be performed, nominal equipment settings to be used for each weld to be performed, identification of welding materials (electrode, filler material, etc.) to be used for each weld to be performed, how to prepare a workpiece for each weld to be performed (e.g., paint or oxide removal, tack welds, how to put parts in jigs, closing the clamps, screwing/bolting torque values, prepping/cleaning of tools, inspection and measurement of the joint fit-up, etc.), and/or the like. A code scanner function may be used by a smartphone app to recognize objects (e.g., checking the wire type against the WPS to flag mistakes, inconsistency, and/or noncompliance). When the trigger is pulled, the app checks a list of requirements and, if an error is identified, flags the identified anomaly and disables the trigger. In block656, a pre-weld interface is presented on display304. The pre-weld interface may provide instructions on setting up for a next weld to be performed and/or for actually performing the weld. Referring toFIG.6B, an example pre-weld interface is shown. The example pre-weld interface comprises graphical elements602,604,606,608, and610overlaid on an image of the workpiece identified in block652.
The image of the workpiece may be a photo or drawing received along with the instructions or may be an image of the actual workpiece captured (e.g., in block652) by the camera(s)302. The graphic602(e.g., a text box) provides the wearer of the headwear20with information about the workpiece (e.g., the part number(s) of workpiece(s) to be welded, a work order number for the welds to be performed, and/or the like). The graphic602may also display the username of the wearer of the headwear20, for purposes of storing data to an appropriate user profile. The wearer of the headwear20may interact with the graphic604via the user interface208(e.g., using gesture, tactile, or voice controls). Activation of the graphic604may cause the headwear20to close the pre-weld interface and bring up the in-weld interface described below. The wearer of the headwear20may interact with the graphic606via the user interface208. Activation of the graphic606may cause the headwear20to bring up additional instructions (e.g., to show a previously-recorded video of the weld(s) to be performed). The graphics608and610identify the next weld to be performed and provide information about performing the weld. In the example shown, the graphic608identifies: characteristics of the workpiece such as the type of metal of which it is made; characteristics of the seam to be welded such as its length and width; target parameters for welding the seam such as target work angle, target travel angle, target travel speed, target weave pattern, target multi-pass stack up sequence, and/or the like; and nominal equipment settings such as whether a constant current or constant voltage mode should be used, the nominal voltage that should be used, the nominal current that should be used, the type/size of electrode and/or filler material that should be used, the nominal wire speed that should be used, etc. Returning toFIG.6A, in block658the wearer of the headwear20triggers (e.g., by activating graphic604) a transition from the pre-weld interface to an in-weld interface. In block660, the in-weld interface is presented. The in-weld interface provides instructions for performing a particular weld. Referring briefly toFIG.6C, an example in-weld interface is shown. The example in-weld interface comprises graphical elements602,612,620,624,628, and630overlaid on real-time video frames captured by the camera(s)302. The real-time video frames may be presented on the display304within, for example, 20 milliseconds or, more preferably, 5 milliseconds, of having been captured by the camera(s)302. The overlaid graphics may be opaque or partially transparent. The graphic602(e.g., a text box) provides the wearer of the headwear20information about the welds to be performed (e.g., the part number of the workpiece, a work order number for the welds to be performed, and/or the like). The wearer of the headwear20may interact with the graphic612via the user interface208(e.g., using tactile, voice, or gesture controls). Activation of the graphic612may cause the headwear20to transition from the in-weld interface back to the pre-weld interface for the current weld. In this manner, the operator is enabled to quickly switch back and forth between the pre-weld interface and the in-weld interface. In an example implementation, both interfaces may be viewed simultaneously (e.g., in a side-by-side or picture-in-picture type view). The graphics620,624,628, and630provide feedback to the wearer of the headwear20as to one or more welding parameters measured for a weld in progress.
In the example shown, the graphic620comprises positional coordinate axes representing work angle and travel angle. The center of the coordinate system indicates the optimal orientation of the welding torch618during the weld. An actual orientation of the torch is indicated by dot622. Based on this feedback, the operator can re-position the torch in an attempt to bring the dot622back to center. Other graphical representations of torch angle to provide feedback may be used instead of the "bull's-eye" shown inFIG.6C. Some examples are described in United States Patent Application Publication 2009/0298024, which is hereby incorporated herein by reference. In the example shown, the graphic624comprises a graphical speedometer extending between a "too slow" marker and a "too fast" marker. A marker626indicating the actual speed is provided on the graphical speedometer as feedback to the wearer of the headwear20. Other graphical representations of travel speed to provide feedback may be used instead of the linear speedometer shown inFIG.6C. Some examples are described in United States Patent Application Publication 2009/0298024, which is hereby incorporated herein by reference. The graphic628provides the wearer of the headwear20with feedback as to settings and/or actual measured output of the welding equipment12. The measured output may, for example, present real-time readings from arc monitoring equipment (e.g., presented along a time axis as on an oscilloscope display). The graphic630provides a reference path to aid the operator in aiming the electrode as s/he performs the weld. The graphic630may, for example, coincide with the centerline of the seam and/or may set forth a weaving pattern. Any images and/or other data captured during the weld may be stored to local memory and/or to remote memory such as memory of server30. The stored images and/or other data may thus be made available for later playback, analysis, and/or other interaction. For example, the server30may be configured to enable streaming, 2D Fourier transform, sampling and filtering, motion estimation such as phase correlation, block matching and spatiotemporal gradient analysis, noise smoothing, sharpening, homomorphic filtering, pseudo coloring, segmentation, compression, annotation, sharing, etc. using cloud and web technologies such that computer novices may be provided with tools for viewing, interacting, learning from, and educating with the use of the captured images and/or other data. In another example, the aforementioned image processing can be done in the equipment12before sending the data on to server30via the communication link29. Returning toFIG.6A, in block662the operator completes the weld. In block664, upon detecting the completion of the weld (e.g., automatically through an image processing algorithm or through input from the operator), the headwear20presents a post-weld interface. The post-weld interface presents a summary of the completed weld (e.g., for training and/or quality control purposes). Referring briefly toFIG.6D, an example post-weld interface is shown. The example post-weld interface comprises graphical elements602,634,638,640, and651overlaid on a video frame captured by the camera(s)302. The graphic602(e.g., a text box) provides the wearer of the headwear20with information about the welds to be performed (e.g., the part number of a workpiece involved, a work order number for the welds to be performed, and/or the like).
The wearer of the headwear20may interact with graphic634via the user interface208(e.g., using tactile or voice controls). Activation of the graphic634may cause the headwear20to transition from the post-weld interface to the pre-weld interface for a next weld to be performed, while an audible command instructs the operator to look at the finished weld by pointing the camera over the entire weld. The graphics638,640, and651provide a review of the completed weld to the wearer of the headwear20. The graphic638(e.g., a textbox) provides results of an assessment of the completed weld. Such an assessment may comprise a determination of whether welding parameters and/or equipment settings measured and stored during the weld are within determined tolerances (e.g., set forth in the instructions). Such an assessment may include implementing an image processing algorithm for inspecting shape, length, width, height, smut, oxide cleaning track, reflectivity, color, visible discontinuities and defects (e.g., cracks, undercut, burn-through, bead humping, concavity, lack of fusion, surface porosity, leftover wire protrusion, spatter and splatter, distortion, and/or deformations), and/or other visual characteristics of the bead614and/or the workpiece. Such an assessment may include checking the brightness of the images captured during the weld. For example, dark frames during the weld may indicate places along the weld where the arc was lost, and such locations may be deserving of additional inspection (either through image processing and/or by directing the operator to perform further inspection or testing). Similarly, such an assessment may include checking the equipment settings/outputs shown in graphic640for discontinuities which may correspond to places where the arc was lost, for example. The graphic640provides a histogram of a parameter and/or setting measured during the weld. Although only a single graphic640is shown, any number of them corresponding to any number of parameters and/or settings may be shown. The line650corresponds to a target value for the parameter. The lines646and648correspond to upper and lower tolerances for the parameter. The line644corresponds to the measurements of the parameter for the completed weld. The marker642allows the operator to select any time instant during the weld. The graphic651displays additional information for the time instant selected by the marker642. In an example implementation, the video frame on which the graphic elements602,634,638,640, and651are overlaid is the frame captured at the time instant selected by the marker642. In this manner, by scrolling the marker642or triggering playback (i.e., auto-scrolling of the marker642), a recording of the weld may be viewed on the display304. The data presented in the post-weld interface may be associated in memory with a user profile of the operator who performed the weld. Such user profile information may be used for evaluating/certifying/etc. the operator. In an example implementation, the graphic640may be analyzed to detect potential problems with the weld (e.g., a time graph of the current delivered to the weld may be analyzed for sharp spikes or discontinuities which may be indicative of stubbing, open circuit voltage (OCV), or arc outage). Such spikes, instabilities, or anomalies may then be called out with interface elements (e.g., an alternate marker642) on the post-weld interface.
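A minimal sketch of how such spikes and discontinuities might be detected in a logged current trace is shown below; the target value, tolerance band, and jump threshold are illustrative assumptions rather than values from this disclosure.

```python
# Minimal sketch: flag out-of-band samples and sharp jumps in a current
# trace for call-out on the post-weld interface.
import numpy as np

def find_anomalies(time_s, current_a, target_a, tol_a, jump_a=50.0):
    """Return time instants where the current leaves the tolerance band
    or jumps sharply between consecutive samples (possible arc outage)."""
    current_a = np.asarray(current_a, dtype=float)
    out_of_band = np.abs(current_a - target_a) > tol_a
    jumps = np.zeros_like(out_of_band)
    jumps[1:] = np.abs(np.diff(current_a)) > jump_a
    return [t for t, bad in zip(time_s, out_of_band | jumps) if bad]

# Each returned instant could place an alternate marker on the histogram
# graphic so the operator can jump to the recording around that time.
```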
Interaction with these interface elements by the operator may then bring up a recording of the in-weld interface from the time period surrounding the detected spike or instability. Returning toFIG.6A, in block666the wearer of the headwear20triggers (e.g., by activating graphic634) a transition from the post-weld interface to the pre-weld interface for the next weld to be completed.FIG.6Eshows an example of such an interface, which is similar to the interface shown inFIG.6B, but for the next weld on the workpiece600. FIG.6Fillustrates an example training graphic653that may be presented to an operator to perform operator training. The example graphic653may be presented on the display304, and illustrates a weld environment with a workpiece600. The workpiece600in the graphic may be obtained from images of the environment taken with the camera414. In the training mode, the image processor416and the processor410generate and overlay virtualized versions of the electrical arc668and the resulting weld bead670based on weld signal feedback received from the power supply (which is also in a training mode and does not output current). In the example ofFIG.6F, the image processor416determines an orientation of the workpiece600and/or any other objects in the image based on one or more localization markers672a,672b. The localization markers are recognized in the image captured by the camera414and, with knowledge of the size and arrangement of the markers672a,672b, the image processor416can determine an appropriate size and/or orientation of objects generated for display. The markers672a,672bmay be a bar code, a quick response (QR) code, or any other one-, two-, and/or three-dimensional indicia and/or glyph that is readable by processing an image of the markers672a,672b. Additionally or alternatively, the markers672a,672bmay be implemented using infrared (IR)-frequency reflectors. FIG.6Gillustrates another example weld interface674in which the headwear20measures and/or displays the size of a weld puddle676during a welding operation. To measure the size of the weld puddle676, the headwear20may use one or more camera(s)414. Using one camera, the example processor410identifies and calibrates dimension information using one or more known objects in the weld scene. For example, the processor410may identify (from the images captured by the camera414) a welding wire diameter and/or the localization markers672a,672b, which have a pre-determined size. To obtain an image of the weld puddle676(e.g., without interference by the electrical arc), the processor410may request a short circuit to be performed by the equipment12. In a stable GMAW process, the short circuit events happen at regular intervals, e.g., at 80-250 Hz. When the arc is extinguished during the short circuit, the camera414captures an image of the weld puddle, which can then be measured by the processor410by processing the image with reference to the known reference objects. For example, because the electrode wire is touching the weld pool676(or is at least in close proximity), the processor410may estimate the weld pool dimension(s) from the electrode wire size in the same image. Weld size is usually very close to the weld pool size (proportional or with an offset) if weave is not used. Alternatively, in non-short circuit weld processes such as GTAW (e.g., where the electrode is not consumable but has a known diameter), the camera414is an HDR camera that can view the weld pool despite the intense arc light.
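As a hedged illustration of the wire-referenced measurement just described, the sketch below converts pixel widths to millimeters using the known electrode wire diameter. The pixel measurements are assumed to come from object recognition of the short-circuit image; the wire diameter shown is illustrative.

```python
# Minimal sketch: weld pool width from a short-circuit image, using the
# known electrode wire diameter as the dimensional reference.
WIRE_DIAMETER_MM = 1.2   # known from the weld configuration (assumed)

def pool_width_mm(wire_width_px, pool_width_px):
    """Convert a measured pool width in pixels to millimeters."""
    mm_per_px = WIRE_DIAMETER_MM / wire_width_px
    return pool_width_px * mm_per_px

# Example: wire measures 18 px across, pool measures 120 px across.
print(f"estimated pool width: {pool_width_mm(18, 120):.1f} mm")  # ~8.0 mm
```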
Additionally or alternatively, the cameras may include 1) stereoscopic HDR optical sensors, which may provide depth perception and dimensional measurement to measure the weld pool676; 2) stereoscopic infrared sensors, which identify the weld pool676as the highest-temperature object in the infrared image and filter out other objects; 3) a laser scanner; 4) a time of flight camera; 5) a single camera with a laser ranging device for distance; 6) a single camera with an object that has known dimensions mounted on the torch for reference in front of the weld pool676; and/or 7) a single camera with gas nozzle geometric features that have known dimensions, measuring stick-out and arc length through the arc (e.g., via HDR optical sensors) to determine the size of the weld pool676. Using consecutive images, the processor410can identify the weld pool travel direction (e.g., the direction in which the weld pool develops, opposite the direction in which the weld pool cools to the weld bead). From the weld pool travel direction, the processor410measures the width of the weld pool676(e.g., perpendicular to the travel direction). After determining the size of the weld pool676, the processor410determines a weld size, which may be a delta offset from or proportional to the weld pool676. From the weld size, the processor410further determines the travel speed of the torch504(e.g., using a model or algorithm), heat input (e.g., proportional to the square of the fillet), and/or a weld leg size. The processor410may determine the travel speed as proportional to the welding power divided by the heat input. The example interface674displays the calculated weld puddle diameter and the travel speed in a graphic678that is displayed with the welding scene. In some examples, the processor410alerts the operator based on travel speed conformance. Additionally or alternatively, the processor410may request a change to the wire feed rate for weld size closed loop control or heat input per unit length closed loop control (e.g., for constant penetration). FIGS.7A-7Cillustrate an example welding process700using headwear embodying aspects of this disclosure. The process begins at block702in which a distance and viewing angle between the headwear20and a workpiece are determined. The distance may, for example, be determined using an ultrasonic or infrared sensor integrated into the headwear20. Alternatively, the distance may be determined through image processing algorithms performed by GPU418. In such an embodiment, the captured images of the workpiece may be analyzed to detect characteristics (size, position, etc.) of distinguishing features of the workpiece as they appear in the images. The characteristics may then be used in combination with stored data about the workpiece (e.g., actual dimensions of the features of the workpiece) to determine the viewing distance and angle. For example, the size of the visible markings on the workpiece, the fact that some markings on the workpiece are visible while others are not, the known actual size of the markings, and the known positioning of the markings on the workpiece may be used to determine viewing distance and angle. In block704, instructions for welding the workpiece are retrieved from memory (e.g., from a networked database that the headwear20reaches via a LAN or the Internet). In block706, a portion of the instructions is selected for presentation on the display304based on the determined distance to and/or viewing angle of the workpiece.
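A minimal sketch of such distance-based selection of instruction portions follows; the distance thresholds and portion descriptions are illustrative assumptions, and the measured distance is assumed to come from the sensor or image analysis described above.

```python
# Minimal sketch: select an instruction portion based on viewing distance.
def select_instruction_portion(distance_m, portions):
    """portions: list of (max_distance_m, portion) sorted nearest-first."""
    for max_d, portion in portions:
        if distance_m <= max_d:
            return portion
    return portions[-1][1]  # farther than all thresholds

portions = [
    (0.5, "per-seam guidance (work angle, travel speed, weave pattern)"),
    (2.0, "weld-level instructions (sequence, equipment settings)"),
    (float("inf"), "workpiece overview (part numbers, weld sequence)"),
]
print(select_instruction_portion(1.2, portions))
# -> "weld-level instructions (sequence, equipment settings)"
```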
When the workpiece is viewed from relatively far, the selected portion of the instructions may comprise high-level pictures and instructions that orient the operator to the overall work to assist the operator in planning a sequence of welds to be performed on the workpiece. For example, referring briefly toFIG.7B, when the workpiece is viewed at a relatively far distance d1, instruction portion724is selected for presentation. Instruction portion724is a zoomed-out view of the workpiece comprising graphics726which identify part numbers for the workpiece, two welds to be performed on the workpiece, and the sequence in which the welds are to be performed. Conversely, when the workpiece is viewed from relatively close, the selected portion of the instructions may comprise low-level pictures and instructions to guide the operator in performing a specific weld. For example, referring toFIG.7C, when the workpiece is viewed at a close distance d2, instruction portion734is selected for presentation. Instruction portion734is a zoomed-in view comprising a portion of the graphics726which are still pertinent to the zoomed-in view, and graphic730which provides more in-depth information for welding the seam at which the operator is looking. Although two distances and corresponding instruction portions are described, any number of instruction portions corresponding to different view distances and/or angles may be available. Similarly, switching between different instruction portions need not be based entirely, or even at all, on measured distances. Rather, the operator may select (e.g., via voice and/or tactile input) which instruction portions s/he desires to view at any given time. Furthermore, multiple instruction portions may be viewed simultaneously (e.g., in a side-by-side or picture-in-picture type view). For example, instruction portion724may be presented in the corner of the display while instruction portion734is presented on the remainder of the display. Returning toFIG.7A, in block708the wearer of the headwear20triggers (e.g., by activating graphic604) a transition to an in-weld interface, such as the interface ofFIG.6C. In block710, during welding, the headwear20determines the spatial position of the seam being welded, the welding torch, the electrode, and/or other objects in the field of view of camera(s)302. The headwear20uses this determined spatial position information to update one or more graphical overlays in real time. The spatial position information may, for example, be determined using image processing algorithms that determine 3-D position based on pixel data of stereoscopic images captured by the camera(s)302. The spatial position information may, for example, be used for rendering a graphic, such as630, that overlays a real-time video of the workpiece such that the graphic is maintained in proper alignment with the workpiece (i.e., to track and compensate for the changing position of the welder's head as s/he performs the weld). FIGS.8A and8Billustrate the use of a 3-D rendering generated by welding headwear for enhancing an operator's view of a workpiece to be welded. InFIG.8A, a portion of a workpiece800to be welded is blocked by obstruction802. Obstruction802may be, for example, the welding torch and/or hand of the operator performing the weld. InFIG.8B, the 3-D rendering is used to digitally erase the obstruction802such that the wearer of the headwear20can "see through" the obstruction802.
For example, a virtual extension of the weld bead804, a virtual electrode808, and a virtual extension of the seam810are presented in place of the obstruction802. The rendering may be based on: the current position of the workpiece (determined from the most-recent images captured by the camera(s)302), known information about the workpiece (e.g., from previously captured images when the obstruction802was not blocking the view of the workpiece), and chroma keying (e.g., the torch and the welder's gloves may be painted green or some other color). Referring toFIG.9A, a flowchart illustrates an example process900for welding a workpiece24while causing remote storage of image data based on such welding. The process begins with block901, in which one or more welds to be performed are determined by the headwear20. The determination may be based on an identifier (e.g., a work order number, a part number, etc.) entered by the welder18through, for example, voice recognition and/or tactile input. Alternatively, or additionally, the welder18may view the workpiece to be welded from a distance and/or angle that permit the camera(s)302to capture an image of the workpiece from which an image processing algorithm can detect welds to be performed. For example, unique shapes, markings, and/or other features of a workpiece in the captured image view may be detected and used to retrieve an identifier associated with the workpiece. In block902, welder18initiates a welding operation. For example, welder18may give a voice command for welding system10to enter a weld mode, which voice command is recognized and acted upon by the user interface of the headwear20. The processor410configures the components of headwear20according to the voice command in order to display, on display304, the live welding operation for viewing by the welder. The welder views the weld on display304and controls operation and positioning of electrode16. The processor410may respond to the voice command and send a signal to equipment12to trigger the weld mode in equipment12. For example, the processor410disables a lock out so that power is delivered to electrode16via power supply212when a trigger on the torch is pulled by the welder. Wire feeder214and gas supply216may also be activated accordingly. Block902thus represents the step of the welder placing the welding system in a weld mode so that the workpiece may be welded. Equipment12is configured by the welder18using a user interface of the headwear20based on the determined characteristics of the weld to be performed. For example, a constant current or constant voltage mode may be selected, a nominal voltage and/or nominal current may be set, a voltage limit and/or current limit may be set, and/or the like. Camera(s)414may be configured via a user interface of the headwear20. For example, the expected brightness of the arc may be predicted (based on the equipment configuration and the characteristics of the weld to be made). The electric signals from user interface308may configure the darkness of a lens filter, an exposure time of the camera(s)414, and/or the like. In block904, the operator begins welding. Workpiece24is placed into position, together with the electrode, relative to the field of view of optics302a,302b. The trigger is activated by the welder, a multimedia file is created/opened in memory, and images of the weld operation begin to be captured by the camera(s)414and stored to the multimedia file. The images may be stored as raw unprocessed pixel data coming from camera(s)414.
Alternatively (or additionally), the images may be compressed and stored as processed pixel data from GPU418. In an example implementation, these events may be sequenced such that image capture starts first and allows a few frames during which the cameras414and/or display304are calibrated (adjusting focus, brightness, contrast, saturation, sharpness, etc.) before current begins flowing to the electrode; this may ensure sufficient image quality even at the very beginning of the welding operation. The multimedia file may be stored in memory411of headwear20. Alternatively (or additionally), the processor410may transmit the images (unprocessed or processed) to the communication interface406for transmission to a remote memory such as memory in equipment12and/or memory in server30. Still in block904, in addition to storing the captured images, the images may be displayed in real-time on the display304and/or on one or more other remote displays to which the captured images are transmitted in real-time via link(s)25,27, and/or29. In an example implementation, different amounts of image processing may be performed on one video stream output to the display304and another video stream output via communication interface406. In this regard, higher latency may be tolerable to the remote viewer such that additional processing may be performed on the images prior to presentation on the remote display. In block906, as the welding operation proceeds, the captured image data is processed and may be used to determine, in real-time (e.g., with latency less than 100 ms or, more preferably, less than 5 ms), present welding parameters such as those described above with reference toFIGS.5A-5C. The determined welding parameters may be stored to memory along with the processed and/or unprocessed image data. For example, graphical representations of the welding parameters may be synchronized with the captured images and converted to text/graphics which are overlaid on the captured images prior to storing the images. Alternatively (or additionally), the determined welding parameters may be stored as metadata along with the captured image data. Still referring to block906, as the welding operation proceeds, settings and/or measured output of the equipment12may be received via link25. The processor410may adjust the settings based on the parameters determined. In this manner, equipment settings such as voltage, current, wire speed, and/or others may be adjusted in an attempt to compensate for deviations of the parameters from their ideal values. The equipment settings and/or measured output may be stored along with the captured image data. For example, the settings and/or measured output may be synchronized with the captured images and converted to text/graphics which are overlaid on the image data by GPU418prior to storing the image data, and/or the identifier may be stored in metadata of the multimedia file in which the image data is stored. Still referring to block906, as the welding operation proceeds, other information may be captured (by the camera(s)414and/or other sensors422) and stored along with the captured images. This other data may then be synchronized to the captured images and stored with the captured images (e.g., as metadata and/or converted to text/graphics and overlaid on the images).
Such data may include, for example, an overall identifier of the weld operation determined in block901, individual part numbers of the parts being welded (e.g., barcoded such that they can be automatically detected from the captured images), timestamps, climate (temperature, humidity, etc.), and/or the like. The multimedia file containing the captured images and other data may be indexed by any of this information for later searching and retrieval. In block908, the first weld operation on workpiece24is completed. Also in block908, the multimedia file to which the images and other data were written during blocks904and906may be closed (e.g., file headers added, checksums calculated, etc.). In some instances, the file may be transferred for long term storage (e.g., from memory411of the headwear20to a database residing in memory of server30). Where the captured image data is stored as raw unprocessed pixel data, such raw unprocessed pixel data may be processed externally of headwear20. In block910, the processor410transmits the pixel data to, for example, a memory at server30via antenna402or port404. A processor at server30processes the raw unprocessed data and stores the processed data in memory at server30. There may be more compute power at the server30, and greater latency may be tolerated, as compared to processing in headwear20prior to presentation on display304. If there is too much latency inside the helmet, the welder may become disoriented. Similarly, pixel data already processed in headwear20under latency constraints (e.g., to condition it for real-time presentation on the display304) may be further processed by the headwear20and/or by an external processor (such as in server30). Such additional processing may enable determining additional and/or more-detailed information about the weld for which there was insufficient time and/or compute power prior to real-time presentation of the captured images. In block912, the images captured during block904are transmitted from the memory of server30to a second remote location such as a cloud server. For example, the images on the cloud may be retrieved (e.g., using a web-based application accessed through a browser or other web client or a non-web app) by an instructor or supervisor to review the work of a student or employee. As another example, the images may be reviewed by a quality control auditor as part of random quality inspections and/or as part of an investigation into a failed weld (e.g., if the welded part later fails in the quality assurance (QA) department of a fabricator or in the field, the captured images and the information stored along with the images may be viewed to see if the weld process was the likely cause of the failure). In another example implementation, the headwear20may comprise a see-through or transparent optical display mounted behind the conventional auto-darkening lens, operable to perform wavelength selective switching (WSS) to prevent peak arc spectral wavelengths from reaching the wearer's eyes. The WSS may be controlled based on the output of a photodiode sensor which detects the presence or absence of the welding arc, similar to the sensor used by an auto-darkening lens. When the welding arc is present, the WSS is configured such that the display enables notch filters with wavelengths corresponding to the peaks in the power spectral density of the welding arc. When the welding arc is absent, the WSS is configured such that the display passes all (or most) of the visible spectrum (i.e., the display is substantially transparent when the welding arc is not present).
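A minimal sketch of such photodiode-driven WSS control appears below. The display methods, arc-peak wavelengths, and detection threshold are illustrative assumptions (hypothetical names, not an actual API); real notch wavelengths would be chosen per the shielding gas and parent material, as noted above.

```python
# Minimal sketch: enable notch filters at arc spectral peaks while the arc
# is detected; otherwise pass the visible spectrum.
ARGON_ARC_PEAKS_NM = [696.5, 750.4, 763.5, 811.5]   # illustrative values
PHOTODIODE_ARC_THRESHOLD = 0.8                      # normalized, assumed

def update_wss(display, photodiode_level):
    """display: hypothetical WSS display object with notch-filter controls."""
    if photodiode_level >= PHOTODIODE_ARC_THRESHOLD:
        display.set_notch_filters(ARGON_ARC_PEAKS_NM)  # block arc peaks
    else:
        display.clear_notch_filters()  # substantially transparent
```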
Such a WSS display may comprise, for example, one or more Liquid Crystal on Silicon (LCoS) displays. In an example implementation, the WSS display notch filter wavelengths may be determined based on characteristics of the weld being performed (e.g., depending on welding shielding gas composition, welding materials, etc., which may affect the wavelengths emitted by the arc) so that the WSS display wavelengths are programmed to reject the peaks in the arc spectrum of the specific known gas or parent material being used in welding. FIG.9Bis a flowchart illustrating an example welding process920to transmit images during a weld operation. The example process920may be performed to capture images (e.g., video) of an ongoing weld operation and transmit the images to an observation computer for others to view and/or for storage of the images. Blocks901,902, and904are implemented as described above with reference toFIG.9A. In block922, the processor410transmits the captured video via the communications interface406(e.g., via wired and/or wireless communications) to another device. For example, the processor410may transmit the video to the server30ofFIG.1, which may include a display for viewing by an instructor or supervisor of the weld operator. In block924, the processor410determines whether the weld is completed. For example, the processor410may receive a trigger release signal from the weld equipment12via the communications interface406and/or detect a reduction in brightness via the camera(s)414. If the end of the weld is not detected (block924), control returns to block922to continue transmitting the captured video. When the end of the weld is detected (block924), the example process920ends. FIG.10is a flowchart illustrating example machine readable instructions1000which may be executed by a processor to generate a weld record for a welding process. The example instructions1000may be executed by the example headwear20ofFIGS.3and/or4, by a mobile device, and/or by another computing device (e.g., a smartphone mounted to a welding helmet or other personal protective equipment). The example instructions1000will be described with reference to the example headwear20ofFIG.4(e.g., the example processor410executing the instructions428stored in the memory426). At block1002, the processor410determines whether a weld operation has started. For example, the processor410may identify a weld operation based on a signal from the sensor(s)422, one or more image(s) from the optical sensor414, and/or via a signal received via the communications interface406(e.g., a synchronization signal from a welding power source and/or from a server). If the weld operation has not started (block1002), the processor410iterates block1002to monitor for the start of a weld operation. At block1004, the example optical sensor414captures an HDR image. For example, the optical sensor414may record high dynamic range image(s), record high dynamic range video, record wide dynamic resolution image(s), record wide dynamic resolution video, record time-of-flight image(s), record structured-light three-dimensional image(s), and/or record images at a frame rate of 500-10,000 frames per second or higher. At block1006, the processor410determines whether a circular buffer is full. For example, the circular buffer may be a designated portion of the memory426and/or a separate buffer or storage device.
If the circular buffer is full (block1006), the example processor410overwrites the oldest image stored in the circular buffer with the captured image (block1008). For example, the processor410and the circular buffer may store buffered images in a first-in-first-out sequence to retain the most recent images. If the circular buffer is not full (block1006), the processor410stores the captured image in the circular buffer (block1010). After storing the captured image in the circular buffer (block1008or block1010), at block1012the example processor410monitors welding parameter measurement(s). For example, the processor410may receive one or more welding parameters from a power source being used in the welding operation via the communications interface406, and compare the welding parameter(s) to corresponding range(s) of values. At block1014, the processor410determines whether any of the welding parameter(s) are outside of a corresponding acceptable range. For example, the processor410may determine if a current or voltage has exceeded a range designated for the welding operation. If none of the welding parameter(s) are outside of the corresponding range (block1014), the processor410returns control to block1004. If any of the welding parameter(s) are outside of the corresponding range (block1014), the example optical sensor414captures an HDR image (block1016). Block1016may be implemented in the same manner as block1004. At block1018, the example processor410stores the captured image(s) in the memory426. In block1018, the processor410does not store the captured image(s) in the circular buffer, and instead stores the captured image(s) in a different portion of the memory426while leaving the circular buffer intact. At block1020, the processor410determines whether the welding operation is finished. For example, the processor410may determine whether the weld operation is finished based on a signal from the sensor(s)422, one or more image(s) from the optical sensor414, and/or via a signal received via the communications interface406(e.g., from the welding power source and/or the server). If the welding operation is not finished (block1020), the processor410returns control to block1016to continue capturing images. When the welding operation is finished (block1020), at block1022the processor410appends the images in the circular buffer to the images in the memory to generate a record of the welding operation (e.g., a video of the weld from the operator's approximate point of view, or from another vantage point from which the welding operation can be adequately observed). In some examples, the processor410also appends welding parameter measurements that have been received to the record. The example processor410includes time stamps of the images and/or the parameter measurements to enable any deviations in the welding parameters to be correlated to the images taken at approximately the same time. At block1024, the example processor410transmits the record of the welding operation to a server. For example, the processor410may automatically transmit the record, transmit the record in response to a request, and/or transmit the record when one or more criteria are met (e.g., sufficient battery power to complete transmission, sufficient wireless network connectivity and/or speed, etc.). The example instructions1000ofFIG.10generate a record that enables a review of a welding operation for training, production control, maintenance, and/or for any other purpose.
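One way the circular-buffer strategy of the instructions1000might be realized is sketched below, assuming an in-memory ring of recent frames; the buffer depth and the class/method names are illustrative.

```python
# Minimal sketch: retain recent frames in a fixed-size ring; once a
# parameter deviation is detected, store subsequent frames separately and
# combine the two sets into the final record.
from collections import deque

class WeldRecorder:
    def __init__(self, buffer_frames=300):          # e.g., ~10 s at 30 fps
        self.ring = deque(maxlen=buffer_frames)     # overwrites oldest frame
        self.event_frames = []
        self.triggered = False

    def add_frame(self, timestamp, image, params):
        frame = (timestamp, image, params)
        if self.triggered:
            self.event_frames.append(frame)         # kept outside the ring
        else:
            self.ring.append(frame)

    def trigger(self):                              # parameter out of range
        self.triggered = True

    def finish(self):
        """Combine the buffered pre-event frames with the event frames."""
        return list(self.ring) + self.event_frames
```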
While the example instructions1000trigger the generation of a record in response to identifying welding parameters that fall outside of an acceptable range, in some other examples the instructions1000automatically generate the record of the weld based on another trigger, and/or without any trigger (e.g., to record high quality welds for training and/or to record all welds). In some examples, the weld record further includes audio collected during the weld by one or more microphones. The audio information may be replayed and/or analyzed for audio signatures corresponding to different weld qualities and/or defects. Example methods and systems that may be used to collect and/or analyze welding audio are described in U.S. Pat. No. 5,306,893, issued Apr. 26, 1994. The entirety of U.S. Pat. No. 5,306,893 is incorporated herein by reference. FIG.11is a block diagram of an example implementation of the server30ofFIG.1. The example server30ofFIG.11may be a general-purpose computer, a laptop computer, a tablet computer, a mobile device, a server, and/or any other type of computing device. In some examples, the server30may be implemented in a cloud computing environment using one or more physical machines and, in some examples, one or more virtual machines in the data center. The example server30ofFIG.11includes a processor1102. The example processor1102may be any general purpose central processing unit (CPU) from any manufacturer. In some other examples, the processor1102may include one or more specialized processing units, such as graphic processing units and/or digital signal processors. The processor1102executes machine readable instructions1104that may be stored locally at the processor (e.g., in an included cache), in a random access memory1106(or other volatile memory), in a read only memory1108(or other non-volatile memory such as FLASH memory), and/or in a mass storage device1110. The example mass storage device1110may be a hard drive, a solid state storage drive, a hybrid drive, a RAID array, and/or any other mass data storage device. A bus1112enables communications between the processor1102, the RAM1106, the ROM1108, the mass storage device1110, a network interface1114, and/or an input/output interface1116. The example network interface1114includes hardware, firmware, and/or software to connect the server30to a communications network1118such as the Internet. For example, the network interface1114may include IEEE 802.X-compliant wireless and/or wired communications hardware for transmitting and/or receiving communications. The example I/O interface1116ofFIG.11includes hardware, firmware, and/or software to connect one or more input/output devices1120to the processor1102for providing input to the processor1102and/or providing output from the processor1102. For example, the I/O interface1116may include a graphics processing unit for interfacing with a display device, a universal serial bus port for interfacing with one or more USB-compliant devices, a FireWire, a field bus, and/or any other type of interface. Example I/O device(s)1120may include a keyboard, a keypad, a mouse, a trackball, a pointing device, a microphone, an audio speaker, a display device, an optical media drive, a multi-touch touch screen, a gesture recognition interface, a magnetic media drive, and/or any other type of input and/or output device. The example server30may access a non-transitory machine readable medium1122via the I/O interface1116and/or the I/O device(s)1120. 
Examples of the machine readable medium1122ofFIG.11include optical discs (e.g., compact discs (CDs), digital versatile/video discs (DVDs), Blu-ray discs, etc.), magnetic media (e.g., floppy disks), portable storage media (e.g., portable flash drives, secure digital (SD) cards, etc.), and/or any other type of removable and/or installed machine readable media. FIG.12is a flowchart illustrating example machine readable instructions1200which may be executed by a processor to implement the server30ofFIGS.1and/or11to store and/or display welding records of welding operations. The example instructions1200may be stored on any of the non-transitory machine readable media described with reference toFIG.11, and/or executed by the processor1102ofFIG.11. In block1202, the example processor1102determines whether a weld operation has been detected. For example, the processor1102may be in communication with the equipment12(e.g., a welding power supply) ofFIG.1, from which the processor1102receives statuses of welding operations performed using the equipment12. If a weld operation has been detected (block1202), in block1204the processor1102transmits synchronization signal(s) to capture device(s) associated with the welding operation. For example, the processor1102may be in communication with the headwear20and/or the camera32ofFIG.1. In block1206, the example processor1102determines whether an end of the weld operation has been detected. For example, the processor1102may receive an end signal from the equipment12indicating that the equipment12has ended a welding operation (e.g., in response to release of the gun trigger by the operator). If an end of the weld operation has not been detected (block1206), the processor1102returns control to block1204. When the end of the weld operation has been detected (block1206), the processor1102transmits an end signal to the capture device(s) (e.g., the headwear20, the camera32) (block1208). In some examples, the processor1102may further alert or remind an operator, via the display, to look at the completed weld for a visual inspection. In block1210, the example processor1102transmits requests to the capture device(s) for welding records. In response, the example capture device(s) may generate the welding records as described above with reference toFIG.10. In block1212, the example processor1102transmits requests to the power supply for welding parameter measurements. In response, the example equipment12may collect a set of measurements (e.g., voltage measurements, current measurements, process selection, etc.) generated during the welding operation. In some examples, the equipment12sends the parameter measurements during the welding operation, and the processor1102may access the previously-received measurements in lieu of block1212. In block1214, the processor1102merges the welding record(s) received from the capture device(s) with the welding parameter measurements using corresponding time stamps of the image(s) in the welding records and the welding parameter measurements. Thus, the merged welding records and parameter measurements can synchronize captured images with welding parameter measurements that occurred at the same or approximately the same times as the images. After merging the record(s) (block1214), or if a weld operation was not detected (block1202), in block1216the processor1102determines whether any welding record(s) have been requested.
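One way the timestamp-based merging of block1214might be realized is sketched below, assuming both streams are sorted by time; the nearest-measurement pairing shown is an illustrative choice rather than a required behavior.

```python
# Minimal sketch: pair each captured frame with the parameter measurement
# taken at approximately the same instant.
import bisect

def merge_by_timestamp(frames, measurements):
    """frames: [(t, image)], measurements: [(t, params)], both sorted by t."""
    if not measurements:
        return [(t, image, None) for t, image in frames]
    m_times = [t for t, _ in measurements]
    merged = []
    for t, image in frames:
        i = bisect.bisect_left(m_times, t)
        # Pick whichever neighboring measurement is closer in time.
        candidates = [j for j in (i - 1, i) if 0 <= j < len(measurements)]
        j = min(candidates, key=lambda k: abs(m_times[k] - t))
        merged.append((t, image, measurements[j][1]))
    return merged
```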
A QA manager, a welder trainer, a shop supervisor, or a service technician, for example, may wish to review the images and/or parameter measurements captured for a particular welding operation. If a welding record has been requested (block1216), the processor1102accesses the requested welding record(s) (block1218). For example, the processor1102may access the welding record(s) from a local or remote storage device. The processor1102outputs the synchronized welding image(s) and welding parameter measurements (block1220). For example, the processor1102may generate a web-based interface (e.g., an HTML5 interface, etc.) for display on a display device and/or interactive viewing by a viewer. In some examples, the processor1102transmits the interface to another device (e.g., a tablet computer, a computer terminal, etc.) for viewing and/or interaction. After outputting the synchronized welding image(s) and welding parameter measurements (block1220), and/or if no welding records were requested (block1216), the example instructions1200may end. FIG.13is a flowchart illustrating example computer readable instructions which may be executed to implement the example headwear20ofFIGS.3A-4Bto provide weld operator training. In the example ofFIG.13, the headwear20is implemented using a mobile device such as a smartphone that can be mounted to a helmet or other head-worn device such that the display of the mobile device is facing the weld operator and a camera of the mobile device is facing in the same direction as the weld operator. At block1302, the processor410initializes a mobile device application (e.g., an app), which may be stored as the instructions428in the memory426. At block1304, when the mobile device app is initialized, the processor410establishes communications with weld equipment, such as a power supply. For example, the processor410may use the communications interface406and one or more wired or wireless protocols, such as Zigbee, Bluetooth, WiFi, MiFi, cellular, or satellite network protocols, to communicate with a power supply that is to be used for the training. At block1305, the processor410receives weld configuration information. The example weld configuration information may include, for example, a description of the welding equipment being used. The processor410may receive the weld configuration information via the communications interface406and/or via a user input. At block1306, the processor410initializes a camera414of the mobile device. At block1308, the image processor416and/or the processor410process camera images with image processing techniques to identify a weld scene. As an example, the image processor416may identify localization markers within the images captured by the camera414to identify a weld scene. At block1310, the processor410determines whether the weld scene is detected. If the weld scene is not detected (block1310), control returns to block1308to continue processing the camera images. When the weld scene is detected (block1310), at block1312the processor410monitors the communications interface406. For example, the processor410may wait for a trigger signal from the power supply or wire feeder indicating that the weld operator has pulled the trigger. At block1314, the processor410determines if a weld start command has been detected, such as by receiving a trigger signal, a voice command, or other user input, and/or by identifying an electrical arc from captured images.
In some examples, such as training using actual welding, the processor410may also monitor for an indication from the image processor416of whether an arc start has been detected (e.g., via recognizing a high-brightness image) as part of the weld start determination of block1314. If a weld start is not detected (block1314), control returns to block1308. When a weld start is detected (block1314), at block1316the image processor416processes camera images to identify a weld scene. For example, the image processor416may identify weld objects such as a weld pool, an electrode, an arc, and/or a weld gun in the images captured by the camera414. At block1318, the processor410receives weld parameters from the welding equipment (e.g., via the communications interface406). Example weld parameters may include a voltage setpoint, a current setpoint, a weld process (e.g., MIG, TIG, spray transfer, controlled short circuit, etc.), and/or a wire feed speed. At block1320, the GPU418generates and displays simulated objects with (e.g., overlaid on) the camera images on the display304of the mobile device to display the weld scene to the operator. The simulated objects may include a simulated arc, a simulated weld puddle, graphics illustrating the received weld data, and/or any other training information. In the example, the display304acts as the operator's vision of the weld scene. At block1321, the example processor410adjusts or configures the display of the simulation (e.g., the displayed images of the weld scene and/or the simulated objects) based on the weld parameters and/or features extracted from images of the weld scene. For example, extracted features such as contact-tip-to-work distance indicate how an operator performs, and may be extracted from the images by identifying the electrode and/or the weld torch, identifying the workpiece, calibrating distance measurements using a distance reference, and measuring the distance using the calibrated distances. For example, the processor410may determine how the simulated weld would act based on a model (e.g., a thermodynamic model, a neural network model, etc.), using the wire feed speed and/or a gun travel speed to determine a puddle size and a weld voltage to determine an arc length. The processor410determines how the weld would act in a real welding situation and displays a corresponding image of the weld to the user. At block1322, the processor410determines whether a weld end command is detected. For example, the processor410may receive a trigger release signal from the weld equipment via the communications interface406. If the end of the weld is not detected (block1322), control returns to block1316. When the end of the weld is detected (block1322), at block1324the processor410summarizes and displays the weld performance for the training weld in a post-weld summary interface on the display304. When the weld operator clears the display (e.g., via a voice command or other input), control returns to block1308. FIG.14is a flowchart illustrating example computer readable instructions1400which may be executed to implement the example headwear20ofFIGS.3A-4Bto focus and/or zoom an image sensor based on identifying a location of a weld arc. The example instructions1400may be executed by the processor410ofFIGS.3C and/or4Bto focus the image sensor(s)422and/or zoom an image captured by the image sensor(s) for display on the display304. The instructions may be performed in conjunction with any of the other instructions ofFIGS.6A,7A,9,10,12, and/or13.
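The blocks ofFIG.14are described in detail below; as a non-limiting preview, the brightest-region localization of blocks1406-1408 might be sketched as follows (OpenCV-based; the blur kernel and the fixed puddle offset are illustrative assumptions):

```python
import cv2

def locate_arc_and_interest(frame_bgr, puddle_offset_px=(0, 40)):
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    # Blur first so minMaxLoc returns the center of a bright region
    # rather than a single hot pixel (block1406).
    smoothed = cv2.GaussianBlur(gray, (21, 21), 0)
    _minv, _maxv, _minloc, arc_xy = cv2.minMaxLoc(smoothed)
    # The puddle trails the arc by a short, assumed offset (block1408).
    interest_xy = (arc_xy[0] + puddle_offset_px[0],
                   arc_xy[1] + puddle_offset_px[1])
    return arc_xy, interest_xy
```

The returned location of interest could then be handed to the camera focus control (block1410) and to any torch-zoom display (blocks1412-1414).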
At block1402, the processor410determines whether a weld operation is detected. For example, the processor410may process one or more images from the image sensor(s)422to determine whether an arc is present based on whether a brightness of the image and/or any portion of the image exceeds a threshold. If a weld operation is not detected (block1402), control iterates until a weld operation is detected. When a weld operation is detected (block1402), the image sensor(s)422capture image(s) of the weld scene (block1404). In some examples, the image sensor(s)422capture multiple images to facilitate generation of HDR, WDR, or MDR images. In block1406, the processor410determines a location of a weld arc within the weld scene by detecting the brightest location (e.g., region) in the image(s). In some cases in which multiple (e.g., stereoscopic) image sensors are used, a three-dimensional location of the arc is determined. In block1408, the processor410determines a location of interest in the weld scene based on the location of the weld arc. For example, the processor410may determine the location of the weld puddle as a short distance from the location of the weld arc, due to the relationship between the weld arc and the weld puddle. In block1410, the example processor410controls the camera(s)414(e.g., HDR image sensors) to focus on the location of interest. By focusing on the location of interest, the processor410may improve the operator's view of the location of interest. In block1412, the processor410determines whether a torch zoom is selected. When the torch zoom is selected (block1412), the processor410generates and displays (e.g., via the display304) a zoomed-in image of the location of interest (block1414). After generating and presenting the zoomed-in image (block1414), or if the torch zoom is not selected (block1412), control returns to block1402. FIG.15is a flowchart representative of example machine readable instructions1500which may be executed to implement the example headwear20ofFIGS.3A-4Bto perform a pre-weld inspection of a weld scene. While the example instructions1500are described with reference to the processor410, the example instructions1500may be implemented using external processing resources such as cloud computing or any other external computing resources. In block1502, the processor410receives weld information, such as a WPS, equipment information, workpiece information, and/or any other information describing the weld operation. The processor410may receive the weld information via the communications interface and/or via user input. In block1504, the camera(s)414capture image(s) of the weld scene. In block1506, the processor410processes the images to identify objects related to the weld. For example, the processor410may identify the workpiece (e.g., one or more pieces to be welded), a weld torch, an electrode wire, and/or any other objects in the images of the weld scene. In block1508, the processor410analyzes the alignment of the objects to be welded. For example, the processor410may identify the outline(s) of the pieces to be welded and compare the positions of the pieces based on the outline(s). For example, if a surface of a first piece to be welded is abutting an incorrect surface of a second piece to be welded, the processor410may identify that the orientation of identified edges, surfaces, and/or cross-sections of the first and second pieces do not match the weld information. In block1510, the processor410measures gap(s) present between the pieces to be welded.
In block1512, the processor410measures the electrode wire size based on the image. For example, the processor410may use a reference to determine measurements of distance, and apply the measurements to one or more gaps between pieces to be welded and/or to determine the electrode wire size. Example references include markers having known sizes, orientations, and/or spacing in the image, and/or a known distance between multiple image sensors (e.g., stereoscopic image sensors). From the reference(s), the processor410can measure the gap(s) and/or electrode wire sizes. In some examples, the measurements may be determined from stereoscopic images taken by the camera(s)414. The example electrode wire may be identified for measurement based on performing edge detection and/or other image processing techniques on the image to identify a welding gun and the electrode wire in proximity. Additionally or alternatively, the processor410identifies and verifies acceptable weld conditions and/or unacceptable weld conditions such as: whether the appropriate welding tool (e.g., torch) is identified; whether identified welding consumable(s) match the consumable(s) that are specified in a WPS (e.g., based on matching an identification code, such as a QR code, with a code specified in the WPS); whether there is a proper fixture engagement of the workpiece (e.g., if work clamp(s) are engaged, whether tack welds exist and/or are in the correct pattern(s) and/or location(s)); whether a voltage sense lead is connected; whether the contact tip and/or nozzle are in acceptable condition; whether the workpiece surface has been properly cleaned in accordance with a WPS (e.g., based on color); whether the workpiece fit-up (e.g., gap between parts) is within a tolerance window; and/or any other visually identifiable weld condition. In block1514, the processor410compares the measured characteristics (e.g., alignment, gap sizes, electrode wire sizes, etc.) to the weld information (e.g., from the WPS). In block1516, the processor410determines whether a discrepancy is detected between the measured characteristics and the weld information. For example, the processor410may determine whether the workpieces are out of alignment by more than a threshold, whether any gaps are larger than is permissible, and/or whether the electrode wire size is incorrect, based on the weld information for the weld to be performed. If the processor410detects any discrepancies (block1516), in block1518the processor410generates a pre-weld alert signal. The pre-weld alert signal may be displayed via the display304, output via the speaker driver412, and/or communicated to the equipment12via the communications interface406. In block1520, the processor410disables (e.g., prevents) the weld by communicating a disable signal to the equipment via the communications interface406. In some examples, the pre-weld alert signal serves as the disable signal to the equipment12. While the weld is disabled, a pull of the weld torch trigger by the user does not result in an arc start. In some examples, the weld is disabled until another pre-weld inspection is passed, or the weld is manually enabled by the operator and/or a supervisor. If the processor410does not detect any discrepancies (block1516), in block1522the processor410enables the weld. For example, the processor410provides an enable signal to the equipment12. In some examples, the weld is enabled until the processor410sends a disable signal.
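A non-limiting sketch of the reference-based measurements of blocks1510-1512: a marker of known physical size calibrates a millimeters-per-pixel scale that is then applied to pixel measurements from earlier edge-detection steps. The marker size and the input pixel values are illustrative assumptions:

```python
def mm_per_pixel(marker_width_px, marker_width_mm):
    # A marker of known physical size in the image fixes the scale.
    return marker_width_mm / marker_width_px

def measure_features(gap_px, wire_width_px, marker_width_px,
                     marker_width_mm=10.0):
    scale = mm_per_pixel(marker_width_px, marker_width_mm)
    return {"gap_mm": gap_px * scale,
            "wire_diameter_mm": wire_width_px * scale}
```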
After disabling the weld (block1520) or enabling the weld (block1522), the example instructions1500end. FIG.16is a flowchart representative of example machine readable instructions1600which may be executed to implement the example headwear20ofFIGS.3A-4Bto perform a post-weld inspection of a weld scene. While the example instructions1600are described with reference to the processor410, the example instructions1600may be implemented using external processing resources such as cloud computing or any other external computing resources. In block1602, the processor410receives weld information, such as a WPS, equipment information, workpiece information, and/or any other information describing the weld operation. The processor410may receive the weld information via the communications interface and/or via user input. In block1604, the camera(s)414capture image(s) of the weld scene. In block1606, the processor410processes the images to identify objects related to the weld. For example, the processor410may identify the workpiece (e.g., one or more pieces to be welded), a weld torch, an electrode wire, and/or any other objects in the images of the weld scene. In block1608, the processor410identifies hole(s) and/or crack(s) present in the completed weld. For example, the processor410may identify holes and/or cracks based on identifying colors and/or shapes in the weld bead that are substantially different from the surrounding weld bead. In block1610, the processor410identifies burn-through present in the completed weld. For example, the processor410may identify burn-through by analysis of the images using burn-through shapes and/or colors based on the material. In block1612, the processor410identifies the weld geometry. For example, the processor410may analyze the path of the completed weld bead to determine the size of the weld and/or the length of the weld. In block1614, the processor410identifies the placement of the weld. For example, the processor410may determine whether the workpiece was welded at a correct location and/or whether spot welds were properly placed. The processor410may use reference points to determine measurements of distance, and apply the measurements to analyze the weld geometry and/or placement. In some examples, the measurements may be determined from stereoscopic images taken by the camera(s)414. In block1616, the processor410determines whether any discrepancies between the measured characteristics and the weld information are detected. For example, the processor410may determine whether there are any holes, cracks, and/or burn-through present, if the weld geometry is outside of a threshold acceptable geometry, and/or if the weld was improperly located. If discrepancies are identified (block1616), in block1618the processor410generates a post-weld alert signal. The post-weld alert signal may be displayed via the display304, output via the speaker driver412, and/or communicated to the equipment12and/or to a weld monitoring server via the communications interface406. On the other hand, if no discrepancies are identified (block1616), in block1620the processor410approves the weld. The processor410may send a weld approval signal to the equipment and/or to a weld monitoring server. After generating the post-weld alert signal (block1618), or after approving the weld (block1620), the example instructions1600end.
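As a non-limiting sketch, the discrepancy comparison of block1616might be expressed as follows; the field names and the tolerance structure are assumed for illustration only:

```python
def post_weld_discrepancies(measured, wps):
    """Compare measured post-weld characteristics against WPS-derived limits.
    Both dict layouts are hypothetical."""
    issues = []
    if measured["hole_count"] > 0 or measured["crack_count"] > 0:
        issues.append("holes/cracks present")
    if measured["burn_through"]:
        issues.append("burn-through present")
    if abs(measured["length_mm"] - wps["length_mm"]) > wps["length_tol_mm"]:
        issues.append("weld length out of tolerance")
    if measured["placement_error_mm"] > wps["placement_tol_mm"]:
        issues.append("weld improperly located")
    return issues   # an empty list corresponds to approving the weld (block1620)
```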
FIG.17illustrates another example of the welding system10in which an operator18is wearing welding headwear20and welding workpieces24aand24busing the torch22to which power or fuel is delivered by equipment12via a conduit14. In the example ofFIG.17, the camera(s)414may be used for operator identification. For example, the operator may face the camera and may be logged into the welding system by facial recognition software that analyzes the facial features of the operator and compares them with a database of authorized operators for particular equipment or for a particular weld job. A qualification record of the operator may be automatically checked for presence and expiration. Similarly, when the operator is wearing the helmet, the camera may capture identifying features (e.g., information tags or high contrast markers50) of the welding gun, power source, and consumables such as gas and wire, etc. Image processing software may log in the proper gun, consumables, etc. for the weld job and check against a WPS (weld procedure specification) for mistakes. The markers50could be, for example, barcodes or QR codes printed on the packaging of the welding consumables (e.g., QR code50bon the spool52and QR code50con gas cylinder54) so that proper consumables can be identified for conformance to the WPS prior to welding. Another example is that a QR code50gon the operator gloves can be used to log in the operator to the welding system and the operator credentials (his/her WPQ) are verified and accepted. Another example is that a QR code near the joint (e.g. QR code50d) is used to identify the weld number within a weldment assembly so that the proper weld sequence can be enforced and the weld procedure for that particular joint can be recalled or set automatically by the welding equipment12. Alternatively, high contrast markers can be printed on stickers or pre-etched on the workpieces24aand24b(e.g., marker strips50eand50f) and on the welding gun (e.g., marker50a) to track gun position, orientation and motion relative to the seam. Information such as gun travel speed, gun orientation relative to the joint (i.e. torch angle and travel angle) and wire placement relative to the center of the joint can be extracted from image processing. The markers50a,50e, and50fmay, for example, be printed with near-infrared reflective ink or pigments so that they are more visible under the bright arc conditions if the imager of the camera is sensitive to IR but rejects the visible arc spectrum. In yet another example, operator18may look at the spool52and the camera414in the headwear20can capture the image of wire spool52and the corresponding image processing will determine if the spool is low on wire and needs replenishment. Similarly, operator18may hold the torch22close to the helmet and visually inspect the front end of the torch22for the Third Eye camera to capture the tip and the nozzle conditions. The corresponding image processing will determine if the tip or the nozzle need to be changed based on predetermined criteria. Another example is, after welding, operator18may visually inspect the weld he/she just completed. The Third Eye camera may automatically capture the image of the weld and compute the actual length, width, bead shape and exterior defects or discontinuities of the weld and compare the measurements with the quality criteria for visual acceptance. Missing welds, oversized welds, undersized welds or poor quality welds can be automatically flagged in the system. Operator18may be notified on the spot via the speaker driver412.
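A minimal sketch of the QR-code conformance check described above, assuming OpenCV's built-in QR detector; the structure of the allowed-code list derived from the WPS is an assumption:

```python
import cv2

QR_DETECTOR = cv2.QRCodeDetector()

def consumable_matches_wps(frame_bgr, wps_allowed_codes):
    payload, _points, _rectified = QR_DETECTOR.detectAndDecode(frame_bgr)
    if not payload:
        return None                 # no readable code in this frame
    # Conformance check: the decoded code must appear in the WPS list.
    return payload in wps_allowed_codes
```

The same pattern could serve the glove, joint, spool, and cylinder codes (50b,50c,50d,50g), with the lookup table swapped per use case.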
FIG.18illustrates another example welding headwear20including electromagnetic shielding1810, a light source1812, and solar cells1814. The example headwear20further includes the optical components302, the display304, the user interface components308, the antenna402, the camera(s)414, the sensors422, the power source424, and the PCB430. The shielding1810may be positioned to shield the wearer from electromagnetic emissions from the antenna402and other electronic components of the headwear20. The light source1812may include, for example, a super bright LED array to help illuminate the weld scene. To conserve battery, control circuitry may activate the light source1812only when it is determined that additional lighting would be beneficial (e.g., when the brightness of the weld scene without the additional lighting is beyond the capabilities of the camera(s)414, such as before the arc is lit). Additionally, or alternatively, the light source may be activated and deactivated by an operator interface, such as a voice command, upon pull of the trigger of the welding torch, etc. FIG.19is a flowchart illustrating a process1900for automatic exposure control. The example process1900may be performed by the headwear20ofFIGS.3A-4B and/or18. In block1902, the camera(s)414are ready to capture an image. In block1904, circuitry of the headwear20determines whether a welding arc will be present when the image is captured. If the arc will be present, then in block1906a relatively shorter exposure time and a first set of image processing parameters and algorithms are used to reveal the details of the dark areas such as the joint and wire extension. The first set of image processing parameters may comprise, for example, relatively more aggressive image compression and digital image filtering of the bright scene. Returning to block1904, if the arc will not be present during the image capture, in block1908a longer exposure may be used together with a second set of image processing parameters and algorithms. The second set of image processing parameters and algorithms may comprise, for example, a relatively less aggressive image compression ratio and digital image filtering for the dark scene. In block1910, the image is captured using the exposure and the parameters and algorithms determined in either block1906or block1908, and then the process returns to block1902for the next capture. Returning to block1904, there are a variety of ways in which it may be determined whether the arc will be present during the capture. In an example implementation, arc signals (e.g., communicated to the headwear20from the equipment12) may be used as a feed forward signal to adapt the exposure time. For example, if the arc voltage sensed (not including the welding cable voltage and electrode stickout voltage) is greater than 14V, it may be determined that an arc is present and will likely remain present for the impending image capture. In another example implementation, rather than predicting merely the presence or absence of the arc, the brightness of the arc may be predicted and used for adapting the exposure time and/or image processing parameters. For example, the level of arc voltage or arc current (or the product of voltage and current, which is the arc power) can be used to predict the brightness of the scene and choose exposure and image processing parameters and algorithms accordingly.
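A non-limiting sketch of the feed-forward selection of blocks1904-1908, using the 14V arc-voltage rule noted above; the exposure values, the power-to-exposure scaling, and the parameter sets are illustrative assumptions:

```python
def choose_capture_settings(arc_volts, arc_amps):
    # Arc voltage above ~14V (cable and stickout drops excluded) predicts
    # that the arc will be present for the impending capture (block1904).
    if arc_volts > 14.0:
        arc_power_w = arc_volts * arc_amps        # brightness predictor
        # Brighter predicted scene -> shorter exposure (block1906).
        exposure_us = max(50, int(5000 / max(arc_power_w / 1000.0, 1.0)))
        params = {"compression": "aggressive", "filtering": "bright_scene"}
    else:
        exposure_us = 8000                        # dark scene (block1908)
        params = {"compression": "light", "filtering": "dark_scene"}
    return exposure_us, params
```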
This is useful not only during arc start and stop, but also in welding processes where the arc brightness changes quickly (e.g., at frequencies of 20 Hz to 250 Hz), such as pulse welding and short circuiting welding. FIG.20is a state diagram illustrating example operation of welding headwear in accordance with aspects of this disclosure. The headwear20may power up in state2002in which circuitry such as the circuitry residing on PCB430is in a power save state. While in state2002, the camera(s)414may not be capturing photos, the communication interface406may not be transmitting or receiving any data, etc. While the headwear20is in the power save mode, a photodiode of sensor(s)422may monitor light incident on it to detect the presence of a welding arc. Upon sensing the presence of a welding arc, the photodiode generates an interrupt which triggers a transition from state2002to state2004. In state2004, circuitry which was in a power save mode in state2002is awakened. For example, the camera(s)414may start capturing video, the GPU418may process the video, and the communication interface406may start streaming the video wirelessly with P2P networking such that the video may be displayed in a web browser of a device nearby. When the photodiode detects that the arc is extinguished, it may trigger the circuitry to transition back to the power save mode (including performing some "housekeeping" such as storing state information to memory), and the system then returns to state2002. FIGS.21A and21Billustrate an example of capturing an image of a weld environment2100during a short circuit condition of a welding operation. The example ofFIGS.21A and21Bmay be performed by the example headwear20ofFIGS.3A-4Cto capture images of the welding environment during a welding operation. For example, the headwear20may use low-cost optical sensors or cameras (e.g., the camera(s)414) that do not rely on techniques such as HDR, WDR, MDR, ToF sensing, or any other techniques used to capture images while an electrical arc is present. Referring toFIG.21A, the example weld environment2100includes a workpiece2102which is being welded during an ongoing welding operation. In the welding operation, a torch2104is feeding an electrode wire2106to the welding operation, and an electrical arc2108that has a high brightness is present between the electrode wire2106and the workpiece2102. Due to the high brightness of the arc2108, images captured of the weld environment2100during the presence of the arc2108may not show the weld puddle or other features of interest in the weld environment2100without the use of HDR and/or other techniques as described above. Referring toFIG.21B, the welding operation ofFIG.21Ahas experienced a short circuit condition in which the electrode wire2106makes direct contact with the weld puddle2112and/or the workpiece2102. As a result, the electrical arc2108is extinguished (e.g., temporarily) and current flows directly from the electrode wire2106to the weld puddle2112. Because the electrical arc2108is not present, the weld environment2100has a lower brightness difference between different elements in the weld environment2100, and images of the weld environment2100can be captured using lower-dynamic-range image techniques. The example processor410of the headwear20identifies the time periods in which short circuit conditions are present in the welding operation and, during the identified time periods, captures images via the camera(s)414.
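As a non-limiting sketch, identifying the capturable (arc-extinguished) windows from a stream of brightness samples might look like the following; the threshold and the normalized brightness scale are assumptions:

```python
ARC_BRIGHTNESS_THRESHOLD = 0.2   # normalized photodiode reading, assumed 0..1

def capture_windows(brightness_samples, sample_period_s):
    """Return (start_time, end_time) spans where the scene is dark enough
    to image with a low-dynamic-range sensor."""
    windows, start = [], None
    for i, b in enumerate(brightness_samples):
        t = i * sample_period_s
        if b < ARC_BRIGHTNESS_THRESHOLD and start is None:
            start = t                       # arc just extinguished
        elif b >= ARC_BRIGHTNESS_THRESHOLD and start is not None:
            windows.append((start, t))      # arc re-established
            start = None
    if start is not None:
        windows.append((start, len(brightness_samples) * sample_period_s))
    return windows
```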
The images may be displayed to the wearer of the headwear20(e.g., on the display304), overlaid with one or more simulated objects, stored, and/or transmitted for remote viewing, as described herein. In some examples, the processor410identifies the time periods during which the short circuit conditions are present by receiving one or more signals from the sensor(s)422and/or the camera(s)414(e.g., brightness values) and/or by receiving data from the equipment12via the communications interface406. Example data that may be received from the equipment12includes measured voltage and/or current values output by the power supply and/or the wire feeder, and/or an operating mode of the power supply indicating that the power supply is operating based on a short circuit condition. For example, if the processor410receives welding variable values from the equipment12that indicate a short circuit condition (e.g., a drop in voltage and/or an increase in current to threshold levels), the processor410causes the camera(s)414to capture one or more image(s). In some examples, the processor410receives an identification that a controlled short circuit process is being used. Based on the controlled short circuit process and/or weld variable data provided by the equipment, the example processor410predicts times at which the short circuit is present and captures images at the predicted time(s). In some examples, the processor410transmits a signal to command the equipment12to cause a short circuit condition at a particular time. FIG.22is a flowchart representative of example machine readable instructions2200which may be executed by the processor410ofFIGS.3A-4Cto capture an image of a weld environment (e.g., the weld environment2100ofFIGS.21A-21B) during a short circuit condition of a welding operation. At block2202, the example processor410ofFIGS.3A-4Cdetermines whether a welding operation is occurring. If a welding operation is not occurring (block2202), control returns to block2202to await a welding operation. If a welding operation is occurring (block2202), at block2204the processor410receives one or more sensor value(s) from the sensor(s)422and/or the camera(s)414. For example, the sensor value(s) may include a brightness (e.g., luminance) value of an environment around the headwear20. At block2206, the processor410determines whether the sensor value(s) indicate a low light intensity condition. A low light intensity condition may occur during a short circuit (e.g., the arc is extinguished) and/or during a low-current condition. In some examples, a low light intensity condition may be determined in a manner similar or identical to the manner in which an automatically-dimming welding visor determines a condition to reduce a dimming effect. For example, if the brightness values are greater than a threshold brightness value indicating an arc is present, the processor410may determine that a low light intensity condition does not exist. If the sensor value(s) do not indicate a low light intensity condition (block2206), at block2208the processor410determines whether a low light intensity condition indication has been received from a welding device. For example, the processor410may determine whether a voltage variable, a current variable, and/or any other signal or data has been received (e.g., from the equipment12and/or via the communications interface406) that indicates that a low light intensity condition is present.
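A minimal sketch of inferring short circuit conditions from equipment-reported weld variables; the voltage and current thresholds are illustrative assumptions:

```python
def short_circuit_spans(samples, v_thresh=10.0, i_thresh=250.0):
    """samples: iterable of (t_seconds, volts, amps) from the equipment.
    A voltage collapse with a simultaneous current rise marks the arc as
    extinguished (shorted), i.e., a low light intensity capture window."""
    return [t for (t, volts, amps) in samples
            if volts < v_thresh and amps > i_thresh]
```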
Example indications of a low light intensity condition include a voltage change measurement (e.g., a threshold voltage drop in a time period) and/or an arc voltage measurement that is less than a threshold (e.g., less than 14V). In some examples in which a controlled short circuit weld process is being used, the processor410may receive information identifying the frequency of the short circuit process and/or a waveform of the controlled short circuit process. The processor410may use the frequency and/or waveform information to predict the times at which the short circuit conditions occur during the welding operation. If a short circuit condition indication has not been received (block2208), at block2210, the processor410determines whether a low light intensity request is to be transmitted (e.g., to cause a short circuit in the welding operation). For example, the processor410may request a low light intensity condition to provide an opportunity to capture image(s) if no images have been captured for at least a threshold time period. In some other examples, a low light intensity condition may be requested in response to another condition, such as detecting an anomaly in the welding operation and/or another condition. The example low light intensity request may be formatted to cause a power supply and/or a wire feeder to cause a low light intensity condition by, for example, temporarily increasing a wire feed speed and/or temporarily reducing a weld voltage or weld current (e.g., less than 50 amperes) to reduce the light intensity from the arc. The example weld equipment12may respond to the low light intensity request by briefly reducing current in the weld cable by, for example, sinking current output by an inverter to divert the current from the weld cable. The current diversion causes a rapid inverse spike in the weld cable current, which reduces the intensity of the arc light and enables capture of one or more images by the camera(s)414. If there are no data or communications that indicate that a low light intensity condition exists (blocks2206-2210), control returns to block2202. If the sensor value(s) indicate a low light intensity condition (block2206), if a low light intensity condition indication has been received (block2208), and/or if a low light intensity request is transmitted (block2210), at block2212the processor410synchronizes the camera(s)414with a low light intensity condition based on the sensor value(s), the received low light intensity condition indication, and/or the low light intensity request. For example, the processor410may determine, based on sensor values, that a low light intensity condition has already begun and/or currently exists (e.g., there is a short circuit occurring based on a brightness sensor value, and the image(s) should be captured immediately). Additionally or alternatively, the processor410may predict a present and/or future low light intensity condition based on received low light intensity indications and/or low light intensity requests. For example, the processor410may use the frequency and/or waveform information to predict the times at which the low light intensity conditions occur during the welding operation. At block2214, the processor410controls the camera(s)414to capture one or more image(s) during the time period of the short circuit condition. In some examples, the processor410controls an illumination device, such as a light emitting diode (LED) or other light source, to illuminate the area for which images are being captured.
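A non-limiting sketch of scheduling captures (and an optional LED strobe) from the frequency of a controlled short circuit process, assuming shorts recur once per period from one observed short; all constants are illustrative:

```python
def capture_schedule(last_short_t, frequency_hz, n_frames=5, led_lead_s=0.0005):
    """Predict the next n_frames capture instants and bracket each with an
    illumination-on/off event for an assumed LED strobe."""
    period = 1.0 / frequency_hz
    events = []
    for k in range(1, n_frames + 1):
        t_short = last_short_t + k * period
        events.append({"led_on": t_short - led_lead_s,   # strobe just before
                       "capture": t_short,               # arc predicted out
                       "led_off": t_short + led_lead_s})
    return events
```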
When using the illumination source, in some examples the processor410turns off the illumination source when not capturing images to conserve energy. At block2216, the processor410verifies the captured image(s) to determine that the images have suitable brightness and/or contrast characteristics for viewing and/or analysis. At block2218, the processor410determines whether usable image(s) have been captured. If no usable images have been captured (e.g., due to interference or an incorrectly calculated time period) (block2218), control returns to block2202. If usable images have been captured (block2218), at block2220the processor410processes the image(s) to determine characteristics of the welding operation (e.g., as described herein). At block2222, the processor410overlays the image(s) (e.g., using simulated objects), displays the image(s) (e.g., on the display304, with or without the simulated objects), stores the image(s), and/or transmits the image(s), as described herein. Control returns to block2202to continue capturing images during low light intensity conditions while the welding operation is ongoing. The present methods and systems may be realized in hardware, software, and/or a combination of hardware and software. The present methods and/or systems may be realized in a centralized fashion in at least one computing system, or in a distributed fashion where different elements are spread across several interconnected computing systems. Any kind of computing system or other apparatus adapted for carrying out the methods described herein is suited. A typical combination of hardware and software may include a general-purpose computing system with a program or other code that, when being loaded and executed, controls the computing system such that it carries out the methods described herein. Another typical implementation may comprise one or more application specific integrated circuits or chips. Some implementations may comprise a non-transitory machine-readable (e.g., computer readable) medium (e.g., FLASH memory, optical disk, magnetic storage disk, or the like) having stored thereon one or more lines of code executable by a machine, thereby causing the machine to perform processes as described herein. As used herein, the term "non-transitory machine-readable medium" is defined to include all types of machine readable storage media and to exclude propagating signals. As utilized herein, the terms "circuits" and "circuitry" refer to physical electronic components (i.e. hardware) and any software and/or firmware ("code") which may configure the hardware, be executed by the hardware, and/or otherwise be associated with the hardware.
As used herein, for example, a particular processor and memory may comprise a first "circuit" when executing a first one or more lines of code and may comprise a second "circuit" when executing a second one or more lines of code. As utilized herein, "and/or" means any one or more of the items in the list joined by "and/or". As an example, "x and/or y" means any element of the three-element set {(x), (y), (x, y)}. In other words, "x and/or y" means "one or both of x and y". As another example, "x, y, and/or z" means any element of the seven-element set {(x), (y), (z), (x, y), (x, z), (y, z), (x, y, z)}. In other words, "x, y and/or z" means "one or more of x, y and z". As utilized herein, the term "exemplary" means serving as a non-limiting example, instance, or illustration. As utilized herein, the terms "e.g.," and "for example" set off lists of one or more non-limiting examples, instances, or illustrations. As utilized herein, circuitry is "operable" to perform a function whenever the circuitry comprises the necessary hardware and code (if any is necessary) to perform the function, regardless of whether performance of the function is disabled or not enabled (e.g., by a user-configurable setting, factory trim, etc.). While the present method and/or system has been described with reference to certain implementations, it will be understood by those skilled in the art that various changes may be made and equivalents may be substituted without departing from the scope of the present method and/or system. In addition, many modifications may be made to adapt a particular situation or material to the teachings of the present disclosure without departing from its scope. For example, blocks and/or components of disclosed examples may be combined, divided, re-arranged, and/or otherwise modified. Therefore, the present method and/or system are not limited to the particular implementations disclosed. Instead, the present method and/or system will include all implementations falling within the scope of the appended claims, both literally and under the doctrine of equivalents.
11862036

The drawings referred to in this description are not to be understood as being drawn to scale except if specifically noted, and such drawings are only exemplary in nature.

DETAILED DESCRIPTION

In the following description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the present disclosure. It will be apparent, however, to one skilled in the art that the present disclosure can be practiced without these specific details. Reference in this specification to "one embodiment" or "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the present disclosure. The appearances of the phrase "in an embodiment" in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. Moreover, various features are described which may be exhibited by some embodiments and not by others. Similarly, various requirements are described which may be requirements for some embodiments but not for other embodiments. Moreover, although the following description contains many specifics for the purposes of illustration, anyone skilled in the art will appreciate that many variations and/or alterations to said details are within the scope of the present disclosure. Similarly, although many of the features of the present disclosure are described in terms of each other, or in conjunction with each other, one skilled in the art will appreciate that many of these features can be provided independently of other features. Accordingly, this description of the present disclosure is set forth without any loss of generality to, and without imposing limitations upon, the present disclosure.

Term Descriptions

A play as described herein includes an electronic play involving a plurality of players. The play is generated or drawn using an electronic forum. The electronic forum can be an application or a plugin or a website. In one embodiment, the electronic forum may correspond to an online whiteboard or a clipboard based application. The electronic forum provides a plurality of buttons or drawing markers to a user for defining moves of the plurality of players. The play may relate to any game, for example, football, cricket, baseball, etc. The play can be shared with one or more users. The movements or details of the play include positioning of the players on a physical field, such as a football field, details regarding whether a player is in a forward position or is a goalkeeper, details regarding how the players will move to execute a strategy or plan, etc. It is to be appreciated that the electronic forum supports whatever level of detail the user wants to generate and share with other users. A playbook as described herein includes one or more plays having at least one common characteristic, for example, the same game, the same team, the same arrangement of players, a common player, a common country, or any other characteristic shared by the plays. The playbook can be generated based on inputs received from the user. A team as described herein includes one or more users that are provided access to a particular play or playbook. The team is created by an administrator user or any other user with appropriate rights. The administrator user then assigns different rights to different users.
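As a non-limiting sketch, the entities described above might be represented as follows; the field names are illustrative assumptions rather than a required schema:

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class Play:
    name: str
    sport: str
    elements: List[dict] = field(default_factory=list)   # markers, lines, text

@dataclass
class Playbook:
    name: str                       # plays share at least one characteristic
    plays: List[Play] = field(default_factory=list)

@dataclass
class Team:
    name: str
    rights: Dict[str, str] = field(default_factory=dict) # user id -> assigned right
```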
Based on the assigned rights, access to different portions of the play or playbook is controlled. The assigned rights also indicate whether the user can further perform the acts of an administrator user or exercise other rights. A war room is described as an area or page of the electronic forum that provides access to member(s) of the team. The war room can hold different plays within the playbook feature. The war room also includes notes and other material generated by the team. The war room also has a white board and other functionalities to enable communication and collaboration among team members. FIG.1illustrates an example environment100, where various embodiments of the present disclosure may be practiced. An example representation of the environment100is shown depicting a network110that connects entities such as a plurality of users (e.g., users102A and102B) and a server112. The network110may be a centralized network or may comprise a plurality of sub-networks that may offer a direct communication between the entities or may offer indirect communication between the entities. Examples of the network110include wireless networks, wired networks, and combinations thereof. Some non-exhaustive examples of wireless networks may include wireless local area networks (WLANs), Bluetooth or Zigbee networks, cellular networks and the like. Some non-exhaustive examples of wired networks may include Local Area Networks (LANs), Ethernet, Fiber Optic networks and the like. An example of a combination of wired networks and wireless networks may include the Internet. The plurality of users may have one or more electronic devices to communicate with other entities of the environment100via the network110. For example, the user102A is connected to the network110via a device104, and the user102B is connected to the network110via a device108. The devices may be connected over various types of the network110, for example a home network, a LAN, a wireless network, a Bluetooth based network, or such other types of network. It is understood that devices104and108may not be limited to a desktop computer and a mobile phone, respectively, as shown in the environment100and that the users may connect to the network using various devices. Examples of the devices include, but are not limited to, laptops, smartphones, tablets, smart watches, smart televisions, smart devices in homes and/or vehicles, and other such systems having the capability to access an electronic forum. In some example embodiments, the server112may include one or more processing elements (e.g., computing systems, databases, etc.) to process the information received from the users' devices and to facilitate an electronic forum. In some example embodiments, the server112maintains an infrastructure for hosting applications, such as an electronic forum or an electronic forum application, an instance of which may be installed on the devices of the users. The server112is exemplarily depicted to include electronic forum applications106A and106B. It is to be appreciated that the terms "electronic forum" and "electronic forum application" are used interchangeably throughout this description. However, both refer to an electronic portal that enables users to create and manage plays. The users using the electronic forum can interact with the server112, for example, they can provide data and receive data from the server112. It is understood that the functionalities of the server112can be embodied in the form of cloud services and/or subscription services.
In one embodiment, the server112and one or more devices of the user execute the method as described in the present disclosure. In another embodiment, the server112receives one or more inputs from the one or more devices of the user and performs the entire method as described in the present disclosure. In yet another embodiment, the one or more devices of the user perform the entire method as described herein. The device104is depicted to include the electronic forum application106A. The electronic forum application106A enables the user102A to provide data and receive data from the server112. Similarly, the device108is depicted to include the electronic forum application106B. In one embodiment, the electronic forum application106A may differ from the electronic forum application106B, as different versions of the applications may be provisioned based on different devices and the device form-factors. In another embodiment, the electronic forum application106A may be similar to the electronic forum application106B, and these applications may correspond to merely different instances of the same application running on different devices. In many example scenarios, an electronic forum is required by coaches or team members to create, manage and share plays with other coaches or team members or any other person with whom such sharing or collaboration is desired. In such scenarios, one or more software applications are offered for download on respective user devices so as to enable the users (coaches) to perform various activities related to creation, sharing and management of plays. Examples of such activities include, but are not limited to, creation of plays, sharing of plays, managing plays, creating playbooks, sharing playbooks, managing playbooks, creating war rooms, creating teams, managing war rooms, managing teams, editing teams, editing war rooms, editing plays, editing playbooks, and any other activity that relates to plays or workflows desired by coaches or users. The devices104and108are depicted to include such electronic forum applications in the form of applications106A and106B, respectively. The applications can be installed on the user devices or can be accessed in the form of electronic portals or websites using browser applications or browsers. In some embodiments, the application includes a social media website that allows coaches to collaborate and exchange ideas. The application allows the coaches to draw sports plays on a virtual clipboard offered via the application. The virtual clipboard is also a type of application where users can post formations, configurations, annotations and plays. Other users will be able to view the plays, open the plays, make their own updates and repost them as part of a reply or their own post. The plays allow the users to communicate complex ideas back and forth pictographically. It is to be appreciated that the terms "users" and "coaches" are used interchangeably to indicate a user of the application. The application can be offered in various modes, such as a free mode or a premium mode. For example, in the free mode the application allows the users to share and discuss the plays in a forum, find news, videos, blogs and sports events, as well as connect with other users to discuss sports topics. As part of a premium service, users are able to save plays to their personal account and share them privately with other users, i.e. their team members.
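As a non-limiting sketch, the premium save-and-share behavior might be modeled server-side as follows; the store structures and function names are illustrative assumptions:

```python
PLAY_STORE = {}     # user_id -> list of (play_id, play) saved privately
SHARES = {}         # play_id -> set of user_ids granted access

def save_play(user_id, play_id, play):
    PLAY_STORE.setdefault(user_id, []).append((play_id, play))

def share_play(play_id, team_member_ids):
    # Private sharing: only the listed team members gain access.
    SHARES.setdefault(play_id, set()).update(team_member_ids)

def can_view(user_id, play_id, owner_id):
    return user_id == owner_id or user_id in SHARES.get(play_id, set())
```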
The users are able to organize their plays into playbooks, and each team has a war room where the team members can discuss and make shared plays. In addition, the users can use the virtual clipboard, make and manage connections, use the forum, find the latest news, watch interesting videos, check interesting blogs, find upcoming events, and manage their account. The application is now explained in detail using the following figures. FIG.2illustrates a navigation bar200of an electronic forum, in accordance with an example embodiment of the invention. The users are able to navigate through various actions or workflows offered by the electronic forum using the navigation bar200. The navigation bar200includes one or more tabs such as "Connections202", "Forums204", "News206", "Videos208", "Blogs210", and "Events212". The navigation bar200also includes a search box214. FIG.3illustrates a virtual clipboard300of an electronic forum, in accordance with an example embodiment of the invention. Through the virtual clipboard300, users are able to draw plays and tactics that they would like to discuss with other users or to save. The virtual clipboard300enables the coaches to draw plays as they would on the sidelines of most sporting events. The virtual clipboard300includes one or more drawing markers302or buttons304as shown. The virtual clipboard300allows users to use familiar symbolism, i.e., offensive players are typically depicted with the letter "o" and defensive players with the letter "x". An arrow306and a crossbar308are placed at the end of a line to show how players finish their movements. The buttons304provide various formations or drawing options to the users. The virtual clipboard300also includes different colored markers, custom text for description and annotation, as well as the preset formations and background based on the sport involved. The preset formations allow users to quickly apply standard alignments by position. This allows users to easily place players into standard starting points based upon position. This is especially convenient for coaches as they do not have to start a new play by drawing standard player formations. FIG.4illustrates a connection page400of an electronic forum, in accordance with an example embodiment of the invention. The connection page400includes a connections section that allows users to connect with fellow users. The connections section acts as a social wall where users can see what other users are discussing. It lists various connections of the users and provides an option402to the user to sort connections using various options. The connection page400provides options to the users to add others as friends by clicking the "plus" icon404, in which case a request is sent to the second user, who can then confirm whether he/she wants to accept the friend request. To search for a specific person, the user can use the search box502as shown inFIG.5. FIG.5illustrates a connection search page500of an electronic forum, in accordance with an example embodiment of the invention. The connection search page500includes one or more search boxes504that give the user multiple options to find a person they're looking for through name, title, level, region, state, offense, defense or combinations of all these items. FIG.6illustrates a sidebar600of an electronic forum, in accordance with an example embodiment of the invention. On the sidebar600, the users are able to see who else is online.
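Referring back to the virtual clipboard300ofFIG.3, a non-limiting sketch of its drawing vocabulary ("o"/"x" markers, movement lines ending in an arrow or crossbar, and preset formations) follows; the coordinate scheme and the sample formation are illustrative assumptions:

```python
OFFENSE, DEFENSE = "o", "x"                     # conventional player markers
ARROW_END, CROSSBAR_END = "arrow", "crossbar"   # how a movement line finishes

def movement(start_xy, end_xy, ending=ARROW_END, color="black"):
    return {"from": start_xy, "to": end_xy, "end": ending, "color": color}

def preset_formation(name):
    # Drops players onto standard starting points by position; the fragment
    # below is a hypothetical example, not a real alignment chart.
    formations = {
        "sample_line": [(OFFENSE, (50, 70)), (OFFENSE, (50, 78)),
                        (OFFENSE, (50, 86))],
    }
    return formations.get(name, [])
```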
FIG.7illustrates a main page700of an electronic forum, in accordance with an example embodiment of the invention. The main page700allows users to discuss tactics and opinions. The main page700is depicted to display example headings in the form of forums, topics, posts and freshness. There are three main forum subjects702: Offense, Defense and Strength & Conditioning. Each of these subjects is composed of sub-subjects704. The users can create topics and make posts on them, as well as reply to others' posts. The freshness column helps users know if the topics have been discussed recently or not. After entering a specific subject, users will be able to choose a sub-subject. FIG.8illustrates an example page800showing forum sub-subjects of an electronic forum, in accordance with an example embodiment of the invention. The forum subject is exemplarily depicted to be "offense". The users are able to check how many topics each sub-subject (such as, for example, "general offense", "passing game", "pass protection" and "running game") has, how many replies, and how long ago discussions in these sub-subjects were last active. After the user enters this sub-subject, the list of topics is displayed to the user. FIG.9illustrates an example page900showing topics of an electronic forum, in accordance with an example embodiment of the invention. The topics are exemplarily depicted to correspond to a sub-subject "general offense". The user may view individual posts and replies by clicking on the respective topics. The user may also click on the subscribe button in order to receive notifications whenever someone contributes to the discussion. FIG.10illustrates example posts1000related to one or more topics of an electronic forum, in accordance with an example embodiment of the invention. The users can subscribe to the topic, bookmark it, reply to it or like it. The users can also share it on some of the main social networks using social sharing options, such as the social sharing options1100ofFIG.11. Additionally, users can use tags on the electronic forum to facilitate searching for a specific topic or post at a later point in time. The users can also include in their posts and replies files that might help them explain their point, or even a clipboard play they have drawn with the virtual clipboard, such as the virtual clipboard300explained with reference toFIG.3. FIG.12illustrates a news main page1200of an electronic forum, in accordance with an example embodiment of the invention. The news main page1200includes a news section where users can scroll through the latest sports news. The users can share this news on the main social networks, and if they find one that interests them, they can open it to read it. The source of the news is also provided. FIG.13illustrates an exemplary news page1300of an electronic forum, in accordance with an example embodiment of the invention. FIG.14illustrates a videos main page1400of an electronic forum, in accordance with an example embodiment of the invention. More specifically, the electronic forum includes a 'videos' section, where users are able to find a large set of videos1402related to sports. The users can scroll through this set of videos and choose which ones they would like to watch. The videos are divided into three sections: popular videos, recent videos and featured videos, in order to facilitate users in finding desired videos. The users can initiate playback of the videos by clicking on the respective video icons.
FIG.15illustrates a blogs main page1500of an electronic forum, in accordance with an example embodiment of the invention. The blogs section allows users to scroll through relevant sports blogs. By opening one of these blogs, the users are able to see the latest posts that were made there, for example as shown in a blog page1600of the electronic forum inFIG.16. FIG.17illustrates an events main page1700of an electronic forum, in accordance with an example embodiment of the invention. The events main page1700allows users to find more information about the upcoming events. They can also choose the dates within which to search for events. When opening one of these events, users are able to see all the relevant information, add the event to Google Calendar, or export it to iCal. They are also able to share the events on the main social networks. FIG.18illustrates an exemplary event page1800of an electronic forum, in accordance with an example embodiment of the invention. On a given event, users are able to move on to the next or previous event by clicking arrows1902that allow them to navigate, as shown in the event section1900ofFIG.19. FIG.20illustrates an accounts main page2000of an electronic forum, in accordance with an example embodiment of the invention. The users also have access to a personal space2002as shown on the accounts main page2000where they can manage their accounts. They are able to manage their posts, messages, friends, notifications, profile information and settings. When the users open this section, they are able to see their feed, where the latest contributions they or their friends have made are visible. They are able to view conversations, mark them as favorites, check comments, make new comments, or delete comments they have made, as shown on the activity page2100of the electronic forum ofFIG.21. FIG.22illustrates a navigation menu2200of the accounts page of an electronic forum, in accordance with an example embodiment of the invention. The navigation menu2200shows tabs for various sections such as posts, messages, friends, notifications, profile and settings. FIG.23illustrates a posts section page2300of an electronic forum, in accordance with an example embodiment of the invention. In the posts section page2300, users are able to find posts they have participated in. The posts are divided into: topics started, replies created, bookmarks they have made, and subscriptions they have made. FIG.24illustrates a messages section2400of an electronic forum, in accordance with an example embodiment of the invention. The messages section2400allows users to check messages they have received, and send new messages to other users. They can also mark certain messages (for example, by using a star) to make them easier to find. FIG.25illustrates a friends section2500of an electronic forum, in accordance with an example embodiment of the invention. The friends section2500is divided into two portions: 1) checking the users' friends and 2) checking the requests other users have sent them. In the latter case, users are able to accept the requests other users have sent them. FIG.26illustrates a notifications section2600of an electronic forum, in accordance with an example embodiment of the invention. Using the notifications section2600, the users are able to see their read and unread notifications. FIG.27illustrates a profile section2700of an electronic forum, in accordance with an example embodiment of the invention.
The profile section 2700 allows the user to view or edit their profile, including their profile picture.

FIG. 28 illustrates a settings section 2800 of an electronic forum, in accordance with an example embodiment of the invention. Using the settings section 2800, the users are able to change their personal settings, including general settings, email related settings and profile visibility settings. They can also delete their account.

FIG. 29 illustrates a team creation page 2900 of an electronic forum, in accordance with an example embodiment of the invention. In some embodiments, the team creation page 2900 is part of a premium service. As part of the premium service, the users are able to save plays to their personal account and share them privately with other users, i.e., their team members. The users are able to organize their plays into playbooks, and each team has a war room where the team members can discuss and make shared plays. Further, each team has an administrator (also referred to hereinafter as admin), i.e., the user who created the team. This user is able to manage everything about their team, including who is a part of it. Besides the admin, the team has two types of users, i.e., members and leaders. Leaders are able to manage the war room and the playbooks besides the basic functionalities, while members perform the more basic tasks. The admin is also able to make other members administrators. In some embodiments, in order for the war room to be active, a leader or the admin might have to be present; otherwise, a message may be displayed to the member informing them that the room is closed.

In an embodiment, a link is added to the main navigation page to direct the users to the team creation page 2900. If the user clicks on the link without being a premium customer, an informational screen may appear to explain the advantages of subscribing to the service. The information regarding the premium service is then presented to the user on team creation page 3000 in FIG. 30, promoting the service to this user. A free month of subscription could be offered, in order to have the users try out this service. An explainer video 3002 can be inserted, as shown in the team creation page 3000, to quickly inform the user of the main functionalities the premium service offers. When the user chooses to subscribe, the user is redirected to a team creation page 3100 (shown in FIG. 31), where they are able to create their team. The basic information for creating a team, such as its name, its logo and its description, or other fields that might be essential, is requested from the user. All of this information can later be edited by the admin in the team settings. After this, the user may choose a package that the user would like to buy, as shown in team creation page 3200 of FIG. 32. The team creation page 3200 shows three packages 3202 (for example, a basic package, a blue chip package and an MVP package) to choose from, offering different functionalities to the users. Another team creation page, such as the team creation page 3300 of FIG. 33, may then be displayed for the user to provide their credit card information for making payments. The user is informed that they are not charged right away, but only after the trial month, in case they do not unsubscribe. It is understood that the user has to read the terms and conditions in order to proceed. Thereafter, the user is redirected to the "My Teams" page, i.e., a team selection page 3400, with the role of admin, as exemplarily shown in FIG. 34.
When the admin enters the team selection page 3400, the admin is able to perform a set of functionalities, namely:
Add and remove users
Change users' roles
Create, edit and save plays
Create and manage playbooks
Manage the war room
Manage the team's forum
When entering the team selection page 3400, the user has to choose a team. If the user has a team, this operation could be hidden for UX purposes and the "Create new team" button would be placed in the settings, so that the user could still create a new team. A team can be created for each section of the team. For example, the admin could decide to have a team for the offense, one for the defense and a general one. If the user chooses to create a new team, the user is redirected to a team information input page, such as a team information input page 3500 shown in FIG. 35, that asks them to insert their team information, followed by the page to choose a package, and the payment page, as mentioned earlier.

FIG. 36 illustrates a war room page 3600 of an electronic forum, in accordance with an example embodiment of the invention. The war room 3600 is the place for the team to collaborate and discuss tactics and plays. There is a common board 3602 where the users can draw their plays together. The war room has a group chat box 3604, as well as private chat boxes 3606. It also allows the users to talk through voice, as exemplarily depicted using a microphone icon 3608. It is understood that various voice-based technologies, such as Voice over IP (VoIP), may be utilized for facilitating such voice-based interaction. Furthermore, in some embodiments, video-based interactions may also be facilitated to enable users to see each other during an ongoing interaction on the electronic forum. The admin is able to control the war room 3600, as are the leaders that the admin has chosen.

It is important that the clipboard has a "Save" button, so that the current play can be saved before a new one is added to the board. A draft of the play may be saved when a new one is added to the board; otherwise the new play may replace the old one, and it may never be recovered if the admin or leaders have forgotten to save it, which might become frustrating to the users. These drafts could be saved in a playbook called "My team's drafts", and the admin and leaders could then choose to either save these plays or delete them. In one embodiment, to avoid saving a lot of unnecessary drafts, only the latest 20 plays may be saved, and the older ones may be deleted.

The war room page 3600 shows two chat rooms: the group chat and the private chat. In the group chat, every member can write and every member can read what is written. In the private chat, two members discuss only with each other. To make space for the clipboard, the chat windows can be minimized, as shown in the war room page 3600, and can expand when the user clicks on one of them, as shown in a war room page 3700 of FIG. 37. To facilitate adding a new play to the war room, one of the team's playbooks could be chosen under the clipboard. Choosing a playbook will present all of the plays in that playbook, which can be dragged to the clipboard to replace the current one. As mentioned before, it is important that the play that will be replaced is saved as a draft, to prevent frustration in case the user makes a mistake. Another layout option would be to present the place to choose the plays first, followed by the clipboard and then the chat.
This way, the chat is given less importance, and the layout is more adaptable for mobile. Such a layout is shown in a war room page 3800 of FIG. 38.

FIG. 39 illustrates a plays main page 3900 of an electronic forum, in accordance with an example embodiment of the invention. A plays tab 3902 allows the user to see, edit and create their team's plays. The first screen shows the user thumbnails 3904 of all the plays they have, a button to add a new one, and a button to upload a previously saved play. Each play has a select box, and when at least one of those is selected, a delete button 3906 and a "choose a playbook" dropdown 3908 appear. When the user clicks on a play, it opens that play and allows the user to edit it, as shown in a play's creation or editing page 4000 of FIG. 40. The editing page may have a save button and a field to insert the name. Accordingly, the editing page 4000 shows a field to insert this play directly into a certain playbook, and also a "delete" button (smaller, on top), in case the user no longer wants that play. A confirmation message may be provided before deleting the play, to make sure that the user really wants to delete the play.

FIG. 41 illustrates a playbooks main page 4100 of an electronic forum, in accordance with an example embodiment of the invention. A playbook tab 4102 allows the user to manage their team's playbooks. A playbook is a set of plays, organized together due to a characteristic they have in common. The playbooks can serve to organize the team into groups. Another aspect where these playbooks will be helpful is in gathering all the plays for a certain future game. The initial screen allows the user to see their playbooks and add a new one. As on the plays page, if the user selects one or more playbooks, a delete button appears so they can choose to delete them, as shown in another playbooks main page 4200 of FIG. 42. A confirmation message may be provided before deleting the playbook, to make sure that the user really wants to delete the playbook. The last playbook may be the "drafts" playbook that would include the drafts that have been saved from the war room. When clicking on "Add a new playbook" or on one of the playbooks, the user will be redirected to a new screen, i.e., a playbook's addition page 4300 of FIG. 43, that allows the user to create/edit the playbook by adding/changing its name and adding/removing plays from it. When clicking on "Add a play", a pop-up/modal will appear so that the user can choose a play to add. The user can click on more than one play to add to the playbook and then choose to add them.

FIG. 44 illustrates a team member management page 4400 of an electronic forum, in accordance with an example embodiment of the invention. A team members tab 4402 will allow the admin to control who is on the team and what role they have. The team member management page 4400 makes it easy for a user to add new users by searching by their name, email or username. At some point in the page, there is an explanation of the existing user roles and the functionalities each of them allows. When a user is added, they are sent a message to confirm whether they want to be on this team. In this case, the icon of a message appears next to the member. If the admin clicks on it, it will resend the message (in some embodiments, a confirmation may be sought from the admin as to whether he/she truly wishes to resend the message before doing so). If the user has already accepted, a check icon would be visible next to that member.
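A minimal sketch of the invitation workflow just described, in Python with hypothetical names (the description does not fix an implementation): a pending member is shown with a message icon and may have the invitation resent, while an accepted member is shown with a check icon.

from enum import Enum

class InviteStatus(Enum):
    PENDING = "message_icon"    # confirmation message sent, not yet accepted
    ACCEPTED = "check_icon"     # the member confirmed joining the team

class TeamMember:
    def __init__(self, username: str):
        self.username = username
        self.status = InviteStatus.PENDING

    def resend_invitation(self, admin_confirmed: bool) -> bool:
        # The admin may be asked to confirm before the message is resent.
        return self.status is InviteStatus.PENDING and admin_confirmed

    def accept(self) -> None:
        self.status = InviteStatus.ACCEPTED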
When the admin searches through the search box, the list of users that correspond to that name/nickname/email will appear. By clicking on them, the user will be added to the team, as shown in a team member management page 4500 of FIG. 45.

FIG. 46 illustrates a team settings page 4600 of an electronic forum, in accordance with an example embodiment of the invention. A team settings tab 4602 will allow the admin to manage certain aspects of their team, namely:
Team's name
Team's description
Team's logo
Existing forums
When users are added to a team, they are given a role. They can either be leaders or members. According to this role, they will be allowed to perform different tasks. The screens for each kind of user are designed according to the functionalities they are allowed to use. For example, in one embodiment, the tabs for these roles will only include the following sections: war room, forum, plays and playbook. In another embodiment, the war room will be the same for the leaders as it is designed for the admins. In yet another embodiment, the war room will be presented differently for the members, as they will not be able to add plays to the clipboard, nor silence users. Further, the users may not be able to enter the war room if there is not an admin or a leader online. In that case, a window may be presented informing the user that he/she cannot enter the war room at that time.

FIG. 47 illustrates a system 4700 for implementing the electronic forum and performing the methods described herein, in accordance with an example embodiment of the invention. The system 4700 includes at least one processor such as a processor 4702 and at least one memory such as a memory 4704. The system 4700 also includes an I/O module 4706 and a communication interface 4708. The system 4700 can be embodied in the server 112, or it may be deployed in any user device, such as the user device 104 or the user device 108 explained with reference to FIG. 1. Although the system 4700 is depicted to include only one processor 4702, the system 4700 may include more than one processor therein. In an embodiment, the memory 4704 is capable of storing platform instructions 4705, where the platform instructions 4705 are machine executable instructions associated with generating and managing plays in an electronic forum. Further, the processor 4702 is capable of executing the stored platform instructions 4705. In an embodiment, the processor 4702 may be embodied as a multi-core processor, a single core processor, or a combination of one or more multi-core processors and one or more single core processors. For example, the processor 4702 may be embodied as one or more of various processing devices, such as a coprocessor, a microprocessor, a controller, a digital signal processor (DSP), processing circuitry with or without an accompanying DSP, or various other processing devices including integrated circuits such as, for example, an application specific integrated circuit (ASIC), a field programmable gate array (FPGA), a microcontroller unit (MCU), a hardware accelerator, a special-purpose computer chip, or the like. In an embodiment, the processor 4702 may be configured to execute hard-coded functionality. In an embodiment, the processor 4702 is embodied as an executor of software instructions, wherein the instructions may specifically configure the processor 4702 to perform the algorithms and/or operations described herein when the instructions are executed.
The memory 4704 may be embodied as one or more volatile memory devices, one or more non-volatile memory devices, and/or a combination of one or more volatile memory devices and non-volatile memory devices. For example, the memory 4704 may be embodied as magnetic storage devices (such as hard disk drives, floppy disks, magnetic tapes, etc.), optical magnetic storage devices (e.g., magneto-optical disks), CD-ROM (compact disc read only memory), CD-R (compact disc recordable), CD-R/W (compact disc rewritable), DVD (Digital Versatile Disc), BD (Blu-ray® Disc), and semiconductor memories (such as mask ROM, PROM (programmable ROM), EPROM (erasable PROM), flash ROM, RAM (random access memory), etc.).

The system 4700 also includes an input/output module 4706 (hereinafter referred to as 'I/O module 4706') for providing an output and/or receiving an input. The I/O module 4706 is configured to be in communication with the processor 4702 and the memory 4704. Examples of the I/O module 4706 include, but are not limited to, an input interface and/or an output interface. Examples of the input interface may include, but are not limited to, a keyboard, a mouse, a joystick, a keypad, a touch screen, soft keys, a microphone, and the like. Examples of the output interface may include, but are not limited to, a display such as a light emitting diode display, a thin-film transistor (TFT) display, a liquid crystal display, an active-matrix organic light-emitting diode (AMOLED) display, a microphone, a speaker, a ringer, a vibrator, and the like. In an example embodiment, the processor 4702 may include I/O circuitry configured to control at least some functions of one or more elements of the I/O module 4706, such as, for example, a speaker, a microphone, a display, and/or the like. The processor 4702 and/or the I/O circuitry may be configured to control one or more functions of the one or more elements of the I/O module 4706 through computer program instructions, for example, software and/or firmware, stored on a memory, for example, the memory 4704, and/or the like, accessible to the processor 4702.

The communication interface 4708 may enable the system 4700 to communicate with other devices, such as users' devices and the server 112. The communication interface 4708 may be configured to communicate with various types of networks, such as the network 110 explained with reference to FIG. 1. In an embodiment, various components of the system 4700, such as the processor 4702, the memory 4704, the I/O module 4706 and the communication interface 4708, are configured to communicate with each other via or through a centralized circuit system 4710. The centralized circuit system 4710 may be various devices configured to, among other things, provide or enable communication between the components (4702-4708) of the system 4700. In certain embodiments, the centralized circuit system 4710 may be a central printed circuit board (PCB) such as a motherboard, a main board, a system board, or a logic board. The centralized circuit system 4710 may also, or alternatively, include other printed circuit assemblies (PCAs) or communication channel media. It is understood that the system 4700 as illustrated and hereinafter described is merely illustrative of a system that could benefit from embodiments of the invention and, therefore, should not be taken to limit the scope of the invention. It is noted that the system 4700 may include fewer or more components than those depicted in FIG. 47.
In an embodiment, the system 4700 may be implemented as a platform including a mix of existing open systems, proprietary systems and third party systems. In another embodiment, the system 4700 may be implemented completely as a platform including a set of software layers on top of existing hardware systems. In an embodiment, one or more components of the system 4700 may be deployed in a web server. In another embodiment, the system 4700 may be a standalone component in a remote machine connected to a communication network (such as the network 110 explained with reference to FIG. 1) and capable of executing a set of instructions (sequential and/or otherwise). Moreover, the system 4700 may be implemented as a centralized system, or, alternatively, the various components of the system 4700 may be deployed in a distributed manner while being operatively coupled to each other. In an embodiment, one or more functionalities of the system 4700 may also be embodied as a client within devices, such as users' devices. In another embodiment, the system 4700 may be a central system that is shared by or accessible to each of such devices.

FIG. 48 depicts an example system 4800 for generating and managing one or more plays, in accordance with an example embodiment of the invention. The system 4800 is an example representation of modules implemented in the server 112, the user device 104, the user device 108, or any combination thereof. The system 4800 includes one or more modules. A clipboard module 4802 provides options for creating and managing the clipboard. A connection module 4804 provides options for creating and managing connections of a user. A forum module 4806 provides options for creating and managing the forum. The forum module 4806 also includes a topic module 4808 for creating and managing topics. A sharing module 4810 provides options for sharing content via social means. A news module 4812 provides news to the users. A video module 4814 provides a set of videos to the users for viewing. A blog module 4816 provides options for the users to blog and to read blogs. An event module 4818 provides options to find events, and to create and share events with others. An account module 4820 facilitates creation and management of an account by the user. A team creation module 4822 provides options for team creation. A team management module 4824 provides options for managing content for team sharing and managing the team. A play module 4826 provides options for creation and management of plays. A playbook module 4828 provides options for creation and management of playbooks. A war room module 4830 provides options for creation and management of war rooms. The details of the options provided by each module are explained in detail in conjunction with FIGS. 3 to 46.

FIG. 49 depicts an example method for generating and managing one or more plays, in accordance with an example embodiment of the invention. The method 4900 can be performed by the server 112, by any of the users' devices, or by a combination of the server 112 and any of the users' devices. The method starts at operation 4902. At operation 4904, a play is drawn via an electronic forum. The electronic forum is an electronic portal or a software application that enables the various functionalities defined herein. A user (creator) registers with the electronic forum, creates an account and then sees options for creating the play. In response to user inputs, the play is created. In some embodiments, the user is also able to create a team or is already a part of a team.
The drawing includes one or more of defining movements of the players, alignment of the players, and location of the players. Where the sport is a field game such as football, the location includes the formation of the game. The drawing also includes allocating offensive or defensive spots or tags to the players to indicate a player being offensive or defensive. At operation 4906, the play is posted on the electronic forum and is available for access by other users of the electronic forum. The play is saved and can be searched. In addition, players can also be searched using tags, i.e., offensive or defensive, the location at which a user plays, or any other data associated with the players. At operation 4908, a user desires to access the play, and the user is provided access to the play. Access rights of the user or those associated with the play may be checked before providing access to the play to the team member. At operation 4910, an input for editing the play is received from the user or the creator, and the edit is performed at operation 4912. At operation 4914, the edited play is provided or made available to the users or to the user who created the play. The method 4900 stops at operation 4916.

FIG. 50 depicts another example method 5000 for generating and managing one or more plays, in accordance with another example embodiment. The method 5000 can be performed by the server 112, by any of the users' devices, or by a combination of the server 112 and any of the users' devices. The method starts at operation 5002. At operation 5004, a team is created in response to a team creation input provided by a user. Creation of the team includes selection of team members. The creation also includes allocating rights to team members. For example, the team members can be at least one of a member, a leader or an administrator. The administrator has rights to assign admin rights to other team members. The administrator also has rights to make a team member a member or a leader. The leader has more rights compared to the member. At operation 5006, a play is generated in response to a play generation input provided by the user. The play generation includes marking players and their moves. A virtual clipboard is used for this purpose. Drawing markers and various other buttons present on the clipboard are used for drawing the play. The play includes formations, configurations, annotations or any other form of play. At operation 5008, a playbook is generated. The playbook is a collection of plays that share some common concept. Many such playbooks can be generated. At operation 5010, a particular play or an entire playbook is shared with team members, i.e., users who are part of the team. Access is provided to the team members based on access rights associated with the play or playbook or with the team members. Access rights here indicate authentication or any other type of digital rights management. At operation 5012, one or more inputs are received from one or more team members for collaborating on the play or playbook. At operation 5014, the play or the playbook is edited based on the inputs. In addition, various other options are provided to enable communication regarding the play or playbook from within the game. Posting, commenting, blogging, news sharing, etc., are also possible. The ability to draw the play and post the play to the electronic forum, where another user can open and edit the play and then post a response, provides an efficient way to create and manage the electronic forum.
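To make the rights model of methods 4900 and 5000 concrete, the following Python sketch (one possible reading, with hypothetical names; the description leaves the data model open) assigns roles at team creation and checks access rights before a play is shared:

from enum import IntEnum
from typing import Optional

class Role(IntEnum):
    MEMBER = 1
    LEADER = 2
    ADMIN = 3   # the team creator; may grant admin rights to others

class Team:
    def __init__(self, creator: str):
        self.roles = {creator: Role.ADMIN}

    def add_member(self, user: str, role: Role = Role.MEMBER) -> None:
        self.roles[user] = role

    def can_manage(self, user: str) -> bool:
        # Leaders and admins manage the war room and the playbooks,
        # while members perform the more basic tasks.
        return self.roles.get(user, Role.MEMBER) >= Role.LEADER

    def share_play(self, play: dict, user: str) -> Optional[dict]:
        # Access rights are checked before the play is provided (operation 5010).
        return play if user in self.roles else None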
The shared team space also enables usage of the same tool for planning as for in-game communications. In some embodiments, access rights are also assigned to team members, or users, for accessing the play or playbooks. The method 5000 stops at operation 5016.

FIG. 51 illustrates play sharing on a play-sharing page 5100 on an electronic forum, in accordance with an example embodiment of the invention. The play-sharing page 5100 shows a play shared on the electronic forum. Other users are able to view the shared plays, open the plays and make their own updates, and repost as part of a reply or their own post. The plays allow participants to communicate complex ideas back and forth pictographically.

FIG. 52 illustrates presets of a virtual clipboard 5200 of an electronic forum, in accordance with another example embodiment. The virtual clipboard 5200 facilitates drawing typical plays by supplying preset "x" and "o" icons. There are also standard arrow and crossbar headings. There are also different colored markers, custom text for description and annotation, as well as preset formations and backgrounds based on sports.

FIG. 53 illustrates drawing on an image underlay 5300 on an electronic forum, in accordance with an example embodiment of the invention. The image underlay 5300 is a photo of a formation. This virtual clipboard enables the user to draw directly over the image underlay 5300. The server 112 uses algorithmic recognition to determine the teams and the locations of players. The server 112 then places those players in their corresponding positions on a standard clipboard with standard iconography, as shown in FIG. 54. FIG. 54 illustrates interpretation of the formation from FIG. 53 on an electronic forum, in accordance with an example embodiment of the invention.

FIG. 55 depicts an example method 5500 for generating and managing one or more plays, in accordance with an example embodiment of the invention. The method 5500 starts at operation 5502. At operation 5504, a request to access a play is received from a user. At operation 5506, a check is made to determine whether at least one of an administrator or a leader of the play is present in a war room. At operation 5508, a check is performed to determine the access rights of the user. At operation 5510, the user is provided access to the play if at least one of the administrator or the leader is present in the war room and if the user has access rights to the play (a minimal sketch of this check is provided below). In some embodiments, the user then modifies the play. The user inputs are received by the electronic forum and corresponding changes are made to the play. The user communicates with other team members via at least one of audio or video. The method 5500 stops at operation 5512.

FIG. 56 depicts an example method 5600 for generating and managing one or more plays, in accordance with an example embodiment of the invention. The method 5600 starts at operation 5602. At operation 5604, a first input is received from a user defining one or more players. The user may select an option provided via a user interface of the electronic forum to define a player. A node corresponding to the player is defined. One or more players can be defined in a similar manner. At operation 5606, a second input is received from the user defining at least one of a position or a strategic function of the one or more players. For example, the user may indicate that a particular player will be a forward position player for a game like football. The position can be defined by the user, and the strategic function or role of the forward player is assigned or defined for that player.
At operation 5608, a third input is received from the user defining movements of the one or more players. Different positions and functions can be assigned to a player for different instants. For example, at one instant the player can be in a forward position, while at another instant the player can be a midfielder. Various user interface options are provided to the user to define the position and the role of the one or more players at different instants of the game. Similarly, how the player moves and plays during the entire game can be defined. Relative positions with respect to different players can also be defined. Various formations of the players can also be defined. At operation 5610, the play, including the various inputs received from the user for the various players indicating how the players will play during the game, is stored for further access and editing. The method 5600 stops at operation 5612.

FIG. 57 depicts an example method 5700 for generating and managing one or more teams, in accordance with an example embodiment of the invention. The method 5700 starts at operation 5702. At operation 5704, a first input indicative of a first team member of a team is received. The first input can be provided via a user interface provided for generating the team. At operation 5706, a second input indicative of rights assigned to the first team member is received. The rights indicate access rights for the first team member and the role of the first team member. For example, the rights indicate whether the first team member is a mere team member or has administrative or leadership rights. Several similar inputs can be received from the user creating the team regarding multiple team members. Rights are assigned for each team member. The team member selection and rights allocation can happen one by one for each team member, or can happen in parallel for several team members. At operation 5708, the team, along with the appropriate rights of each team member, is stored for later access. The team is then used for various workflows such as sharing of plays and playbooks, participation in war rooms, etc. The method 5700 stops at operation 5710.

Without in any way limiting the scope, interpretation, or application of the claims appearing below, a technical effect of one or more of the example embodiments disclosed herein is to provide an electronic forum that overcomes shortcomings of conventional mechanisms for generating and sharing sports related information and further enables generating and managing plays in an efficient manner. More specifically, an electronic forum is disclosed that allows sports players/coaches to exchange ideas. Further, the electronic forum allows users to draw sports plays on a virtual clipboard. The virtual clipboard enables users to post formations, configurations, annotations and plays. Other users may view the plays, open the plays, make their own updates and repost them as part of a reply or their own post. The plays allow the users to communicate complex ideas back and forth pictographically. Furthermore, the electronic forum allows the users to share and discuss these plays in a forum, find news, videos, blogs and sports events, as well as connect with other users to discuss sports topics. A premium service of the electronic forum allows the users to save plays to their personal account and share them privately with other users: their team members.
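Returning to method 5500, the war room access check described above may be sketched as follows (a simplified Python illustration with assumed structures; the source defines only the checks, not their implementation):

from dataclasses import dataclass, field
from typing import List, Set

@dataclass
class User:
    name: str
    role: str = "member"   # "member", "leader", or "admin"

@dataclass
class WarRoom:
    present: List[User] = field(default_factory=list)
    play_access_rights: Set[str] = field(default_factory=set)

def may_enter_war_room(user: User, room: WarRoom) -> bool:
    # Operation 5506: at least one administrator or leader must be present.
    supervised = any(m.role in ("admin", "leader") for m in room.present)
    # Operation 5508: the requesting user must hold access rights to the play.
    authorized = user.name in room.play_access_rights
    # Operation 5510: access is provided only when both checks pass; otherwise
    # a window may inform the member that the room is closed.
    return supervised and authorized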
The users can further organize their plays into playbooks, and each team will have a war room where the team members can discuss and make shared plays.

Although the invention has been described with reference to specific exemplary embodiments, it is noted that various modifications and changes may be made to these embodiments without departing from the broad spirit and scope of the invention. For example, the various operations, blocks, etc., described herein may be enabled and operated using hardware circuitry (for example, complementary metal oxide semiconductor (CMOS) based logic circuitry), firmware, software and/or any combination of hardware, firmware, and/or software (for example, embodied in a machine-readable medium). For example, the systems and methods may be embodied using transistors, logic gates, and electrical circuits (for example, application specific integrated circuit (ASIC) circuitry and/or Digital Signal Processor (DSP) circuitry). Particularly, the system 4700 and its various components may be enabled using software and/or using transistors, logic gates, and electrical circuits (for example, integrated circuit circuitry such as ASIC circuitry).

Various embodiments of the invention may include one or more computer programs stored or otherwise embodied on a computer-readable medium, wherein the computer programs are configured to cause a processor or computer to perform one or more operations (for example, operations explained herein with reference to FIGS. 49, 50, 55, 56 and 57). A computer-readable medium storing, embodying, or encoded with a computer program, or similar language, may be embodied as a tangible data storage device storing one or more software programs that are configured to cause a processor or computer to perform one or more operations. Such operations may be, for example, any of the steps or operations described herein. In some embodiments, the computer programs may be stored and provided to a computer using any type of non-transitory computer readable media. Non-transitory computer readable media include any type of tangible storage media. Examples of non-transitory computer readable media include magnetic storage media (such as floppy disks, magnetic tapes, hard disk drives, etc.), optical magnetic storage media (e.g., magneto-optical disks), CD-ROM (compact disc read only memory), CD-R (compact disc recordable), CD-R/W (compact disc rewritable), DVD (Digital Versatile Disc), BD (BLU-RAY® Disc), and semiconductor memories (such as mask ROM, PROM (programmable ROM), EPROM (erasable PROM), flash memory, RAM (random access memory), etc.). Additionally, a tangible data storage device may be embodied as one or more volatile memory devices, one or more non-volatile memory devices, and/or a combination of one or more volatile memory devices and non-volatile memory devices. In some embodiments, the computer programs may be provided to a computer using any type of transitory computer readable media. Examples of transitory computer readable media include electric signals, optical signals, and electromagnetic waves. Transitory computer readable media can provide the program to a computer via a wired communication line (e.g., electric wires and optical fibers) or a wireless communication line. Various embodiments of the invention, as discussed above, may be practiced with steps and/or operations in a different order, and/or with hardware elements in configurations which are different from those that are disclosed.
Therefore, although the invention has been described based upon these exemplary embodiments, it is noted that certain modifications, variations, and alternative constructions may be apparent and well within the spirit and scope of the invention. Although various exemplary embodiments of the invention are described herein in language specific to structural features and/or methodological acts, the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as exemplary forms of implementing the claims. | 56,611 |
11862037 | Certain implementations will now be described more fully below with reference to the accompanying drawings, in which various implementations and/or aspects are shown. However, various aspects may be implemented in many different forms and should not be construed as limited to the implementations set forth herein; rather, these implementations are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art. Like numbers in the figures refer to like elements throughout. Hence, if a feature is used across several drawings, the number used to identify the feature in the drawing where the feature first appeared will be used in later drawings.

DETAILED DESCRIPTION Overview Example embodiments described herein provide certain systems, methods, and devices for detection and correction of eating behavior.

Consumable products, such as foods, beverages, nutritional products, vitamins, pharmaceuticals, biologics, and others, may cause effects on people who consume them. For example, because some carbohydrates may be broken down into sugar, such as glucose, some foods high in carbohydrates may cause an increase in a person's blood sugar. Alcohol content may cause an increase in a person's blood alcohol levels or heart rate. Foods or beverages with high fat, salt, or caloric content may cause higher blood pressure. Caffeine may increase a person's heart rate. Consumable products also may exhibit characteristics associated with their consumption. For example, the opening of a can or bottle with a carbonated beverage may exhibit a distinct sound. The chewing of crunchy food such as potato chips may sound different than the chewing of an apple.

Patients and medical professionals may benefit from customizing the monitoring of biomedical data based on when and what a patient may be consuming. For example, the sampling rate for a person's heartrate may increase when a person eats or drinks to better capture the effects of the consumable product on the person, and the movement or activity of a person may be monitored differently when a user is consuming a product. Because audio may be sampled at a low frequency over small intervals, such audio sampling may capture audio data used to identify when a person is eating or drinking, and may trigger a higher frequency sampling or higher sampling rate for more detailed data collection. Therefore, by detecting audio associated with consumption of a product and/or opening of a package, the process of determining that someone is consuming a consumable product and identifying that consumable product may be automated. By identifying biomedical data which may be affected by the consumption of a particular type of consumable product, additional data including the biomedical data may be monitored in customizable ways for the effects that the consumable product has on the person who consumed the product. In this manner, systems, devices, and methods may detect eating and drinking behavior, and may correct the behavior by encouraging a person to avoid consuming certain products or to substitute healthier alternative products.

In one or more embodiments, a wearable device such as a watch, a ring, glasses, a headband, or a medical device may record audio (with a user's consent) using one or more audio sensors (e.g., microphones, electromyography sensors). Captured audio data may be analyzed by the device or sent to another device for analysis.
The audio data may be converted to a sound profile. For example, a sound profile may include a frequency distribution of captured audio signals over time. A device may compare the sound profile to known sound profiles of consumable products. For example, the crunch of potato chips may match a known sound profile for potato chips. The crisp sound of a user biting into an apple may have a distinct sound profile, as may the sound of swallowing a liquid, opening a carbonated beverage or a bag, opening and closing a refrigerator, an active microwave, and the like. Audio profiles of consumable products may be differentiated from audio profiles of other types of noises or sounds, such as talking (e.g., voice) or certain types of background noise (e.g., sounds of musical instruments, automobiles, computer devices, etc.). Machine learning using neural networks or other types of models may be used to identify sounds and words to identify when a user is consuming a product, is about to consume a product, or has recently consumed a product. Using sound profiles, a device may determine a specific product or type of product that a person may be consuming.

In one or more embodiments, a device may determine characteristics of a product once the product has been identified. For example, a cheeseburger may have high cholesterol and may trigger higher blood pressure for a person, as may potato chips or other foods known to be salty. Candy may include sugar, which may cause an increase in a person's blood glucose levels. Spicy or acidic products may cause indigestion or acid reflux. A caffeinated product may increase a person's heart rate. When a device determines the product or type of product that a person may be consuming, the device may determine corresponding characteristics of the product, and may determine data which may be associated with the effects of the characteristics. For example, if a characteristic of a sugary food or drink is to increase blood glucose levels, a device may determine that blood glucose data may indicate the effects of consuming the sugary food or drink. If caffeine products are known to increase heartrate, a device may determine that monitoring a user's heartrate may provide an indication of the effects of consuming caffeine.

In one or more embodiments, a device may determine that another device or an application is responsible for detecting or otherwise collecting data associated with a characteristic of a consumable product. For example, a blood glucose monitor may measure blood glucose levels. A heartrate monitor may capture heartrate data. A hydration sensor may measure a user's dehydration. A pacemaker may recognize a person's electrocardiogram. A thermometer may measure a person's temperature. An accelerometer, a magnetometer, wireless signals (e.g., Bluetooth or Wi-Fi signals), or global navigation satellite system signals may be used (with a user's consent) to determine a device's motion or location, and the motion or location data may confirm whether the user is at a location (e.g., a restaurant) or moving (e.g., motioning an arm or hand toward the face) in a manner which indicates likely consumption of a product (e.g., and may be used to supplement audio data for the purpose of determining when a user is consuming a product). A hydrogen sensor may measure a user's indigestion.
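As a minimal sketch of the sound-profile comparison described above (assuming NumPy, a coarse band-energy representation, and illustrative reference profiles; the description does not specify an algorithm), a captured clip may be reduced to a frequency distribution and matched against known profiles:

import numpy as np

def sound_profile(samples: np.ndarray, bands: int = 32) -> np.ndarray:
    # Reduce a clip to a normalized distribution of energy over frequency bands.
    spectrum = np.abs(np.fft.rfft(samples)) ** 2
    band_energy = np.array([b.sum() for b in np.array_split(spectrum, bands)])
    return band_energy / (band_energy.sum() + 1e-12)

def best_match(profile: np.ndarray, known: dict) -> tuple:
    # Compare against known sound profiles of consumable products;
    # the similarity score doubles as a confidence level.
    scores = {name: float(np.dot(profile, ref)) for name, ref in known.items()}
    name = max(scores, key=scores.get)
    return name, scores[name]

# Illustrative placeholder profiles (hypothetical values, not from the source).
KNOWN_PROFILES = {
    "potato_chips_crunch": np.full(32, 1.0 / 32),
    "carbonated_can_opening": np.arange(1.0, 33.0) / np.arange(1.0, 33.0).sum(),
}

An actual system would derive the reference profiles from labeled recordings, and could replace the dot-product comparison with the neural network classification mentioned above.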
When a device determines a characteristic of a consumable product and an associated type of data which may measure the effects of the characteristic on a person consuming the consumable product, the device may identify another device or an application responsible for capturing the associated type of data, and may request the associated data. The request for the data may include a specification of a sampling rate or frequency. For example, one device may request that another device provide data captured at a particular rate or frequency (e.g., a higher sampling rate or frequency than normal). Such an arrangement may allow devices to conserve power and resources (e.g., by not sampling at higher rates or frequencies unless a user is consuming something).

In one or more embodiments, with a user's consent, a device may help a user regulate their intake of consumable products and may provide recommendations for products, when to consume or not consume, locations where consumable products are available, nutritional information, warnings/alerts, alarms to medical professionals or other parties or devices, and the like. For example, when a device detects that a user is eating food late at night (e.g., outside of a normal window of time associated with eating meals), the device may present alarms or messages encouraging the user to eat something healthier or to wait until the next meal, or may indicate the effects that consuming a product may have on the person. The device may provide recommendations of healthier products to substitute, such as substituting fruit and vegetables for a less healthy product.

In one or more embodiments, devices may connect to one another using a variety of connection methods such as Wi-Fi, Bluetooth, and ultrasound, and may use direct or other peer-to-peer connections (e.g., Wi-Fi Direct, neighbor awareness networking, etc.) to communicate with one another. For example, a smart phone, tablet, or other mobile device may execute applications which may collect data from other devices, such as heartrate monitors, blood glucose monitors, pacemakers, thermometers, hydrogen sensors, exercise monitors, step monitors, and the like. A device may capture audio, and when the audio indicates a user's consumption of a product, the device may request that a biomedical sensor or another mobile device in communication with the biomedical sensor provide other data, such as additional audio data, biomedical data, user profile or preference data, and the like. A device may collect such data, with user consent, and may analyze the data to determine that a user is consuming a product, what the product is, characteristics of the product, data indicative of or affected by the characteristics, and the effects of the product on a person. When audio data is strongly indicative of consumption (e.g., a confidence level associated with the sound profile of a consumable product exceeds a threshold confidence), the data collected by a mobile device may be analyzed for the effects that a consumable product has on a person. When the confidence level associated with the sound profile of a consumable product does not exceed a threshold confidence, data may be collected from or sent to a remote service (e.g., a cloud server) for analysis or collection.

In one or more embodiments, the time and duration of consumption may be indicative of an amount of a product consumed and/or when the product is consumed.
If audio data indicates that a user is consuming products outside of normal meal times, or that a user is consuming products for a long time (e.g., longer than a time threshold), a device may determine that a user may be consuming too much, too little, and/or consuming products at unhealthy times of day (e.g., right before sleeping). Such a determination may trigger the generation of messages or alarms, and/or the capturing of relevant biomedical data to monitor.

The above descriptions are for purposes of illustration and are not meant to be limiting. Numerous other examples, configurations, processes, etc., may exist, some of which are described in greater detail below. Example embodiments will now be described with reference to the accompanying figures.

Illustrative Processes and Use Cases FIG. 1 illustrates an example system 100 for detection and correction of eating behavior, in accordance with one or more example embodiments of the present disclosure. Referring to FIG. 1, the system 100 may include a user 102 wearing a wearable device 104, shown as a watch. The user 102 may have a user device 106, which may be in communication with one or more biomedical devices (e.g., device 108, device 110). The user device 106 may collect data using one or more applications (e.g., application 112). For example, the application 112 may collect audio data captured by the wearable device 104, time data (e.g., a time of day, a duration associated with captured data), heart data (e.g., heartrate data or electrocardiogram data captured by the device 108), blood glucose data (e.g., captured by the device 108 or the device 110), movement data (e.g., captured by the user device 106), and location data (e.g., as captured by the user device 106). The data captured by the wearable device 104, the user device 106, the device 108, and/or the device 110 may be collected with user consent (e.g., the user may be prompted to confirm whether to allow data to be collected and/or tracked).

Still referring to FIG. 1, the wearable device 104 may capture (e.g., record) audio data of the user 102 (with user consent) consuming one or more consumable products according to a variety of scenarios. In scenario 114, a user may be drinking a liquid with one hand, and wearing the wearable device 104 on the other hand/arm. In scenario 116, a user may be eating a product with one hand, and wearing the wearable device 104 on the other hand/arm. In scenario 118, a user may be drinking a liquid with the same hand/arm wearing the wearable device 104. In scenario 120, a user may be eating a product with the same hand/arm wearing the wearable device 104. In any scenario, the wearable device 104 may capture audio such as chewing, swallowing, opening a package or container (e.g., a bottle, can, bag, box, jar, etc.), opening or closing a refrigerator or microwave, or audio of a person talking (e.g., audio including keywords regarding the consumption of a product or a location where consumable products may be sold).

Still referring to FIG. 1, the user device 106 may communicate (e.g., using one or more communication networks 130) with one or more servers 140 (e.g., cloud-based servers), and with one or more devices 150 (e.g., computer device 152, treadmill 154, refrigerator 156) using the one or more communication networks 130 or using a direct connection (e.g., Wi-Fi, Bluetooth, ultrasound).
The one or more servers 140 may receive data captured by the wearable device 104, the user device 106, the device 108, the device 110, and/or the one or more devices 150 and may analyze the data, or any combination of the one or more servers 140, the user device 106, and the wearable device 104 may analyze the captured data. With user consent, the one or more servers 140 may provide user data, such as health data, data regarding the user's product consumption habits and history, exercise and other activity data, and the like. The one or more devices 150 may provide data indicating when a user exercised or bought consumable products (e.g., using browsing or other search history from the computer device 152, or medical data such as medical history or prescription product history from the computer device 152). Such data from the one or more devices 150 may indicate activity options (e.g., exercising options available to a user) and may be used for analysis regarding whether a user is exercising after consuming certain types of products.

In one or more embodiments, the user device 106 may include any suitable processor-driven device including, but not limited to, a mobile device or a non-mobile (e.g., a static) device. For example, the user device 106 may include a user equipment (UE), a station (STA), an access point (AP), a software enabled AP (SoftAP), a personal computer (PC), a wearable wireless device (e.g., bracelet, watch, glasses, ring, etc.), a desktop computer, a mobile computer, a laptop computer, an Ultrabook™ computer, a notebook computer, a tablet computer, a server computer, a handheld computer, a handheld device, an internet of things (IoT) device, a sensor device, a PDA device, a handheld PDA device, an on-board device, an off-board device, a hybrid device (e.g., combining cellular phone functionalities with PDA device functionalities), a consumer device, a vehicular device, a non-vehicular device, a mobile or portable device, a non-mobile or non-portable device, a mobile phone, a cellular telephone, a PCS device, a PDA device which incorporates a wireless communication device, a mobile or portable GPS device, a DVB device, a relatively small computing device, a non-desktop computer, a "carry small live large" (CSLL) device, an ultra mobile device (UMD), an ultra mobile PC (UMPC), a mobile internet device (MID), an "origami" device or computing device, a device that supports dynamically composable computing (DCC), a context-aware device, a video device, an audio device, an A/V device, a set-top-box (STB), a Blu-ray disc (BD) player, a BD recorder, a digital video disc (DVD) player, a high definition (HD) DVD player, a DVD recorder, an HD DVD recorder, a personal video recorder (PVR), a broadcast HD receiver, a video source, an audio source, a video sink, an audio sink, a stereo tuner, a broadcast radio receiver, a flat panel display, a personal media player (PMP), a digital video camera (DVC), a digital audio player, a speaker, an audio receiver, an audio amplifier, a gaming device, a data source, a data sink, a digital still camera (DSC), a media player, a smartphone, a television, a music player, or the like. Other devices, including smart devices such as lamps, climate control, car components, household components, appliances, etc., may also be included in this list.
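Referring back to the confidence thresholds described above, the routing of collected data between on-device analysis and the one or more servers 140 might be sketched as follows in Python; the threshold value and function names are illustrative assumptions, not details from the source:

CONFIDENCE_THRESHOLD = 0.8   # illustrative value; not specified by the source

def route_analysis(match_confidence: float, data: dict) -> str:
    if match_confidence >= CONFIDENCE_THRESHOLD:
        # Strongly indicative of consumption: analyze on the mobile device.
        analyze_locally(data)
        return "local"
    # Otherwise, collect from or send to a remote service for analysis.
    send_to_cloud(data)
    return "cloud"

def analyze_locally(data: dict) -> None:
    pass   # placeholder for on-device analysis of biomedical effects

def send_to_cloud(data: dict) -> None:
    pass   # placeholder for upload to the one or more servers 140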
For example, the device108and/or the device110may include a blood glucose monitor, a heartrate monitor, electrodes, a pacemaker, a thermometer, a hydration monitor, a hydrogen sensor, or other sensors or devices capable of detecting user data with a user's consent. One or more applications executable by the user device106may collect and analyze data from the device108and/or the device110, and may send the data to the one or more servers140for analysis, or may analyze the data locally on the user device106. The analysis of the data may be supplemented by data from the one or more devices150(e.g., to determine user purchasing and/or exercising habits). In one or more embodiments, the wearable device104may include a wearable wireless device (e.g., bracelet, watch, glasses, ring, etc.) capable of capturing audio with one or more sensors (e.g., one or more microphones and/or electromyography sensors, not shown). The one or more sensors may be arranged to detect sounds at different levels and/or in different ranges or directions from the wearable device104. The use of multiple sensors may allow for noise cancelation (e.g., background noise suppression) while preserving sounds relevant to consumption of a product. The audio data may be analyzed by the wearable device104, the user device106, and/or the one or more servers140to determine whether the audio data indicates that the user102is consuming a product. For example, captured audio data may be analyzed by the wearable device104or sent to another device (e.g., the user device106or the one or more servers140) for analysis. The audio data may be converted to a sound profile. For example, a sound profile may include a frequency distribution of captured audio signals over time. The wearable device104, the user device106, or the one or more servers140may compare the sound profile to known sound profiles of consumable products. For example, the crunch of potato chips may match a known sound profile for potato chips. The crisp sound of a user biting into an apple may have a distinct sound profile, as may the sound of swallowing a liquid, opening a carbonated beverage or bag, opening and closing a refrigerator, an active microwave, and the like. Audio profiles of consumable products may be differentiated from audio profiles of other types of noises or sounds, such as talking (e.g., voice) or certain types of background noise (e.g., sounds of musical instruments, automobiles, computer devices, etc.). Machine learning modules142may use neural networks or other types of machines (e.g., implemented by the one or more servers140) may be used to identify sounds and words to identify when a user is consuming a product, about to consume a product, and has recently consumed a product. Using sound profiles, the wearable device104, the user device106, or the one or more servers140may determine a specific product or type of product that a person may be consuming. In one or more embodiments, when captured audio by the wearable device104or another device matches an audio profile of a consumable product, the wearable device104, the user device106, or the one or more servers140may determine one or more characteristics associated with the consumable product. For example, a cheeseburger may have high cholesterol and may trigger a higher blood pressure for a person, as may potato chips or other foods known to be salty. Candy may include sugar which may cause an increase in a person's blood glucose levels. Spicy or acidic products may cause indigestion or acid reflux. 
A caffeinated product may increase a person's heart rate. When the wearable device 104, the user device 106, or the one or more servers 140 determines the product or type of product that a person may be consuming, the wearable device 104, the user device 106, or the one or more servers 140 may determine corresponding characteristics of the product, and may determine data which may be associated with the effects of the characteristics. For example, if a characteristic of a sugary food or drink is to increase blood glucose levels, the wearable device 104, the user device 106, or the one or more servers 140 may determine that blood glucose data may indicate the effects of consuming the sugary food or drink. If caffeine products are known to increase heartrate, the wearable device 104, the user device 106, or the one or more servers 140 may determine that monitoring a user's heartrate may provide an indication of the effects of consuming caffeine.

In one or more embodiments, when the wearable device 104, the user device 106, or the one or more servers 140 determines one or more characteristics associated with a consumable product, the wearable device 104, the user device 106, or the one or more servers 140 may determine a measurable attribute to capture, an application (e.g., the application 112) that may capture data indicative of the measurable attribute, and/or a device (e.g., the device 108, the device 110, the one or more devices 150) associated with capturing and/or providing the data indicative of the measurable attribute. The measurable attribute may include blood glucose levels or other blood sugar levels, heartrate, electrocardiogram data, hydrogen data, breathing data, perspiration data, movement or activity data, biomedical cell data, skin data, tissue data, circulatory data, blood content, blood alcohol data, and the like. With user consent, the wearable device 104, the user device 106, or the one or more servers 140 may determine which device and/or application may provide such data for the measurable attribute.

In one or more embodiments, when the wearable device 104, the user device 106, or the one or more servers 140 determines a device and/or application which may capture and provide data for the measurable attribute associated with a characteristic of a consumable product, the wearable device 104, the user device 106, or the one or more servers 140 may request the data for the measurable attribute at a particular sampling rate or frequency. For example, a device may sample the data for the measurable attribute at one sampling rate or frequency, and the wearable device 104, the user device 106, or the one or more servers 140 may request that the sampling rate or frequency be increased at least for a time period (e.g., until it is determined that the user is no longer consuming the product, or until a threshold time after consumption of the product). The wearable device 104, the user device 106, or the one or more servers 140 may receive the data captured at the increased frequency or sampling rate, allowing the capturing device to conserve resources by sampling at a lower frequency or sampling rate outside of such requests.
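A minimal sketch of the sampling-rate request just described (the message fields, rates, and names are assumptions; the source defines no message format): the requesting device asks the capturing device to raise its rate for a bounded period, after which the capturing device reverts to a resource-conserving default.

import time

DEFAULT_RATE_HZ = 1.0    # illustrative resting sampling rate
BOOSTED_RATE_HZ = 10.0   # illustrative rate while consumption is detected

def make_sampling_request(attribute: str, duration_s: float) -> dict:
    # e.g., attribute = "heart_rate" or "blood_glucose"
    return {
        "attribute": attribute,
        "rate_hz": BOOSTED_RATE_HZ,
        "expires_at": time.time() + duration_s,
    }

def current_rate(request: dict) -> float:
    # The boosted rate is honored only until the request expires (e.g., a
    # threshold time after consumption of the product).
    if request and time.time() < request["expires_at"]:
        return request["rate_hz"]
    return DEFAULT_RATE_HZ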
The wearable device104, the user device106, or the one or more servers140may store data associated with a user and with a product that indicates user reactions associated with the measurable attribute (e.g., changes in heartrate, blood sugar, hydration levels, breathing levels, heart waves, blood pressure, neurological data, and the like). In one or more embodiments, the wearable device104, the user device106, or the one or more servers140may generate one or more messages, alarms, or alerts based on the data. For example, if the wearable device104, the user device106, or the one or more servers140determines from prior association data that a product or similar product causes a negative biomedical effect of a user, the wearable device104, the user device106, or the one or more servers140may generate a message or alarm intended to discourage the user from consuming the product, may recommend substitute products known to cause less of the biomedical effect (e.g., not associated with the consumable product's characteristics), and/or may send alerts to other devices to let other users know that the person is consuming a product known to cause a negative biomedical effect. As shown inFIG.1, the wearable device104may display nutritional information, including the content of sodium and fat, along with a recommendation to try a healthier product (e.g., to substitute an apple for salty potato chips). Messages and alerts may provide any combination of information related to a user's health (e.g., heartrate, blood glucose, blood pressure, etc.), information about what the user is consuming (e.g., nutritional information, health-related effects, etc.), messages to encourage or discourage consumption of certain products, offers for similar or substitute products, notifications of locations where products may be purchased, notifications regarding a user's exercise habits and/or exercise options and locations (e.g., that the treadmill154or other exercise equipment like an exercise bicycle are available and have not been used since a given time), and the like. In one or more embodiments, with user consent, recording audio or other data by the wearable device104may activate or change based on a time, location, position, or movement of the wearable device104. For example, when a time of day is within a selected or known meal time (e.g., a range of time in the morning for breakfast, a range of time in the afternoon for lunch, a range of time in the evening for dinner), the wearable device104may activate recording or increase sampling frequency. In this manner, activation may include initiating or powering on one or more components of the wearable device104(e.g., such as microphones or other audio sensors), and may include adjusting a sampling rate or frequency. When the wearable device104is in a position or orientation associated with consumption of a product, the wearable device104may activate recording or increase sampling frequency. For example, using accelerometer, magnetometer, or other device data, the wearable device104may determine that a user's arm or hand is at an angle (e.g., within an angular range) with respect to one or more additional sensors on other devices and/or with respect to gravity known to be associated with bringing a consumable product to a user's face for consumption. 
In one or more embodiments, with user consent, when the wearable device104is in such a position or orientation, the wearable device104may activate a timer to determine the duration that the wearable device104is in the position or orientation. The timer may capture time indicating a duration (e.g., how long a user is consuming a product), which may be correlated with an amount of product consumption (e.g., the longer the duration, the more product is consumed). The wearable device104may deactivate recording or decrease sampling frequency when the wearable device104determines that it is no longer in a consumption position or orientation. The wearable device104may determine the time at which a user may be consuming a product, and may generate messages based on the time (e.g., to not eat in between meals). The wearable device104may use global navigation satellite system data, Wi-Fi data, Bluetooth data, ultrasound data, accelerometer data, magnetometer data, or other data to identify its location. The wearable device104may determine (e.g., using a map or other type of application executable on the wearable device104or by the user device106) whether the device's current location is at or near (e.g., within a distance threshold of) a restaurant or other provider of consumable products, and may generate offers, incentives, alternative options, or messages discouraging the consumption of certain products. In one or more embodiments, to identify products based on what the user is determined to be consuming, the wearable device104, the user device106, and/or the one or more servers140may use product identifiers. For example, when audio data matches a sound profile associated with consumption of a product, the product may have a product identifier. The wearable device104, the user device106, and/or the one or more servers140may store and/or access data including related or different products. For example, given a product identifier, the wearable device104, the user device106, and/or the one or more servers140may identify other products having similar characteristics (e.g., health characteristics, nutritional content, types of products, a same brand, same effects on a person's health, such as decreased heartrate or blood pressure, etc.) or substitute products (e.g., healthier products not known to cause the same level of effects such as heartrate or blood pressure changes, products with less content of certain ingredients such as sugar or fat, etc.). In one or more embodiments, the one or more communications networks130may include, but are not limited to, any one of a combination of different types of suitable communications networks such as, for example, broadcasting networks, cable networks, public networks (e.g., the Internet), private networks, wireless networks, cellular networks, or any other suitable private and/or public networks. Further, any of the one or more communications networks130may have any suitable communication range associated therewith and may include, for example, global networks (e.g., the Internet), metropolitan area networks (MANs), wide area networks (WANs), local area networks (LANs), or personal area networks (PANs).
In addition, any of the one or more communications networks may include any type of medium over which network traffic may be carried, including, but not limited to, coaxial cable, twisted-pair wire, optical fiber, a hybrid fiber coaxial (HFC) medium, microwave terrestrial transceivers, radio frequency communication mediums, white space communication mediums, ultra-high frequency communication mediums, satellite communication mediums, or any combination thereof. FIG.2illustrates an example process200for detection and correction of eating behavior, in accordance with one or more example embodiments of the present disclosure. Referring toFIG.2, a person202wearing a wearable device204(e.g., having functionality as described with regard to the wearable device104ofFIG.1) may consume a product206(e.g., a liquid or type of beverage). With user consent, the wearable device204may capture audio208of the person202consuming the product (e.g., a swallowing sound). At block212of the process200, the wearable device204may detect the audio208(e.g., using one or more microphones or other audio sensors). The audio208may include chewing, swallowing, opening the product206, words spoken by the person202, sounds made by the product206(e.g., carbonated beverage sounds), or other audio. Still referring toFIG.2, the wearable device204optionally may consider its position, orientation, and/or movement. For example, at block214, the wearable device204may detect its position, movement, or orientation (e.g., angles, rotation, movement directions, etc.), and at block216may determine that the wearable device204is in a position within a threshold range (e.g., angular range indicative of a tilt, height within a threshold distance of a user's face, angular range with respect to gravity or one or more other devices, etc.). Based on the audio208and optionally the position, movement, or orientation data, the wearable device204at block218may determine that the person202is consuming the product206. For example, the wearable device204may determine that the audio208matches one or more sound profiles for various products or types of products (e.g., food, beverage, etc.). The sound profiles may be ranked (e.g., with respective scores indicating the likelihood that the sound profile matches the audio), and the sound profile with the highest score may be selected. The product or product type corresponding to the selected sound profile may be identified by the wearable device204as the product206. Still referring toFIG.2, at block220, the wearable device204may determine that additional data associated with a measurable attribute (e.g., heartrate, blood pressure, blood sugar, etc.) is to be captured. For example, when the product206is known (e.g., based on a product profile stored and accessed based on the corresponding product identifier) to have high cholesterol content, the wearable device204may determine corresponding characteristics of the product (e.g., increased blood pressure), and may determine data (e.g., blood pressure data) which may be associated with the effects of the characteristic. When a characteristic of a sugary food or drink is to increase blood glucose levels, the wearable device204may determine that blood glucose data may indicate the effects of consuming the sugary food or drink. If caffeine products are known to increase heartrate, the wearable device204may determine that monitoring a heartrate of the person202may provide an indication of the effects of consuming caffeine.
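The ranking and selection described with regard to block218may be sketched as follows, assuming (for illustration only) that sound profiles are fixed-length magnitude-spectrum vectors and that cosine similarity serves as the match score; any comparable scoring function could be substituted.

```python
import numpy as np

def rank_and_select(profile: np.ndarray,
                    known_profiles: dict[str, np.ndarray],
                    min_score: float = 0.8):
    """Score a captured sound profile against known product profiles, rank the
    candidates by score, and return (product_id, score) for the best match,
    or None when no candidate clears the minimum score."""
    def cosine(a: np.ndarray, b: np.ndarray) -> float:
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

    ranked = sorted(((pid, cosine(profile, p)) for pid, p in known_profiles.items()),
                    key=lambda pair: pair[1], reverse=True)
    if not ranked or ranked[0][1] < min_score:
        return None  # nothing matched well enough to identify a product
    return ranked[0]
```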
The wearable device204may determine that another device (e.g., the device108or the device110ofFIG.1) or an application (e.g., the application112ofFIG.1) is responsible for detecting or otherwise collecting data associated with a characteristic of the product206. At block222, when the wearable device204determines a characteristic of the product206and an associated type of data (e.g., a measurable attribute) which may measure the effects of the characteristic on the person202consuming the product206, the wearable device204may identify another device (e.g., the device108or the device110ofFIG.1) or an application (e.g., the application112ofFIG.1) responsible for capturing the associated type of data (e.g., as additional data), and may request and receive the associated data. The additional data may include additional audio data, existing audio data, biomedical data, data from one or more other devices, and/or other types of data. The request for the data may include specification of a sampling rate or frequency with which to capture or otherwise detect the additional data. At block224, the wearable device may receive the additional data. In one or more embodiments, the request for additional data (e.g., biomedical data) sent by the wearable device204may be sent to another device (e.g., the user device106ofFIG.1), which may execute one or more applications (e.g., the application112ofFIG.1), which may collect biomedical and/or other data from other devices (e.g., the device108and/or the device110ofFIG.1). The other devices may include a blood glucose monitor, a heartrate monitor, electrodes, a hydration monitor, a hydrogen sensor, or other sensors or devices capable of detecting user data with a user's consent. In one or more embodiments, the wearable device204may include a wearable wireless device (e.g., bracelet, watch, glasses, ring, etc.) capable of capturing audio with one or more sensors (e.g., a microphone, not shown). The audio data may be analyzed by the wearable device204or another device to determine whether the audio208indicates that the person202is consuming the product206and what the product206is. In one or more embodiments, the wearable device204or another device may analyze the additional data, and may determine an association between the additional data of a measurable attribute and the product206, along with similar products (e.g., identified using the product identifier for the product206). The wearable device204or another device may store data associated with the person202and with the product206that indicates user reactions associated with the measurable attribute (e.g., changes in heartrate, blood sugar, hydration levels, breathing levels, heart waves, blood pressure, neurological data, and the like). In one or more embodiments, the wearable device204may generate one or more messages, alarms, or alerts based on the audio208and/or the additional data. For example, if the wearable device204determines from prior association data that the product206or a similar product causes a negative biomedical effect of the person202or another person, the wearable device204may generate a message or alarm intended to discourage the person202from consuming the product206, may recommend substitute products known to cause less of the biomedical effect (e.g., not associated with the consumable product's characteristics), and/or may send alerts to other devices to let other users know that the person202is consuming the product206known to cause a negative biomedical effect. 
The wearable device204may display nutritional information, including the content of sodium and fat, along with a recommendation to try a healthier product (e.g., to substitute water for a sugary drink). Messages and alerts may provide any combination of information related to the person's health (e.g., heartrate, blood glucose, blood pressure, etc.), information about what the person202is consuming (e.g., nutritional information, health-related effects, etc.), messages to encourage or discourage consumption of certain products, offers for similar or substitute products, notifications of locations where products may be purchased, notifications regarding a user's exercise habits and/or exercise options and locations, and the like. In one or more embodiments, with user consent, recording the audio208or other data by the wearable device204may activate or change based on a time, location, position, or movement of the wearable device204. For example, when a time of day is within a selected or known meal time (e.g., a range of time in the morning for breakfast, a range of time in the afternoon for lunch, a range of time in the evening for dinner), the wearable device204may activate recording or increase sampling frequency. In this manner, activation may include initiating or powering on one or more components of the wearable device204(e.g., such as microphones or other audio sensors), and may include adjusting a sampling rate or frequency. When the wearable device204is in a position or orientation associated with consumption of a product, the wearable device204may activate recording or increase sampling frequency. For example, using accelerometer, magnetometer, or other device data, the wearable device204may determine that a user's arm or hand is at an angle (e.g., within an angular range) with respect to one or more additional sensors on other devices and/or with respect to gravity known to be associated with bringing a consumable product to a user's face for consumption. In one or more embodiments, with user consent, when the wearable device204is in such a position or orientation, the wearable device204may activate a timer to determine the duration that the wearable device204is in the position or orientation. The timer may capture time indicating a duration (e.g., how long a user is consuming a product), which may be correlated with an amount of product consumption (e.g., the longer the duration, the more product is consumed). The wearable device204may deactivate recording or decrease sampling frequency when the wearable device204determines that it is no longer in a consumption position or orientation. The wearable device204may determine the time at which a user may be consuming a product, and may generate messages based on the time (e.g., to not eat in between meals). The wearable device204may use global navigation satellite system data, Wi-Fi data, Bluetooth data, ultrasound data, accelerometer data, magnetometer data, or other data to identify its location. The wearable device204may determine (e.g., using a map or other type of application executable on the wearable device204) whether the device's current location is at or near (e.g., within a distance threshold of) a restaurant or other provider of consumable products, and may generate offers, incentives, alternative options, or messages discouraging the consumption of certain products.
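A minimal sketch of the orientation-based activation and gesture timing described above is shown below. The angular range, the assumed device axis, and the use of the accelerometer vector as a gravity estimate are illustrative assumptions; a deployed device would calibrate these per user and per wearing position.

```python
import time
import numpy as np

def tilt_degrees(accel_xyz) -> float:
    """Angle between the measured acceleration vector (approximately gravity
    when the wrist is roughly still) and the device's assumed 'level' axis."""
    a = np.asarray(accel_xyz, dtype=float)
    a /= np.linalg.norm(a) + 1e-12
    level = np.array([0.0, 0.0, 1.0])  # assumed device axis aligned with gravity when level
    return float(np.degrees(np.arccos(np.clip(np.dot(a, level), -1.0, 1.0))))

class ConsumptionGestureTimer:
    """Track how long the device stays in an assumed consumption posture."""
    def __init__(self, lo_deg: float = 35.0, hi_deg: float = 80.0):
        self.lo, self.hi, self.start = lo_deg, hi_deg, None

    def update(self, accel_xyz):
        in_posture = self.lo <= tilt_degrees(accel_xyz) <= self.hi
        now = time.monotonic()
        if in_posture and self.start is None:
            self.start = now                  # posture entered: begin timing
        elif not in_posture and self.start is not None:
            duration, self.start = now - self.start, None
            return duration                   # posture exited: duration may correlate
        return None                           # with the amount consumed
```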
WhileFIG.2shows the person202consuming the product206with the same hand/arm that is wearing the wearable device204, as discussed below with regard toFIG.3, the wearable device204may be used in a similar manner to determine that the person202is consuming the product206even when the wearable device204is worn on the opposite arm/hand (or is at or near another part of the person's body). FIG.3illustrates an example process300for detection and correction of eating behavior, in accordance with one or more example embodiments of the present disclosure. Referring toFIG.3, a person302wearing a wearable device304(e.g., having functionality as described with regard to the wearable device104ofFIG.1) may consume a product306(e.g., a food, nutritional product, or other type of consumable product) from a package307. With user consent, the wearable device304may capture audio308of the person302consuming the product (e.g., a chewing and/or swallowing sound) and/or from the package307(e.g., a crinkling sound of a bag, a sound of a top being popped or unscrewed, a sound of a box being ripped open, etc.). At block312of the process300, the wearable device304may detect the audio308(e.g., using one or more microphones or other audio sensors). The audio308may include chewing, swallowing, opening the package307, words spoken by the person302, sounds made by the product306(e.g., crunchy sounds, chewy sounds, etc.), or other audio. Still referring toFIG.3, the wearable device304optionally may consider its position, orientation, and/or movement. For example, at block314, the wearable device304may detect its position, movement, or orientation (e.g., angles, rotation, movement directions, etc.), and may determine that the wearable device304is in a position within a threshold range. Based on the audio308and optionally the position, movement, or orientation data, the wearable device304at block316may determine that the person302is consuming the product306(e.g., eating). For example, the wearable device304may determine that the audio308matches one or more sound profiles for various products or types of products (e.g., food, beverage, etc.). The sound profiles may be ranked (e.g., with respective scores indicating the likelihood that the sound profile matches the audio), and the sound profile with the highest score may be selected. The product or product type corresponding to the selected sound profile may be identified by the wearable device304as the product306. Still referring toFIG.3, at block318, the wearable device304may determine that additional data associated with a measurable attribute (e.g., heartrate, blood pressure, etc.) is to be captured. For example, when the product306is known (e.g., based on a product profile stored and accessed based on the corresponding product identifier) to be salty (e.g., if the product306is a potato chip), the wearable device304may determine corresponding characteristics of the product (e.g., increased blood pressure), and may determine data (e.g., blood pressure data) which may be associated with the effects of the characteristic. When a characteristic of a product having high sugar content is to increase blood glucose levels, the wearable device304may determine that blood glucose data may indicate the effects of consuming the sugary product.
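The characteristic-to-attribute determination of block318may be illustrated with simple lookup tables, shown below. The table entries and device names are hypothetical placeholders; in practice the mappings would be read from stored product profiles keyed by product identifier.

```python
# Illustrative lookup tables only; actual mappings would come from stored
# product profiles keyed by product identifier.
CHARACTERISTIC_TO_ATTRIBUTE = {
    "high_sodium": "blood_pressure",   # e.g., potato chips
    "high_sugar": "blood_glucose",     # e.g., candy
    "caffeinated": "heartrate",
    "acidic": "hydrogen_level",        # indigestion / acid reflux proxy
}
ATTRIBUTE_TO_CAPTURE_DEVICE = {
    "blood_pressure": "blood_pressure_monitor",
    "blood_glucose": "glucose_monitor",
    "heartrate": "heartrate_monitor",
    "hydrogen_level": "hydrogen_sensor",
}

def plan_measurements(characteristics):
    """Map product characteristics to (measurable attribute, capture device) pairs."""
    plan = []
    for characteristic in characteristics:
        attribute = CHARACTERISTIC_TO_ATTRIBUTE.get(characteristic)
        if attribute is not None:
            plan.append((attribute, ATTRIBUTE_TO_CAPTURE_DEVICE[attribute]))
    return plan

# Example: a salty product maps to blood pressure data from a blood pressure monitor.
print(plan_measurements(["high_sodium"]))  # [('blood_pressure', 'blood_pressure_monitor')]
```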
The wearable device304may determine that another device (e.g., the device108or the device110ofFIG.1) or an application (e.g., the application112ofFIG.1) is responsible for detecting or otherwise collecting data associated with a characteristic of the product306. At block320, when the wearable device304determines a characteristic of the product306and an associated type of data (e.g., additional data) which may measure the effects of the characteristic on the person302consuming the product306, the wearable device304may identify another device (e.g., the device108or the device110ofFIG.1) or an application (e.g., the application112ofFIG.1) responsible for capturing the associated type of data, and may request and receive the associated data. The additional data may include additional audio data, existing audio data, biomedical data, data from one or more other devices, and/or other types of data. The request for the data may include specification of a sampling rate or frequency with which to capture or otherwise detect the additional data. At block322, the wearable device may receive the additional data. In one or more embodiments, the request for additional data (e.g., biomedical data) sent by the wearable device304may be sent to another device (e.g., the user device106ofFIG.1), which may execute one or more applications (e.g., the application112ofFIG.1), which may collect biomedical and/or other data from other devices (e.g., the device108and/or the device110ofFIG.1). The other devices may include a blood glucose monitor, a heartrate monitor, electrodes, a hydration monitor, a hydrogen sensor, or other sensors or devices capable of detecting user data with a user's consent. In one or more embodiments, the wearable device304may include a wearable wireless device (e.g., bracelet, watch, glasses, ring, etc.) capable of capturing audio with one or more sensors (e.g., a microphone, not shown). The audio data may be analyzed by the wearable device304or another device to determine whether the audio308indicates that the person302is consuming the product306and what the product306is (e.g., the specific product as identified by type and brand, or a type or category of the product306such as food, beverage, medicine, nutritional product, fruit, vegetable, snack, candy, burger, chips, water, cola/soda, sugary drink, vitamin, etc.). In one or more embodiments, the wearable device304or another device may analyze the additional data, and may determine an association between the additional data of a measurable attribute and the product306, along with similar products (e.g., identified using the product identifier for the product306). The wearable device304or another device may store data associated with the person302and with the product306that indicates user reactions associated with the measurable attribute (e.g., changes in heartrate, blood sugar, hydration levels, breathing levels, heart waves, blood pressure, neurological data, and the like). In one or more embodiments, the wearable device304may generate one or more messages, alarms, or alerts based on the audio308and/or the additional data. 
For example, when the wearable device304determines from prior association data that the product306or a similar product causes a negative biomedical effect on the person302or another person, the wearable device304may generate a message or alarm intended to discourage the person302from consuming the product306, may recommend substitute products known to cause less of the biomedical effect (e.g., not associated with the consumable product's characteristics), and/or may send alerts to other devices to let other users know that the person302is consuming the product306known to cause a negative biomedical effect. The wearable device304may display nutritional information, including the content of sodium and fat, along with a recommendation to try a healthier product (e.g., to substitute water for a sugary drink). Messages and alerts may provide any combination of information related to the person's health (e.g., heartrate, blood glucose, blood pressure, etc.), information about what the person302is consuming (e.g., nutritional information, health-related effects, etc.), messages to encourage or discourage consumption of certain products, offers for similar or substitute products, notifications of locations where products may be purchased, notifications regarding a user's exercise habits and/or exercise options and locations, and the like. In one or more embodiments, with user consent, recording the audio308or other data by the wearable device304may activate or change based on a time, location, position, or movement of the wearable device304. For example, when a time of day is within a selected or known meal time (e.g., a range of time in the morning for breakfast, a range of time in the afternoon for lunch, a range of time in the evening for dinner), the wearable device304may activate recording or increase sampling frequency. In this manner, activation may include initiating or powering on one or more components of the wearable device304(e.g., such as microphones or other audio sensors), and may include adjusting a sampling rate or frequency. When the wearable device304is in a position or orientation associated with consumption of the product306, the wearable device304may activate recording or increase sampling frequency. For example, using accelerometer, magnetometer, or other device data, the wearable device304may determine that a user's arm or hand is at an angle (e.g., within an angular range) with respect to one or more additional sensors on other devices and/or with respect to gravity known to be associated with bringing a consumable product to a user's face for consumption. In one or more embodiments, with user consent, when the wearable device304is in such a position or orientation, the wearable device304may activate a timer to determine the duration that the wearable device304is in the position or orientation. The timer may capture time indicating a duration (e.g., how long a user is consuming a product), which may be correlated with an amount of product consumption (e.g., the longer the duration, the more product is consumed). The wearable device304may deactivate recording or decrease sampling frequency when the wearable device304determines that the person302is no longer consuming the product (e.g., the audio308stops and/or no additional sound is identified from the package307). The wearable device304may determine the time at which a user may be consuming a product, and may generate messages based on the time (e.g., to not eat in between meals).
The wearable device304may use global navigation satellite system data, Wi-Fi data, Bluetooth data, ultrasound data, accelerometer data, magnetometer data, or other data to identify its location. The wearable device304may determine (e.g., using a map or other type of application executable on the wearable device304) whether the device's current location is at or near (e.g., within a distance threshold of) a restaurant or other provider of consumable products, and may generate offers, incentives, alternative options, or messages discouraging the consumption of certain products. FIG.4Aillustrates a flow diagram for a process400for detection and correction of eating behavior, in accordance with one or more example embodiments of the present disclosure. At block402, a device (e.g., the wearable device104ofFIG.1), with user consent, may receive audio data associated with the consumption of a consumable product (e.g., the product206ofFIG.2, the product306and/or the package307ofFIG.3). For example, with user consent, the device may record audio with one or more audio sensors (e.g., microphones). The recording may be constant, periodic, based on user input, based on times of day, based on movement, position, or orientation of the device, based on the location of the device (e.g., global positioning coordinates), or the like. In one example, a user may be drinking a liquid with one hand, and wearing the device on the other hand/arm. In another example, a user may be eating a product with one hand, and wearing the device on the other hand/arm. In another example, a user may be drinking a liquid with the same hand/arm that is wearing the device. In another example, a user may be eating a product with the same hand/arm that is wearing the device. In any scenario, the device may capture audio such as chewing, swallowing, opening a package or container (e.g., a bottle, can, bag, box, jar, etc.), opening or closing a refrigerator or microwave, or audio of a person talking (e.g., audio including keywords regarding the consumption of a product or location where consumable products may be sold). The device may be worn on a hand, arm, leg, ankle, around the head or neck, or at another location of a body. The product may include any combination of food, beverage, medical products, nutritional products, or any other consumable product. At block404, the device may determine that the audio data matches an audio profile for a consumable product or multiple consumable products. For example, the sound of one consumable product may be different when combined with another consumable product. The audio data may be converted to a sound profile. For example, a sound profile may include a frequency distribution of captured audio signals over time. A device may compare the sound profile to known sound profiles of consumable products. For example, the crunch of potato chips may match a known sound profile for potato chips. The crisp sound of a user biting into an apple may have a distinct sound profile, as may the sound of swallowing a liquid, opening a carbonated beverage or bag, opening and closing a refrigerator, an active microwave, and the like. Audio profiles of consumable products may be differentiated from audio profiles of other types of noises or sounds, such as talking (e.g., voice) or certain types of background noise (e.g., sounds of musical instruments, automobiles, computer devices, etc.).
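Before profile matching, the conversion of raw audio to a sound profile with background frames suppressed may be sketched as follows; the frame size, window, and gate threshold are illustrative assumptions, and the profile here is simply an average magnitude spectrum over the retained frames.

```python
import numpy as np

def sound_profile(samples: np.ndarray, frame: int = 1024,
                  gate_db: float = -50.0) -> np.ndarray:
    """Build a coarse sound profile: per-frame magnitude spectra of the audio,
    with near-silent frames gated out as likely background, then averaged."""
    n_frames = len(samples) // frame
    frames = samples[: n_frames * frame].reshape(n_frames, frame)
    spectra = np.abs(np.fft.rfft(frames * np.hanning(frame), axis=1))
    level_db = 20.0 * np.log10(spectra.mean(axis=1) + 1e-12)
    voiced = spectra[level_db > gate_db]   # keep frames loud enough to matter
    if len(voiced) == 0:
        return np.zeros(frame // 2 + 1)    # nothing but background noise
    return voiced.mean(axis=0)             # average spectrum serves as the profile
```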
Machine learning using neural networks (e.g., the one or more machine learning modules142ofFIG.1) or other types of machines may be used to identify sounds and words indicating when a user is consuming a product, is about to consume a product, or has recently consumed a product. Using sound profiles, the device may determine a specific product or type of product (or combination of products) that a person may be consuming. At block406, the device may determine a characteristic of the consumable product or products. For example, when the device determines the product that is being consumed, the device may identify the product and an associated product identifier (or multiple identifiers for a combination of products). The product identifier may be stored on the device or elsewhere (e.g., the user device106or the one or more servers140ofFIG.1) in addition to characteristics of the product, such as nutrition content, ingredients, product categories, health effects (e.g., changes to blood sugar, blood pressure, cholesterol, heartrate, perspiration, breathing rate, acid reflux, indigestion, fatigue, blood flow, blood alcohol level, medical side effects, etc.). At block408, the device may determine, based on the characteristic, one or more measurable attributes of a user to assess. For example, when a characteristic of a product is associated with impacting heartrate, the measurable attribute may be a user's heartrate. When a characteristic is a change in blood pressure, the measurable attribute may be a user's blood pressure. When a characteristic is blood sugar, the measurable attribute may be a user's blood sugar. When the characteristic is blood alcohol, the measurable attribute may be a user's blood alcohol level. When the characteristic is a hydration level, the measurable attribute may be a user's hydration or perspiration. When the characteristic is indigestion or heartburn, the measurable attribute may be a user's hydrogen level. When the characteristic is a known medical side effect, the measurable attribute may be any combination of readings which may indicate whether the side effect is occurring (e.g., breathing, allergic reactions, arrhythmias, etc.). The device may determine another device (e.g., the device108or the device110ofFIG.1) and/or application (e.g., the application112ofFIG.1) which may detect and/or collect data measuring the measurable attribute. For example, when the measurable attribute is a user's heartrate, the device may identify a heartrate monitor and/or an application which may collect heartrate data of a user. When the measurable attribute is blood sugar, the device may identify a glucose monitor and/or application which collects blood glucose data. At block410, with user consent, the device may obtain data based on the measurable attribute. In particular, the device may send a request for the data associated with the measurable attribute. For example, when the measurable attribute is heartrate, the request may indicate that heartrate data is requested. When the measurable attribute is blood sugar, the request may indicate that blood sugar data is requested. The request may provide parameters, such as the time/duration of the recorded data and the sampling rate or frequency at which to capture it. For example, the device which detects or captures the data for the measurable attribute (e.g., a heartrate monitor which captures heartrate data) may capture or detect the data at a sampling rate or frequency.
Because of the determination that a user may be consuming a product, the requesting device may request sampling at higher sampling rates or frequencies to collect more data for analysis. The request may indicate multiple sampling rates and frequencies based on different times (e.g., a first sampling rate or frequency at one time, and a second sampling rate or frequency at another time). The measurable attribute may be additional audio data at the same frequency or sampling rate as the previously received audio data, or may be additional audio data at a different frequency or sampling rate. The device may receive the data from the capturing device, or from another device which collects the data (e.g., the user device106or the one or more servers140ofFIG.1). The device may specify the time for the data to be delivered and in what format. The device may receive the data according to the requested time and/or format. The request may indicate a device or application associated with capturing and/or providing the requested data. At block412, with user consent, the device may analyze the data. For example, the device may determine that the data associated with the measurable attribute is further associated with the consumable product. For example, the device may analyze the effects that the consumable product has on a user, and may provide an indication of the effects to be stored with the product identifier for future use. When the effects are a change in heartrate, blood pressure, blood sugar, breathing rate, acid reflux, blood alcohol, perspiration, or the like, the device may associate the effects with the product that was consumed. In this manner, when the device determines a characteristic of a product (e.g., at block406), the device may consider the effects as characteristics of the product or similar products. The device may determine whether the data confirms the characteristic of the product and/or whether the product was correctly identified. The device may determine that the data exceeds a threshold (e.g., a heartrate threshold, a blood pressure threshold, a blood sugar threshold, an electrocardiography threshold, other biomedical thresholds, a threshold time associated with consumption, etc.). Based on the data exceeding the threshold, the device may determine that consumption of a product has caused the consumer's biomedical data to change, that the user is consuming a product outside of a time range, that the user may be consuming too much or too little of the product, that the user is swallowing or chewing too much or too little, and the like. The messages may be generated to indicate such findings, to recommend adjustments to consumption habits, and the like. At block414, with user consent, the device may perform one or more actions based on the data. The device may generate, based on the data, one or more messages for presentation. For example, the messages may display nutrition or other health information related to the product, or a representation of the data for the measurable attribute (e.g., an indication that the data shows the user's blood sugar, heartrate, etc. are affected by consumption of the product). The one or more messages may provide product recommendations for similar or different products, may recommend consumption adjustments, may notify other devices that the person is consuming the product at a particular time, may notify the user of exercise options, and more.
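The threshold comparison described above may be sketched as follows. The attribute names and limit values are illustrative assumptions; real thresholds would be user-specific and clinically informed.

```python
# Illustrative per-attribute limits; real thresholds would be user-specific
# and clinically informed.
THRESHOLDS = {
    "heartrate_bpm": 100.0,
    "blood_glucose_mg_dl": 140.0,
    "systolic_bp_mmhg": 135.0,
}

def evaluate_readings(attribute: str, readings) -> str | None:
    """Compare captured readings against a threshold and, when exceeded,
    return a message the device could present to the user."""
    limit = THRESHOLDS.get(attribute)
    if limit is None or not readings:
        return None
    peak = max(readings)
    if peak > limit:
        return (f"{attribute} peaked at {peak:.0f} (threshold {limit:.0f}) after "
                f"consumption; consider a substitute product.")
    return None

# Example: heartrate readings sampled after a caffeinated product was detected.
print(evaluate_readings("heartrate_bpm", [72, 88, 104]))
```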
The device may request additional information for analysis at the same or different frequency or sampling rates. For example, the device may request additional audio data, time data, device data (e.g., orientation data, movement data, location data, etc.). The device may send instructions to other devices (e.g., user devices, smart home devices, exercise equipment, microwaves, refrigerators, etc.), such as instructions to change or stop detection of data, instructions to display messages (e.g., requesting that a person adjust behavior or consumption habits), instructions to log that a user is consuming the product at a given time, etc. FIG.4Billustrates a flow diagram for a process450for detection and correction of eating behavior, in accordance with one or more example embodiments of the present disclosure. At block452, a device (e.g., the wearable device104ofFIG.1) may determine identifiers associated with a consumable product (e.g., the product206ofFIG.2, the product306ofFIG.3), packaging (e.g., the package307ofFIG.3), and/or environmental sounds (e.g., background noises, sounds of devices such as refrigerators, microwaves, cooking appliances, product storage, etc.). For example, when audio data matches a sound profile associated with consumption of a product, the product may have a product identifier. Packaging, such as bottles, cans, bags, boxes, containers, etc. may have product identifiers. Environmental sounds may have identifiers. Any combination of identifiers may be assessed by the device to determine that someone is consuming a product or type of product. For example, the combination of a chewing or swallowing sound and a package sound may indicate a product or type of product. The device may identify certain background noises and cancel the background noises, allowing for analysis on other sounds that may be more relevant to product consumption. The device or another device (e.g., the user device106and/or the one or more servers140ofFIG.1) may store and/or access data including related or different products. When a product has been identified (e.g., based on an audio profile match from received audio data of a person consuming the product or discussing the consumption of the product), the device may identify the product identifier of the product. For example, the matching sound profile for the product identified as being consumed may be stored on the device or the other device with data such as the product identifier, characteristics of the product, measurable attributes of the product, effects that the product has had on one or more users, and the like. At block454, the device may determine, based on the product identifier, one or more similar products (e.g., having one or more of the same characteristics or effects on a user as the identified product, products having one or more of the same ingredients, products having nutritional content within a range of the product, etc.) and/or one or more different products (e.g., products not having one or more of the same characteristics or user effects as the product, products known to cause the opposite effects of a user, products having nutritional content outside of a range of the product, etc.). For example, given a product identifier, the device or the other device may identify other products having similar characteristics (e.g., health characteristics, nutritional content, types of products, a same brand, same effects on a person's health, such as decreased heartrate or blood pressure, etc.) 
or substitute products (e.g., healthier products not known to cause the same level of effects such as heartrate or blood pressure changes, products with less content of certain ingredients such as sugar or fat, etc.). At block456, with user consent, the device may generate one or more messages for presentation (e.g., using the device or another device) based on one or more similar products or one or more different products. For example, as shown inFIG.1, when the identified product is potato chips, the device may display a recommendation for a substitute product such as an apple. In such an example, the apple may be one of multiple products identified as having different characteristics, effects, ingredients, nutritional content, etc. from potato chips. The device may identify the potato chip product identifier (e.g., a specific potato chip product or a categorical product identifier for potato chips in general), and based on characteristics, measurable attributes, content, and/or effects stored in association with the product identifier, may find a corresponding product identifier stored with similar or different characteristics, measurable attributes, content, and/or effects. The device may select one or more products or product categories (e.g., fruit instead of a less healthy product), and the messages may include a recommendation, offer, incentive, or nutritional or health information for the similar or different product. For example, the messages may indicate similar products, similar products that are healthier, products that may be purchased nearby (e.g., within a distance threshold), substitute products, nutritional information for other products (e.g., content of nutritional ingredients such as sodium, fat, sugar, etc.), and/or health information (e.g., effects that a product may have on biomedical data such as heart rate, blood pressure, blood sugar, body temperature, etc.). The messages may be displayed using one or more methods, including text, graphs, audio, vibrations, and the like. Referring toFIG.4B, the process450may refer to one or more steps associated with block414ofFIG.4A. FIG.4Cillustrates a flow diagram for a process470for detection and correction of eating behavior, in accordance with one or more example embodiments of the present disclosure. At block472, with user consent, a device (e.g., the wearable device104ofFIG.1) may determine a time and/or duration of a consumption of a consumable product (e.g., the product206ofFIG.2, the product306ofFIG.3). The time may refer to the time of day. For example, the device may determine the time when any portion of captured audio (e.g., the audio208ofFIG.2, the audio308ofFIG.3) occurred. The duration may refer to an entire time of a captured clip of audio, or to a portion of audio beginning when the device determines that the user is consuming a product and ending when the device determines that the user is no longer consuming a product (e.g., when the audio no longer matches a sound profile associated with consumption of a product). At block474, with user consent, the device may determine whether the time is within a time range. For example, certain time ranges may be associated with times when a person is expected to consume a product (e.g., meal times, times to take medicine or nutritional products, times to eat or drink based on a user's health, times to eat or drink based on a user's schedule, times to eat or drink based on a user's exercise habits, times selected by a user, etc.).
When the device determines that the consumption is within a time range (e.g., normal meal hours, times to take medicine, etc.), the device may continue to block476. When the user is consuming a product outside of normal times (e.g., a late-night snack, taking medicine too soon, eating or drinking when the user's health or medical treatment does not allow eating or drinking), the device may proceed to block478. At block476, with user consent, the device may generate one or more messages with information about one or more products. For example, if the product being consumed is associated with positive effects on the user, the messages may include offers or incentives to consume and/or purchase more of the product or similar products. The messages may encourage a user to continue to consume such products within the time range. The messages may include information or offers regarding substitute (e.g., healthier) products. The device may indicate that a user is chewing too loudly or talking while eating. At block478, with user consent, the device may generate one or more messages discouraging consumption. For example, the messages may indicate that a user should not consume during this time, may suggest substitute products to consume, may sound alarms, and/or may notify other devices that the user is consuming a product outside of the approved time range. At block480, the device may determine whether the duration exceeds a threshold duration. For example, when a consumption time is limited to a threshold duration (e.g., thirty minutes to complete a meal), and the detected duration exceeds that threshold, the device may determine that the user is consuming too much. A drink of a liquid may be associated with a duration, which may correspond to an amount of liquid. For example, when a user is supposed to drink a certain amount of water, the duration may indicate whether the user consumed that amount, consumed too much, or consumed too little. The duration may be set based on user preferences and/or schedules, based on known meal times, or based on quantities of products to consume at a given time. When the duration is within the threshold duration, the device may proceed to block482. When the duration exceeds the threshold duration, the device may continue to block484. At block482, with user consent, the device may generate one or more messages encouraging a user to maintain the consumption level associated with the duration and/or to increase consumption. For example, when a user is expected to drink a certain volume of water associated with the duration, and the user's consumption time for the liquid is below the threshold duration, the device may generate messages encouraging the user to drink more water. When the user is determined to be eating for a time within the threshold duration, the device may determine that the user has not eaten too much or too long, and may generate messages encouraging the user to continue to eat within the threshold duration of time. At block484, with user consent, the device may generate one or more messages encouraging a user to reduce consumption. For example, if the user is determined to be eating a product for a duration longer than the threshold duration, the device may generate messages reminding a user of the duration, indicating the nutritional content of the product, or indicating the effects that the product may have on one or more measurable attributes. Referring toFIG.4C, the process470may refer to one or more steps associated with block414ofFIG.4A.
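The time-range check of blocks474-478and the duration check of blocks480-484may be sketched together as follows, assuming illustrative meal windows and a thirty-minute threshold duration; in practice these values could be preset, user-selected, or derived from calendar data.

```python
from datetime import datetime, time

# Assumed meal windows and duration limit; in practice these could be preset,
# user-selected, or derived from calendar data.
MEAL_WINDOWS = [(time(7, 0), time(9, 0)),
                (time(12, 0), time(13, 30)),
                (time(18, 0), time(20, 0))]
THRESHOLD_DURATION_S = 30 * 60  # e.g., thirty minutes to complete a meal

def within_meal_window(moment: datetime) -> bool:
    return any(start <= moment.time() <= end for start, end in MEAL_WINDOWS)

def consumption_feedback(start: datetime, duration_s: float) -> str:
    """Mirror blocks 474-484: check the time range first, then the duration."""
    if not within_meal_window(start):
        return "Consumption detected outside usual meal times; consider waiting."
    if duration_s > THRESHOLD_DURATION_S:
        return "Meal duration exceeded the threshold; consider reducing consumption."
    return "Consumption within the expected time and duration."

# Example: a twenty-minute meal beginning at 12:15 stays within both limits.
print(consumption_feedback(datetime(2024, 1, 5, 12, 15), duration_s=20 * 60))
```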
FIG.4Dillustrates a flow diagram for a process490for detection and correction of eating behavior, in accordance with one or more example embodiments of the present disclosure. At block492, with user consent, a device (e.g., the wearable device104ofFIG.1) may identify one or more devices (e.g., the user device106, the one or more devices150ofFIG.1). For example, block492represents actions corresponding to block414ofFIG.4A. When the device has identified that a user is consuming a product, has obtained data for a measurable attribute of the product, and has analyzed the data, the device may identify one or more devices to which to send instructions or messages. At block494, the device may generate one or more instructions based on the data for the product. For example, when the device determines that a user is consuming a product, the device may generate a notification for a smart refrigerator or other smart device (e.g., a smart microwave) to log the consumption, to display or otherwise output messages requesting that the person adjust consumption behavior, or to take other actions (e.g., to prevent dispensing a product, to block access to a product, to change or stop operation, etc.). The instructions may indicate to a device to stop recording or otherwise detecting data (e.g., stop recording audio, stop or change collection of biomedical data, etc.). At block496, the device may send the one or more instructions. The instructions may be sent directly to a device, or may be sent through another device (e.g., the user device106, the one or more servers140ofFIG.1). FIG.5illustrates a flow diagram for a process500for detection and correction of eating behavior, in accordance with one or more example embodiments of the present disclosure. At block502, with user consent, a device (e.g., the wearable device104ofFIG.1) may determine a time of day when a user is to consume a product (e.g., the product206ofFIG.2, the product306ofFIG.3). The time of day may correspond to meal times, times when a user is to take a medicinal or other health product, times set based on a user's schedule, times when a user is known to perform activities (e.g., exercising), or other times. The times may be preset, selected by users, or based on data (e.g., calendar data from an application executing on the device or another device). At block504, with user consent, the device may activate one or more sensors on the device. Activating may include powering on a sensor or changing the operating state of a sensor, such as by modifying a sampling rate or frequency. For example, when the sensors are microphones, the device may activate a microphone by powering on the microphone and/or by setting a sampling rate or frequency with which to capture audio. The sensors may capture audio at one sampling rate or frequency, and based on the time of day, the device may change (e.g., increase) the sampling rate or frequency for more data to analyze over a time period. At block506, with user consent, the device may receive data captured by the one or more sensors. Captured audio data may be analyzed by the device or sent to another device for analysis. The audio data may be converted to a sound profile. For example, a sound profile may include a frequency distribution of captured audio signals over time. A device may compare the sound profile to known sound profiles of consumable products. For example, the crunch of potato chips may match a known sound profile for potato chips.
The crisp sound of a user biting into an apple may have a distinct sound profile, as may the sound of swallowing a liquid, opening a carbonated beverage or bag, opening and closing a refrigerator, an active microwave, and the like. Audio profiles of consumable products may be differentiated from audio profiles of other types of noises or sounds, such as talking (e.g., voice) or certain types of background noise (e.g., sounds of musical instruments, automobiles, computer devices, etc.). Machine learning using neural networks or other types of machines may be used to identify sounds and words indicating when a user is consuming a product, is about to consume a product, or has recently consumed a product. Using sound profiles, a device may determine a specific product or type of product that a person may be consuming. At block508, the device may determine that the user is no longer consuming the product. For example, the device may determine, based on captured audio, that the user is no longer consuming a product when the captured audio no longer matches audio associated with a known product. At block510, the device may deactivate the one or more sensors. For example, the device may deactivate (e.g., lower the power, sampling rate, or frequency of) the one or more sensors when the time of day (or time period of day) has passed. FIG.6illustrates a block diagram of an example of a machine600(e.g., implemented in whole or in part by the wearable device104ofFIG.1, the user device106ofFIG.1, the one or more servers140ofFIG.1, the device108ofFIG.1, the device110ofFIG.1, the one or more devices150ofFIG.1, the wearable device204ofFIG.2, the wearable device304ofFIG.3) or system upon which any one or more of the techniques (e.g., methodologies) discussed herein may be performed. In other embodiments, the machine600may operate as a standalone device or may be connected (e.g., networked) to other machines. In a networked deployment, the machine600may operate in the capacity of a server machine, a client machine, or both in server-client network environments. In an example, the machine600may act as a peer machine in Wi-Fi direct, peer-to-peer (P2P) (or other distributed) network environments. The machine600may be a wearable device or any machine capable of executing instructions (sequential or otherwise) that specify actions to be taken by that machine. Further, while only a single machine is illustrated, the term “machine” shall also be taken to include any collection of machines that individually or jointly execute a set (or multiple sets) of instructions to perform any one or more of the methodologies discussed herein, such as cloud computing, software as a service (SaaS), or other computer cluster configurations. Examples, as described herein, may include or may operate on logic or a number of components, modules, or mechanisms. Modules are tangible entities (e.g., hardware) capable of performing specified operations when operating. A module includes hardware. In an example, the hardware may be specifically configured to carry out a specific operation (e.g., hardwired). In another example, the hardware may include configurable execution units (e.g., transistors, circuits, etc.) and a computer readable medium containing instructions where the instructions configure the execution units to carry out a specific operation when in operation. The configuring may occur under the direction of the execution units or a loading mechanism.
Accordingly, the execution units are communicatively coupled to the computer-readable medium when the device is operating. In this example, the execution units may be a member of more than one module. For example, under operation, the execution units may be configured by a first set of instructions to implement a first module at one point in time and reconfigured by a second set of instructions to implement a second module at a second point in time. The machine (e.g., computer system)600may include any combination of the illustrated components. For example, the machine600may include a hardware processor602(e.g., a central processing unit (CPU), a graphics processing unit (GPU), a hardware processor core, or any combination thereof), a main memory604, and a static memory606, some or all of which may communicate with each other via an interlink (e.g., bus)608. The machine600may further include a power management device632, a graphics display device610, an alphanumeric input device612(e.g., a keyboard), and a user interface (UI) navigation device614(e.g., a mouse). In an example, the graphics display device610, alphanumeric input device612, and UI navigation device614may be a touch screen display. The machine600may additionally include a storage device (i.e., drive unit)616, a signal generation device618(e.g., for a biomedical data signal or other data signal), a consumption regulation device619, a network interface device/transceiver620coupled to antenna(s)630, and one or more sensors628, such as a sound detecting sensor (e.g., a microphone), one or more electromyography sensors (e.g., to detect swallowing), accelerometers, magnetometers, location sensors, and the like. When using multiple sensors628, the sensors may be arranged to detect sounds in different directions and at different distances. The machine600may include an output controller634, such as a serial (e.g., universal serial bus (USB)), parallel, or other wired or wireless (e.g., infrared (IR), near field communication (NFC), etc.) connection to communicate with or control one or more peripheral devices (e.g., a printer, a card reader, other sensors, etc.). The storage device616may include a machine readable medium622on which is stored one or more sets of data structures or instructions624(e.g., software) embodying or utilized by any one or more of the techniques or functions described herein. The instructions624may also reside, completely or at least partially, within the main memory604, within the static memory606, or within the hardware processor602during execution thereof by the machine600. In an example, one or any combination of the hardware processor602, the main memory604, the static memory606, or the storage device616may constitute machine-readable media. The consumption regulation device619may carry out or perform any of the operations and processes (e.g., process400ofFIG.4A, process450ofFIG.4B, process470ofFIG.4C, process490ofFIG.4D, process500ofFIG.5) described and shown above. In one or more embodiments, the consumption regulation device619may be implemented as a wearable device (e.g., the wearable device104ofFIG.1) or as a medical device (e.g., the device108or the device110ofFIG.1). The consumption regulation device619may record audio (with a user's consent) using one or more audio sensors (e.g., the one or more sensors628). Captured audio data may be analyzed by the consumption regulation device619or sent to another device (e.g., the user device106or the one or more servers140ofFIG.1) for analysis.
In one or more embodiments, the consumption regulation device619may be implemented in a user device (e.g., the user device106) or a server device (e.g., the one or more servers140ofFIG.1). The consumption regulation device619may convert audio data to a sound profile. For example, a sound profile may include a frequency distribution of captured audio signals over time. The consumption regulation device619may compare the sound profile to known sound profiles of consumable products. The consumption regulation device619may differentiate audio profiles of consumable products from audio profiles of other types of noises or sounds, such as talking (e.g., voice) or certain types of background noise (e.g., sounds of musical instruments, automobiles, computer devices, etc.). The consumption regulation device619may be used to identify sounds and words to identify when a user is consuming a product, is about to consume a product, or has recently consumed a product. Using sound profiles, the consumption regulation device619may determine a specific product or type of product that a person may be consuming. In one or more embodiments, the consumption regulation device619may determine characteristics of a product once the product has been identified. For example, a cheeseburger may have high cholesterol and may trigger a higher blood pressure for a person, as may potato chips or other foods known to be salty. Candy may include sugar which may cause an increase in a person's blood glucose levels. Spicy or acidic products may cause indigestion or acid reflux. A caffeinated product may increase a person's heart rate. When the consumption regulation device619determines the product or type of product that a person may be consuming, the device may determine corresponding characteristics of the product, and may determine data which may be associated with the effects of the characteristics. For example, if a characteristic of a sugary food or drink is to increase blood glucose levels, the consumption regulation device619may determine that blood glucose data may indicate the effects of consuming the sugary food or drink. Because caffeine products are known to increase heartrate, the consumption regulation device619may determine that monitoring a user's heartrate may provide an indication of the effects of consuming caffeine. In one or more embodiments, the consumption regulation device619(e.g., when implemented on the wearable device104ofFIG.1) may determine that another device or an application is responsible for detecting or otherwise collecting data associated with a characteristic of a consumable product. For example, a blood glucose monitor may measure blood glucose levels. A heartrate monitor may capture heartrate data. A hydration sensor may measure a user's dehydration. An accelerometer, a magnetometer, wireless signals (e.g., Bluetooth or Wi-Fi signals), or global navigation satellite system signals may be used (with a user's consent) to determine a device's motion or location, and the motion or location data may confirm if the user is at a location (e.g., a restaurant) or moving (e.g., motioning an arm or hand toward the face) in a manner which indicates a likely consumption of a product (e.g., and may be used to supplement audio data for the purpose of determining when a user is consuming a product). A hydrogen sensor may measure a user's indigestion.
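As a hedged sketch of the sound-profile processing described above, the following Python treats a sound profile as a normalized frequency distribution of captured audio and compares it against stored profiles of known consumable products. The band count, similarity threshold, and synthetic stand-in signals are assumptions chosen for illustration; an actual consumption regulation device619could use different representations (e.g., profiles over time) or trained models instead.

import numpy as np

def sound_profile(samples, bands=32):
    # A sound profile here is a frequency distribution of the captured
    # audio: FFT magnitudes grouped into coarse bands and normalized.
    spectrum = np.abs(np.fft.rfft(samples))
    banded = np.array([chunk.sum() for chunk in np.array_split(spectrum, bands)])
    return banded / (banded.sum() + 1e-12)

def match_product(profile, known_profiles, threshold=0.8):
    # Compare against known sound profiles of consumable products and
    # return the best match, or None when nothing matches well enough
    # (e.g., talking or background noise).
    best_name, best_score = None, threshold
    for name, known in known_profiles.items():
        score = float(np.dot(profile, known) /
                      (np.linalg.norm(profile) * np.linalg.norm(known) + 1e-12))
        if score > best_score:
            best_name, best_score = name, score
    return best_name

# Synthetic stand-ins for captured audio: broadband "crunch" noise and a
# high-pitched "carbonation" tone, one second each at 16 kHz.
rate = 16000
t = np.arange(rate) / rate
crunch = np.random.default_rng(0).normal(size=rate)
fizz = np.sin(2 * np.pi * 4000 * t)
known = {"apple_bite": sound_profile(crunch), "soda_open": sound_profile(fizz)}
print(match_product(sound_profile(crunch + 0.1 * fizz), known))  # apple_bite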
When the consumption regulation device619determines a characteristic of a consumable product and an associated type of data which may measure the effects of the characteristic on a person consuming the consumable product, the consumption regulation device619may identify another device or an application responsible for capturing the associated type of data, and may request the associated data. The request for the data may include specification of a sampling rate or frequency. For example, the consumption regulation device619may request that another device provide data captured at a particular rate or frequency (e.g., a higher sampling rate or frequency than normal). Doing so may allow devices to conserve power and resources (e.g., by not sampling at higher rates or frequencies unless a user is consuming something). In one or more embodiments, with a user's consent, the consumption regulation device619may help a user regulate their intake of consumable products and may provide recommendations for products, when to consume or not consume, locations where consumable products are available, nutritional information, warnings/alerts, alarms to medical professionals or other parties or devices, and the like. For example, when the consumption regulation device619detects that a user is eating food late at night (e.g., outside of a normal window of time associated with eating meals), the consumption regulation device619may present alarms or messages encouraging the user to eat something healthier or to wait until the next meal, or to indicate the effects that consuming a product may have on the person. The consumption regulation device619may provide recommendations of healthier products to substitute, such as substituting fruit and vegetables for a less healthy product. In one or more embodiments, the consumption regulation device619may be implemented at the one or more servers140ofFIG.1to receive data from the wearable device104or the user device106ofFIG.1. For example, the consumption regulation device619may receive captured audio data and determine that the user is consuming one or more products. The consumption regulation device619, implemented at the one or more servers140, the wearable device104, and/or the user device106ofFIG.1may determine characteristics and measurable attributes of a product. The consumption regulation device619may request additional data for the measurable attributes, including by specifying a frequency, sampling rate, time, and/or format of the data. In one or more embodiments, the consumption regulation device619may be implemented at the one or more devices150ofFIG.1. For example, the consumption regulation device619may record data of a user such as biomedical data, exercise data, consumption data, product inventory (e.g., a smart refrigerator or freezer), and may send and receive data associated with consumption recommendations, exercise recommendations, etc. It is understood that the above are only a subset of what the consumption regulation device619may be configured to perform and that other functions included throughout this disclosure may also be performed by the consumption regulation device619. While the machine-readable medium622is illustrated as a single medium, the term “machine-readable medium” may include a single medium or multiple media (e.g., a centralized or distributed database, and/or associated caches and servers) configured to store the one or more instructions624. Various embodiments may be implemented fully or partially in software and/or firmware.
This software and/or firmware may take the form of instructions contained in or on a non-transitory computer-readable storage medium. Those instructions may then be read and executed by one or more processors to enable performance of the operations described herein. The instructions may be in any suitable form, such as but not limited to source code, compiled code, interpreted code, executable code, static code, dynamic code, and the like. Such a computer-readable medium may include any tangible non-transitory medium for storing information in a form readable by one or more computers, such as but not limited to read only memory (ROM); random access memory (RAM); magnetic disk storage media; optical storage media; a flash memory, etc. The term “machine-readable medium” may include any medium that is capable of storing, encoding, or carrying instructions for execution by the machine600and that cause the machine600to perform any one or more of the techniques of the present disclosure, or that is capable of storing, encoding, or carrying data structures used by or associated with such instructions. Non-limiting machine-readable medium examples may include solid-state memories and optical and magnetic media. In an example, a massed machine-readable medium includes a machine-readable medium with a plurality of particles having resting mass. Specific examples of massed machine-readable media may include non-volatile memory, such as semiconductor memory devices (e.g., electrically programmable read-only memory (EPROM), or electrically erasable programmable read-only memory (EEPROM)) and flash memory devices; magnetic disks, such as internal hard disks and removable disks; magneto-optical disks; and CD-ROM and DVD-ROM disks. The instructions624may further be transmitted or received over a communications network626using a transmission medium via the network interface device/transceiver620utilizing any one of a number of transfer protocols (e.g., frame relay, internet protocol (IP), transmission control protocol (TCP), user datagram protocol (UDP), hypertext transfer protocol (HTTP), etc.). Example communications networks may include a local area network (LAN), a wide area network (WAN), a packet data network (e.g., the Internet), mobile telephone networks (e.g., cellular networks), plain old telephone (POTS) networks, wireless data networks (e.g., Institute of Electrical and Electronics Engineers (IEEE) 802.11 family of standards known as Wi-Fi®, IEEE 802.16 family of standards known as WiMax®), IEEE 802.15.4 family of standards, and peer-to-peer (P2P) networks, among others. In an example, the network interface device/transceiver620may include one or more physical jacks (e.g., Ethernet, coaxial, or phone jacks) or one or more antennas to connect to the communications network626. In an example, the network interface device/transceiver620may include a plurality of antennas to wirelessly communicate using at least one of single-input multiple-output (SIMO), multiple-input multiple-output (MIMO), or multiple-input single-output (MISO) techniques. The term “transmission medium” shall be taken to include any intangible medium that is capable of storing, encoding, or carrying instructions for execution by the machine600and includes digital or analog communications signals or other intangible media to facilitate communication of such software. The operations and processes described and shown above may be carried out or performed in any suitable order as desired in various implementations. 
Additionally, in certain implementations, at least a portion of the operations may be carried out in parallel. Furthermore, in certain implementations, less than or more than the operations described may be performed. The word “exemplary” is used herein to mean “serving as an example, instance, or illustration.” Any embodiment described herein as “exemplary” is not necessarily to be construed as preferred or advantageous over other embodiments. The terms “computing device,” “user device,” “communication station,” “station,” “handheld device,” “mobile device,” “wireless device” and “user equipment” (UE) as used herein refer to a wireless communication device such as a cellular telephone, a smartphone, a tablet, a netbook, a wireless terminal, a laptop computer, a femtocell, a high data rate (HDR) subscriber station, an access point, a printer, a point of sale device, an access terminal, or other personal communication system (PCS) device. The device may be either mobile or stationary. As used within this document, the term “communicate” is intended to include transmitting, or receiving, or both transmitting and receiving. This may be particularly useful in claims when describing the organization of data that is being transmitted by one device and received by another, but only the functionality of one of those devices is required to infringe the claim. Similarly, the bidirectional exchange of data between two devices (both devices transmit and receive during the exchange) may be described as “communicating,” when only the functionality of one of those devices is being claimed. The term “communicating” as used herein with respect to a wireless communication signal includes transmitting the wireless communication signal and/or receiving the wireless communication signal. For example, a wireless communication unit, which is capable of communicating a wireless communication signal, may include a wireless transmitter to transmit the wireless communication signal to at least one other wireless communication unit, and/or a wireless communication receiver to receive the wireless communication signal from at least one other wireless communication unit. As used herein, unless otherwise specified, the use of the ordinal adjectives “first,” “second,” “third,” etc., to describe a common object, merely indicates that different instances of like objects are being referred to and are not intended to imply that the objects so described must be in a given sequence, either temporally, spatially, in ranking, or in any other manner. Some embodiments may be used in conjunction with various devices and systems, for example, a personal computer (PC), a desktop computer, a mobile computer, a laptop computer, a notebook computer, a tablet computer, a server computer, a handheld computer, a handheld device, a personal digital assistant (PDA) device, a handheld PDA device, an on-board device, an off-board device, a hybrid device, a vehicular device, a non-vehicular device, a mobile or portable device, a consumer device, a non-mobile or non-portable device, a wireless communication station, a wireless communication device, a wireless access point (AP), a wired or wireless router, a wired or wireless modem, a video device, an audio device, an audio-video (A/V) device, a wired or wireless network, a wireless area network, a wireless video area network (WVAN), a local area network (LAN), a wireless LAN (WLAN), a personal area network (PAN), a wireless PAN (WPAN), and the like.
Some embodiments may be used in conjunction with one-way and/or two-way radio communication systems, biomedical sensors, wearable devices or sensors, cellular radio-telephone communication systems, a mobile phone, a cellular telephone, a wireless telephone, a personal communication system (PCS) device, a PDA device which incorporates a wireless communication device, a mobile or portable global positioning system (GPS) device, a device which incorporates a GPS receiver or transceiver or chip, a device which incorporates an RFID element or chip, a multiple input multiple output (MIMO) transceiver or device, a single input multiple output (SIMO) transceiver or device, a multiple input single output (MISO) transceiver or device, a device having one or more internal antennas and/or external antennas, digital video broadcast (DVB) devices or systems, multi-standard radio devices or systems, a wired or wireless handheld device, e.g., a smartphone, a wireless application protocol (WAP) device, or the like. Some embodiments may be used in conjunction with one or more types of wireless communication signals and/or systems following one or more wireless communication protocols, for example, radio frequency (RF), infrared (IR), frequency-division multiplexing (FDM), orthogonal FDM (OFDM), time-division multiplexing (TDM), time-division multiple access (TDMA), extended TDMA (E-TDMA), general packet radio service (GPRS), extended GPRS, code-division multiple access (CDMA), wideband CDMA (WCDMA), CDMA 2000, single-carrier CDMA, multi-carrier CDMA, multi-carrier modulation (MDM), discrete multi-tone (DMT), Bluetooth®, global positioning system (GPS), Wi-Fi, Wi-Max, ZigBee, ultra-wideband (UWB), global system for mobile communications (GSM), 2G, 2.5G, 3G, 3.5G, 4G, fifth generation (5G) mobile networks, 3GPP, long term evolution (LTE), LTE advanced, enhanced data rates for GSM Evolution (EDGE), or the like. Other embodiments may be used in various other devices, systems, and/or networks. It is understood that the above descriptions are for purposes of illustration and are not meant to be limiting. Although specific embodiments of the disclosure have been described, one of ordinary skill in the art will recognize that numerous other modifications and alternative embodiments are within the scope of the disclosure. For example, any of the functionality and/or processing capabilities described with respect to a particular device or component may be performed by any other device or component. Further, while various illustrative implementations and architectures have been described in accordance with embodiments of the disclosure, one of ordinary skill in the art will appreciate that numerous other modifications to the illustrative implementations and architectures described herein are also within the scope of this disclosure. Program module(s), applications, or the like disclosed herein may include one or more software components including, for example, software objects, methods, data structures, or the like. Each such software component may include computer-executable instructions that, responsive to execution, cause at least a portion of the functionality described herein (e.g., one or more operations of the illustrative methods described herein) to be performed. A software component may be coded in any of a variety of programming languages.
An illustrative programming language may be a lower-level programming language such as an assembly language associated with a particular hardware architecture and/or operating system platform. A software component comprising assembly language instructions may require conversion into executable machine code by an assembler prior to execution by the hardware architecture and/or platform. Another example programming language may be a higher-level programming language that may be portable across multiple architectures. A software component comprising higher-level programming language instructions may require conversion to an intermediate representation by an interpreter or a compiler prior to execution. Other examples of programming languages include, but are not limited to, a macro language, a shell or command language, a job control language, a script language, a database query or search language, or a report writing language. In one or more example embodiments, a software component comprising instructions in one of the foregoing examples of programming languages may be executed directly by an operating system or other software component without having to be first transformed into another form. A software component may be stored as a file or other data storage construct. Software components of a similar type or functionally related may be stored together such as, for example, in a particular directory, folder, or library. Software components may be static (e.g., pre-established or fixed) or dynamic (e.g., created or modified at the time of execution). Software components may invoke or be invoked by other software components through any of a wide variety of mechanisms. Invoked or invoking software components may comprise other custom-developed application software, operating system functionality (e.g., device drivers, data storage (e.g., file management) routines, other common routines and services, etc.), or third-party software components (e.g., middleware, encryption, or other security software, database management software, file transfer or other network communication software, mathematical or statistical software, image processing software, and format translation software). Software components associated with a particular solution or system may reside and be executed on a single platform or may be distributed across multiple platforms. The multiple platforms may be associated with more than one hardware vendor, underlying chip technology, or operating system. Furthermore, software components associated with a particular solution or system may be initially written in one or more programming languages, but may invoke software components written in another programming language. Computer-executable program instructions may be loaded onto a special-purpose computer or other particular machine, a processor, or other programmable data processing apparatus to produce a particular machine, such that execution of the instructions on the computer, processor, or other programmable data processing apparatus causes one or more functions or operations specified in any applicable flow diagrams to be performed. 
These computer program instructions may also be stored in a computer-readable storage medium (CRSM) that upon execution may direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable storage medium produce an article of manufacture including instruction means that implement one or more functions or operations specified in any flow diagrams. The computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational elements or steps to be performed on the computer or other programmable apparatus to produce a computer-implemented process. Additional types of CRSM that may be present in any of the devices described herein may include, but are not limited to, programmable random access memory (PRAM), SRAM, DRAM, RAM, ROM, electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile disc (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the information and which can be accessed. Combinations of any of the above are also included within the scope of CRSM. Alternatively, computer-readable communication media (CRCM) may include computer-readable instructions, program module(s), or other data transmitted within a data signal, such as a carrier wave, or other transmission. However, as used herein, CRSM does not include CRCM. Although embodiments have been described in language specific to structural features and/or methodological acts, it is to be understood that the disclosure is not necessarily limited to the specific features or acts described. Rather, the specific features and acts are disclosed as illustrative forms of implementing the embodiments. Conditional language, such as, among others, “can,” “could,” “might,” or “may,” unless specifically stated otherwise, or otherwise understood within the context as used, is generally intended to convey that certain embodiments could include, while other embodiments do not include, certain features, elements, and/or steps. Thus, such conditional language is not generally intended to imply that features, elements, and/or steps are in any way required for one or more embodiments or that one or more embodiments necessarily include logic for deciding, with or without user input or prompting, whether these features, elements, and/or steps are included or are to be performed in any particular embodiment. | 105,938 |
11862038 | DETAILED DESCRIPTION In the following description, for the purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the method described herein. It will be apparent, however, that the present approach may be practiced without these specific details. In some instances, well-known structures and devices are shown in block diagram form to avoid unnecessarily obscuring the present approach. Embodiments are disclosed in sections according to the following outline:
1. GENERAL OVERVIEW
2. EXAMPLE COMPUTER ENVIRONMENTS
2.1. EXAMPLE SENSOR DEVICE INTEGRATED IN A MOVEABLE OBJECT
2.2. EXAMPLE HARDWARE CONFIGURATION OF A SENSOR DEVICE
2.3. EXAMPLE OBJECT IMPLEMENTED AS A TOY-CAR
2.4. EXAMPLE TRACKS
2.5. STUDENT DEVICES
2.6. TEACHER DEVICES
2.7. MANAGER SERVER
2.8. STORAGE
3. EXAMPLE EXPERIMENTS
3.1. EXAMPLE EXPERIMENT FOR DETERMINING MOTION OF AN OBJECT
3.2. EXAMPLE EXPERIMENT FOR INTERPRETING DIGITAL DATA
3.3. EXAMPLE EXPERIMENT FOR DEVELOPING A MODEL
3.4. EXAMPLE EXPERIMENT FOR RESEARCHING KINETIC ENERGY
4. INTEGRATION AND IMPLEMENTATION OPTIONS
4.1. OPERATING PLATFORMS
4.2. AUXILIARY DEVICES
4.3. VISUALIZATION
4.4. INTENDED USER
5. EXAMPLE PROCESS FOR CONDUCTING A COMPUTER-AND-SENSOR BASED PHYSICS EXPERIMENT
6. EXAMPLE PROCESS FOR CONDUCTING A MULTIPLE-OBJECTS EXPERIMENT
7. EXAMPLE PROCESS FOR CONDUCTING A PARAMETRIC EXPERIMENT
8. IMPROVEMENTS PROVIDED BY CERTAIN EMBODIMENTS
9. IMPLEMENTATION MECHANISMS
1. General Overview In some embodiments, a system, a sensor device integrated in a moveable, physical object, and a method for capturing movement values as the object drives along a physical track are disclosed. One of the benefits of measuring the movement values using the sensor device integrated into the moveable object as the object is driven along the track is the ability to collect information pertaining to performing scientific experiments related to physics, kinematics, and mechanics. Another benefit is the ability to compare the collected information with predicted or anticipated results provided by, for example, students and faculty. A moveable object may be, for example, a toy such as a toy-car or a toy-truck. The object is typically configured to receive wireless instructions for driving the toy along a track. Implementation examples may include driving the toy along a flat track, driving the toy along an up-hill track, driving the toy along a down-hill track, driving the toy along a curved track, and the like. A moveable object may provide housing to computer processors, wireless transceivers, various sensors, and other components that are described later. The computer processors may be configured to execute program instructions for controlling movements of the object. The wireless transceivers may be configured to wirelessly receive the program instructions, and to transmit the experiment results to user devices. Some moveable objects may be equipped with, for example, a battery-operated motor that allows turning the wheels and thus driving the object along the track. The objects may also be equipped with a simple brake system that allows applying brakes to the wheels of the object to cause the object to slow down and/or to stop. A processor of the moveable toy may receive instructions from a user device, such as a laptop, a PC, and the like. The instructions may be wirelessly communicated from the user device to one or more wireless transceivers implemented in the toy-object.
The transceivers may communicate the instructions to the processors implemented in the toy, and the processors may use the instructions to determine the manner in which the toy is to be driven along the track. The transceivers may also be used to transmit, to user devices, information about the physical, kinematic, and mechanical characteristics of the toy, including the speed, velocity, acceleration, and the like, of the toy as the toy is driven along the track. A processor implemented in a toy-object may be configured to execute the instructions to drive the toy along the track, and as the toy is driving along the track, use the sensors to collect movement information and use the transceivers to transmit the information to user devices to cause the devices to display the movement information on display devices of the user devices. In some embodiments, a system comprises a plurality of toy-objects and a plurality of tracks. In those embodiments, the system allows a plurality of experiments to be conducted simultaneously, in which each of the toy-objects is driven along its own track, and information about the speed, velocity, acceleration, and other movement characteristics of the toy-object is captured and transmitted to user computers. Implementation examples may include performing an experiment in which, for example, a few toy-objects race against each other, an experiment in which a few toy-objects drive on separate tracks that intersect with each other, and the like. Information collected as a toy-object is driven along a track may be compared with prediction information provided by, for example, students. For instance, prior to the ride, the students may compute, based on the characteristics of the toy-object and the characteristics of the track, an arrival time at which the toy reaches, for example, the end of the track. The computed arrival time may be used as the prediction and may be compared with the actual arrival time determined based on the readings provided by the sensors integrated in the toy. A toy-object may include a compartment for storing one or more computer processors, one or more wireless transceivers, one or more circuit boards with one or more processors and one or more computer-based storage units. The toy may also be equipped with wheels that allow moving the toy along a track according to driving instructions received by the wireless transceivers of the toy from a user computer. A track along which a toy may be driven may be constructed using any type of material and may have any type of size or shape. For example, the track may be a competition track having a racing design, a roller-coaster design, and the like. The tracks may be used by groups of students to race their toy-cars, participate in scientific experiments related to physics, kinematics, and mechanics, and learn the movement characteristics of the physical objects. Tracks may also have different shapes. For example, some tracks may be shaped as a loop that may be either flat or non-flat. Other tracks may be shaped as slopes with different degrees of slope and curvature. Yet other tracks may have an obstacle course and/or one or more crash barriers. In some embodiments, a sensor device is integrated in a moveable, physical object and is configured to measure movement values as the object drives along a physical track.
The sensor device may include one or more processors, and one or more sensors coupled to the one or more processors and configured to measure motion values associated with movements of the object as the object drives along a track. The sensor device may also include a wireless network transceiver coupled to the one or more processors, and a non-transitory computer-readable storage medium coupled to the one or more processors and storing one or more sequences of instructions which, when executed by the one or more processors, cause the one or more processors to receive, from the wireless network transceiver, one or more experiment instructions for performing an experiment. Based on the experiment instructions, the processors may generate one or more driving instructions and may execute the one or more driving instructions to cause the object to drive along the track. As the object is driving along the track, the motion values associated with the movements of the object along the track may be received from the one or more sensors installed in the object. The wireless network transceiver may generate motion signals that represent the motion values associated with the movements of the object and transmit the motion values to one or more user devices. In some embodiments, transmitting the motion values to the one or more user devices causes a user device, from the one or more user devices, to generate a graphical representation of the motion values and display the graphical representation on a display device of the user device. In some embodiments, a method for measuring movement values of a moveable, physical object as the object drives along a physical track is presented. The method may be implemented in any type of computer device, including, for example, a sensor device that can be integrated in the object that is moveable and configured to drive along the track. For example, the method may be implemented in a toy, such as a toy-car, a toy-truck, and the like. The method comprises receiving, from a wireless network transceiver, one or more experiment instructions for performing an experiment. Based on the experiment instructions, one or more driving instructions for causing the object to drive along the track are generated and executed to cause the object to drive along the track. As the object is driving along the track, motion values associated with the movements of the object along the track are received from one or more sensors. The motion values may be transmitted, using the wireless network transceiver, to one or more user devices. Transmitting the motion values to the one or more user devices may cause a user device, from the one or more user devices, to generate a graphical representation of the motion values and display the graphical representation on a display device of the user device. 2. Example Computer Environments FIG.1is a block diagram showing an example computer environment10.FIG.1, the other drawing figures, and all of the description and claims in this disclosure are intended to present, disclose, and claim technical systems and technical methods in which specially programmed computers, using a special-purpose distributed computer system design, execute functions that have not been available before to provide a practical application of computing technology to the problem of machine learning model development, validation, and deployment.
In this manner, the disclosure presents a technical solution to a technical problem, and any interpretation of the disclosure or claims to cover any judicial exception to patent eligibility, such as an abstract idea, mental process, method of organizing human activity or mathematical algorithm, has no support in this disclosure and is erroneous. Configurations of computer environment10may vary and may depend on the implementations. In some embodiments, environment10includes a sensor device100integrated in a moveable object102, one or more student devices110, one or more teacher devices120, a cloud storage system130, a manager server140hosting one or more software applications and implemented either in stationary servers or in cloud storage system130, and one or more storage devices150implemented either in stationary servers or in cloud storage system130. In other configurations, environment10may include some, but not all, components depicted inFIG.1. In yet other configurations, environment10may include additional components that are not shown inFIG.1. The example components depicted inFIG.1communicate with each other. Some, or all components, depicted inFIG.1may be equipped with wireless transceivers or with devices allowing the components to access the Internet or other communications network. For example, wireless transceiver100E of sensor device100may enable wireless communications between sensor device100and student devices110, teacher devices120, cloud storage system130, and the like. In some embodiments, sensor device100is integrated in moveable toy-object102, which may be controlled and driven along a track103. Track103may be any type of physical track that is configured to provide a continuous surface on which object102may be placed and driven. Object102and track103are described in detail later. 2.1. Example Sensor Device Integrated in a Moveable Object In some embodiments, a sensor device100is integrated in a physical, moveable object102. Object102may be a toy, a miniature vehicle, a miniature car, and the like. Sensor device100usually comprises a plurality of components, non-limiting examples of which include: one or more processors100A, one or more sensors100B, one or more interfaces100C, one or more driving processors100D, one or more wireless transceivers100E, and storage100F for storing computer instructions that may be executed by processors100A and100D. Depending on the implementation, sensor device100may include all components100A-100F, some of components100A-100F, and/or some additional components not shown inFIG.1. Storage100F may be configured to receive a program code from, for example, manager server140. The program code may be downloaded onto storage100F via, for example, interface100C. Interface100C may be also configured to modify the program code, initiate execution of the program code, access movement value data (collected as object102is driven along track103) stored in storage100F, initiate a transfer of the movement value data from storage100F to any of devices110-120, storage150, and/or manager server140. The program code may be also provided by any of teacher devices120, and/or any of student devices110. 
For example, the program code may be downloaded from a teacher device120, and a student, from his student device110, may wirelessly send a signal to sensor device100to start execution of the program code, and thus to cause object102to start driving along track103and cause sensor device100to collect movement information as object102is driving along track103. According to another example, the program code may be downloaded to storage100F, via interface100C, before sensor device100is deployed in environment10, and initiated and started upon receiving a wireless signal from either a teacher device120, or a student device110. The program code may include one or more sequences of instructions which, when executed by processors100A and/or driving processors100D, cause the processors to receive, from wireless network transceiver100E, one or more experiment instructions for performing an experiment. As described above, execution of the program code may be initiated upon receiving an initiation signal from a teacher device and/or a student device. The experiment may be described by the experiment instructions and designed to cause toy-object102to drive along track103. Examples of experiments are described later. Once the experiment instructions are received by sensor device100, processors100A and/or100D may generate, based on the one or more experiment instructions, one or more driving instructions for causing object102to drive along track103. This may include generating, based on the driving instructions, one or more driving signals and sending the signals to, for example, wheels of object102, which, in turn, may cause object102to move along track103. The driving signals may be generated in such a way that the speed with which object102is driven along track103corresponds to the speed set up in the experiment instructions. The speed may vary and may depend on the many parameters set forth in the experiment. The example parameters are described later. As the driving instructions are executed and object102is driving along track103, sensors100B installed in sensor device100collect motion/movement value data and movement characteristics of moving object102. Furthermore, as object102is driving along track103, sensors100B transmit the collected movement value data to one or more devices, such as any student device110, teacher device120, manager server140and/or storage device150. Transmitting the value data may include, for example, generating, based on the movement data, motion signals that represent the motion values associated with the movements of object102, providing the motion signals to wireless transceiver100E, and causing the transceiver to wirelessly transmit the motion signals to the user devices. Upon receiving the motion signals, the user device may generate a graphical representation of the motion signals and display the graphical representation of the motion signals on a display device of the user device. 2.2. Example Hardware Configuration of a Sensor Device FIG.2Ais a block diagram showing an example hardware configuration of a sensor device. In some embodiments, sensor device100(also depicted inFIG.1) comprises a rechargeable battery206, a power supply208, a microcontroller210, a Bluetooth communications device212, a wheel encoder214, and an inertial measurement unit (IMU) encoder216. Rechargeable battery206may be configured to provide power to power supply unit208, which may also be chargeable via a micro USB charger202.
Power supply208may be configured to provide power to microcontroller210, which may be configured to execute program instructions to perform the tasks of sensor device100, as described above. The program instructions may be provided to microcontroller210, via Bluetooth communications device212, from any type of user device204, such as a computer, tablet, Chromebook, or a smartphone. Bluetooth communications device212and user device204may communicate with each other wirelessly, as shown inFIG.2A, or using any type of cable-based communications connection (not shown). In some embodiments, microcontroller210is configured to generate, based on the program instructions, signals for controlling wheels of object102in which sensor device100is implemented. This may include generating control signals for controlling a position, speed, acceleration, and the like, of object102, and therefore moving object102along track103according to the specified speed, velocity, acceleration, and the like. IMU sensor216is an electronic device that measures and reports a g-force, acceleration, angular rate, and optionally the orientation of object102in which sensor device100is integrated. IMU sensor216measures the force, etc., using a combination of accelerometers, gyroscopes, and magnetometers. IMU216may be configured to determine the linear acceleration using one or more accelerometers and to determine the rotational rate using one or more gyroscopes. In some embodiments, IMU sensor216includes a magnetometer which may be used to determine a heading reference. A typical configuration of IMU216contains one accelerometer, gyro, and magnetometer per axis for each of the three principal axes: pitch, roll, and yaw. 2.3. Example Object Implemented as a Toy-Car Object102that provides housing to sensor device100and that is configured to drive along track103may be implemented as any type of physical object that can be moved along track103. Object102is usually a relatively small object that can be placed on a laboratory table and on track103, and, at the same time, is large enough to house sensor device100along with its components, such as processors, power supply, IMU unit, and the like. An example implementation of object102as a toy-car is depicted inFIG.2B. FIG.2Bis a block diagram showing an example object102implemented as a toy-car. The toy-car is usually designed to have a compartment222that is large enough to house sensor device100, described inFIGS.1and2A. At the same time, the toy-car needs to be small enough so that it is easy to carry it, place it on a laboratory table, and drive it along track103. In some embodiments, object102implemented as a toy-car comprises four wheels220A-220D, as shown inFIG.2B. The wheels are usually pivotally mounted on two separate axles, and each axle connects two wheels of the four wheels and is communicatively coupled with a servo-mechanical mechanism that provides torque to the wheels. Wheels are simple machines configured to reduce the force of friction, and thus to allow moving object102along track103. The wheels also help to turn object102with more force or help object102to turn faster than if movements of object102were dependent only on gravity, pushing by hand, and the like. When a wheel turns, its edge goes around faster than the middle of the wheel.
Furthermore, by applying a certain amount of torque to an axle, the wheels attached to the axle can rotate at a speed that is a function of the torque, the size of the wheel, characteristics of an internal lining of track103, and the like. In some embodiments, two wheels attached to an axle rotate at different speeds. The wheels usually spin at different speeds when object102is turning. During a turn, each wheel travels a different distance through the turn, and the inside parts of the wheels travel a shorter distance than the outside parts of the wheels. Since speed is equal to the distance traveled by the wheel divided by the time it takes for the wheel to cover that distance, the wheels that travel a shorter distance travel at a lower speed. In some embodiments, object102implements a differential that allows transmitting different amounts of torque to the wheels to allow them to rotate at different speeds as object102turns. 2.4. Example Tracks A track is a physical path along which toy-object102may be driven. The track may be constructed using any type of material, such as wood, plastic, cardboard, metal, and the like, and may have any size or shape. The tracks may be used by groups of students to race their toy-cars, participate in scientific experiments related to physics, kinematics, and mechanics, and learn the movement characteristics of the physical objects driven along the tracks. Tracks may have different shapes. For example, the tracks may be straight, circular, oval, and the like. Some tracks may be shaped as a loop that may be either flat or non-flat. Other tracks may be shaped as slopes with different degrees of slope and curvature. Yet other tracks may have an obstacle course and/or one or more crash barriers. FIG.2Cis a block diagram showing examples of tracks. The examples depicted inFIG.2Care provided to merely illustrate a subset of all possible shapes of tracks and should not be considered limiting in any way. The examples depicted inFIG.2Cinclude a flat track232with obstacles, an uphill track234, a flat track235with a barrier, a plurality of racing tracks236-237, and a roller-coaster-type track238. Flat track232with obstacles may include one or more obstacles231and one or more buffers233. This track may be used to test and experiment with the impact that the obstacles231and buffers233may have on toy-object102as object102is driven along track232at different speeds, and the like. Variations of track232may include a plurality of flat tracks arranged in parallel and allowing a plurality of toy-objects to race along the tracks. Uphill track234may include a track that has an increasing height from a start point toward an end point. This track may be used to test and experiment with the changes in speed of toy-object102as the object is driven up the hill. Variations of track234may include a plurality of uphill tracks arranged in parallel and allowing a plurality of toy-objects to race along the tracks. In some embodiments, uphill track234may be used in the opposite direction, i.e., to test and experiment with the changes in speed of toy-object102as the object is driven down the hill. Variations of track234may include a plurality of downhill tracks arranged in parallel and allowing a plurality of toy-objects to race along the tracks. Flat track235with a barrier may include one or more barriers239.
This track may be used to test and experiment with the impact the barrier may have on toy-object102as the object hits barrier239at different speeds, with different force, and the like. Variations of track235may include a plurality of flat tracks with barriers arranged in parallel and allowing a plurality of toy-objects to race along the tracks. Plurality of racing tracks236-237may include a plurality of flat concentric tracks. Each of toy-objects102A-102B may be driven on its own track. That type of track may be used by groups of students to race their toy-cars along the tracks, test how the weight of the toy impacts the speed of the toy, and the like. Variations of tracks236-237may include a plurality of concentric tracks having different elevations and/or multiple curves, to allow a plurality of toy-objects to race along the tracks. Roller-coaster-type track238may include a track that has different elevations and a plurality of curves and turns. That type of track may be used by groups of students to test how the elevations and curves impact the speeds of toys102C,102D, and102E. One can envision that in addition to tracks232,234-238, other tracks may also be designed and used for the purpose of conducting and monitoring computer-and-sensor based physics experiments. 2.5. Student Devices Referring again toFIG.1, in some embodiments, computer environment10comprises one or more student devices110. Student devices110may include various user devices, such as laptops, smartphones, PDAs, tablets, PCs, workstations, and the like. Student devices110may be configured to execute software applications that allow downloading applications and data from storage150. For example, student devices110may be configured to download experiment applications, experiment data, experiment results, statistical information about the experiments, experiment parameters, and the like, from storage150. Upon downloading one or more experiment applications and data for an experiment, from storage150, student devices110may execute the experiment applications, and send instructions either directly to object102or to manager server140to initiate the experiment. Upon sending the instructions to initiate an experiment, student devices110may either directly control parameters of object102and send control parameters to manager server140or request that manager server140control object102during the experiment. Once an experiment is completed (or is in progress), the results of the experiment may be transmitted, by wireless transceiver100E, from object102either directly to student devices110or to manager server140. Furthermore, or alternatively, the results of the experiment may be communicated to one or more teacher devices120. Moreover, the results of the experiment may be transmitted to storage150for storing. 2.6. Teacher Devices Referring again toFIG.1, in some embodiments, computer environment10comprises one or more teacher devices120. Teacher devices120may include various user devices such as laptops, smartphones, PDAs, tablets, PCs, workstations, and the like. Teacher devices120may be configured to execute software applications that allow downloading applications and data from storage150. For example, teacher devices120may be configured to download experiment applications, experiment data, experiment results, statistical information about the experiments, experiment parameters, and the like, from storage150.
Upon downloading one or more experiment applications and data for an experiment, from storage150, teacher devices120may execute the experiment applications, and send instructions either directly to object102or to manager server140to initiate the experiment. Furthermore, teacher devices120may forward the experiment applications to student devices110and cause student devices110to execute the experiment applications to conduct the corresponding experiment. In some embodiments, teacher devices120may initiate an experiment on their own. Alternatively, teacher devices120may send instructions to initiate an experiment to one or more student devices110, which, in turn, may either directly control parameters of object102and send control parameters to manager server140or request that manager server140control object102during the experiment. Once an experiment is completed (or is in progress), the results of the experiment may be transmitted, by wireless transceiver100E, from object102either directly to teacher devices120or to manager server140. Furthermore, or alternatively, the results of the experiment may be communicated to one or more student devices110. Moreover, the results of the experiment may be transmitted to storage150for storing. 2.7. Manager Server Referring again toFIG.1, manager server140may be configured to manage conducting and monitoring computer-and-sensor based physics experiments conducted in computer environment10. Manager server140may be implemented in a standalone server, a distributed server system, a cloud system, and the like. Manager server140may be configured to host a variety of applications, including software applications configured to define scientific experiments, software applications for conducting scientific experiments, software applications for collecting data as scientific experiments are conducted, and the like. Manager server140may be configured to receive instructions to start, end, and/or resume scientific experiments. For example, manager server140may be configured to receive instructions from student devices110and/or teacher devices120to initiate an experiment, to stop the experiment, and/or to resume the experiment. Upon receiving such instructions, manager server140may download a corresponding experiment application onto sensor device100, integrated in object102. Once transceiver100E (shown inFIG.1) receives the experiment application, the transceiver may communicate to manager server140that the experiment is ready to be initialized. In response thereto, manager server140may either initiate and start the experiment itself or cause student devices110and/or teacher devices120to initiate the experiment. Once the experiment is finished, stopped, or otherwise terminated, manager server140may request the results of the experiment and, upon receiving the results, store them in, for example, storage150. Manager server140may further be configured to receive updates for experiment applications, provide a GUI for updating and modifying the experiment applications, provide a GUI and tools for analyzing results of the experiments, and the like. 2.8. Storage Referring toFIG.1, storage150may be configured to store and serve experiment applications and experiment data for conducting and monitoring computer-and-sensor based physics experiments. Storage150may be implemented in a standalone server, a distributed server system, a cloud system, and the like.
Storage150may be configured to store results, statistical data, and parameters for a variety of applications, including software applications configured to define scientific experiments, software applications for conducting scientific experiments, software applications for collecting data as scientific experiments are conducted, and the like. For example, storage150may be configured to store experiment initialization parameters for an experiment, experiment results provided once the experiments were completed and stopped, statistical information about the experiments, statistical information about the users who participated in the experiments, ratings of the experiments, grades given to the users who participated in the experiments, and the like. Information for storing in storage150may be communicated from manager server140, object102, student devices110, and/or teacher devices120to storage150wirelessly or using any type of communications connection. Similarly, information already stored on storage150may be wirelessly communicated from storage150to manager server140, object102, student devices110, and/or teacher devices120using any type of communications connection. 3. Example Experiments In some embodiments, a system, an apparatus, and a method are configured for conducting and monitoring various computer-and-sensor based physics experiments. The experiments may be distributed from a central distribution server (such as manager server140) or may be distributed by educators and students interacting with manager server140, shown inFIG.1. Generally, the experiments that may be conducted and monitored using the presented system/approach/method allow the qualitative and quantitative analysis of various physics principles and scientific laws. Examples of experiments include experiments designed to measure kinetic energy in a roller coaster, experiments implementing bumper and crash barrier safety designs, experiments implementing energy dissipation due to friction, experiments implementing races, experiments for pinewood derby-style competitions, experiments implementing elastic and inelastic collisions, experiments for modeling aerodynamics of a car, and the like. According to one example, an experiment includes testing the speed, acceleration, force, and the like, of object102as object102is driving along track103, which may have any shape and size, as shown inFIG.2C. According to another example, an experiment includes testing relative speeds, accelerations and forces, and the like, of a plurality of objects102as the objects are driven along their corresponding tracks, each of which may be any of the tracks shown inFIG.2C. According to yet another example, an experiment includes testing the relationship between a weight of object102(or a plurality of objects102) and the corresponding speed, acceleration, and the like, as object102(or the plurality of objects102) is driven along its track103(or their corresponding tracks103). Typical experiment setups involve dividing a group of students into a few teams, each team having 3-4 students. Every team has access to a computer-and-sensor based setup for performing physics experiments, a computer, kit materials including a track, weights, a tape, and the like. The time allocated to performing an experiment is usually 3-4 hours over multiple class periods. To perform the experiment, students are usually present in large classrooms to allow setting up the tracks and conducting the experiment. 3.1.
3.1. Example Experiment for Determining Motion of an Object

An example experiment that may be conducted using a computer-and-sensor system described herein allows determining the motion of an object. The experiment may include planning an investigation to provide evidence that the change in an object's motion depends on the sum of the forces on the object and the mass of the object. Using the platform described herein, students can collect data required for a particular experiment without receiving detailed instructions from a teacher. The students may use modern equipment such as ultrasonic rangefinders, which require a direct line-of-sight between the sensor and the object, and may use large objects off which the sonar signal can bounce. The students may use carts such as carts having roller-coaster designs and the like.

3.2. Example Experiment for Interpreting Digital Data

An example experiment may include constructing and interpreting graphical displays of data to describe the relationships between the kinetic energy of an object and the mass of the object and the speed of the object. Using the platform described herein, a student may generate, display, and analyze scaffolded data representations of motion. While speed is an intuitive quantity that everyone is familiar with, the data visualizations can help the students to identify the relevant data separated from, for example, errors and noise. The data may be visualized in a computer-graphics application that can be provided by the platform.

3.3. Example Experiment for Developing a Model

An example experiment may include developing a model to describe that, when the arrangement of objects interacting at a distance changes, different amounts of potential energy are stored in the system. Development of accurate models of physical phenomena is a fundamental practice that many students and teachers struggle with. For example, the mathematical model of friction indicates that the force of friction corresponds to a coefficient of friction multiplied by the normal force exerted on the object, Ff=μN. This is a very simple mathematical model, but it is difficult to experience or quantify in the real world. With the tools provided herein and using the data visualization applications, a student may build a model of the friction phenomena. The model building tool helps the students to develop valuable skills and increase their understanding of physics concepts.

3.4. Example Experiment for Researching Kinetic Energy

An example experiment may include constructing, using, and presenting arguments to support the claim that, when the kinetic energy of an object changes, energy is transferred to or from the object. The platform presented herein allows performing research related to kinetic energy conversion and generating and expanding the energy model to include the impact of various forces acting on object 102 driving along track 103.
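As a worked illustration of the kind of quantitative claim such an experiment can support (the mass and speeds below are illustrative values, not taken from the source), the kinetic-energy change of a cart follows directly from the standard relation KE = ½mv²:

```latex
% Illustrative example: a 0.2 kg cart speeds up from 1 m/s to 2 m/s.
\[
\Delta KE = \tfrac{1}{2} m v_2^2 - \tfrac{1}{2} m v_1^2
          = \tfrac{1}{2}(0.2)\left(2^2 - 1^2\right) = 0.3\ \text{J},
\]
% so 0.3 J of energy must have been transferred to the cart,
% for example by gravity on a downhill section of the track.
```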
4. Integration and Implementation Options

4.1. Operating Platform

A system, an apparatus, and a method for conducting and monitoring computer-and-sensor based physics experiments may be implemented in a standalone product or as part of an existing application platform such as, for example, the PocketLab Notebook. The product is primarily intended for middle-school students. However, the product is also suitable for elementary-school students and college-level physics students.

In some embodiments, an operating platform supporting an approach for conducting and monitoring computer-and-sensor based physics experiments integrates various components, including hardware and software elements.

FIG. 3A is a block diagram showing an example operating platform 302. In some embodiments, platform 302 houses hands-on experiment data 304, a plurality of sensors 306, student portal data 308, teacher portal data 310, and optionally other components, such as transceivers, communications protocol data, operating system data, and the like. Data 304, 308, and 310 may be stored in standalone servers and/or cloud-based systems. The applications may be served from application servers and/or virtual machines supported by application servers.

Hands-on experiment data 304 may include the experiment applications described before, experiment result data, experiment rating data, and the like.

Plurality of sensors 306 may include sensors 306A that are configured to capture movements and movement characteristics of moving objects including, for example, object 102 depicted in FIG. 1. Sensors 306A may be electronic devices that are configured to measure and report values of g-force, acceleration, angular rate, and optionally the orientation of object 102 in which sensors 306A are integrated. Sensors 306A may measure the values of g-force, and the like, using a combination of accelerometers, gyroscopes, and magnetometers. They may be configured to determine, for example, the linear acceleration using one or more accelerometers and to determine the rotational rate using one or more gyroscopes. Typical sensors 306A include one accelerometer, gyroscope, and magnetometer per axis for each of the three principal axes: pitch, roll, and yaw.

Plurality of sensors 306 may include sensors 306B that are configured to capture, for example, weather data, air quality data, GPS data, and the like. That data may be used to, for example, design and conduct other science experiments for students and teachers.

In some embodiments, sensor device 100 (shown in FIG. 1) uses wireless electronics and miniaturized sensors mounted inside a durable plastic car body of object 102 and is small enough to enable driving object 102 along a miniature track, as shown in FIG. 2C. Object 102 may be configured to transmit data to any computer, tablet, smartphone, or the like, using Bluetooth-based communications connections. One of the objectives of implementing object 102 is to provide a vehicle for housing a sensor device configured to measure the position, speed, velocity, acceleration, and force of object 102 as object 102 drives along a track.

In some embodiments, a system/apparatus/method is implemented in a computer-based platform that supports any of the following operating systems: Chrome, iOS, Windows, Android, or Mac. The implementations may provide assistance, utilities, and detailed explanations of physics experiments, and therefore, no-prep lesson materials for teachers may be required.
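To make the role of sensors 306A concrete, the following minimal sketch shows how speed and position along a track could be estimated by integrating accelerometer samples over time. This is an illustrative reconstruction only; the function and variable names are hypothetical, and the source does not specify how the platform performs this computation.

```python
# Hypothetical sketch: dead-reckoning speed/position from accelerometer samples.
# Assumes samples are (timestamp_seconds, acceleration_m_per_s2) pairs taken
# along the direction of travel, with the cart initially at rest.

def integrate_motion(samples):
    speed = 0.0       # m/s
    position = 0.0    # m
    trajectory = []
    prev_t, _ = samples[0]
    for t, accel in samples[1:]:
        dt = t - prev_t
        speed += accel * dt      # v(t) = v(t-dt) + a*dt
        position += speed * dt   # x(t) = x(t-dt) + v*dt
        trajectory.append((t, speed, position))
        prev_t = t
    return trajectory

# Example: constant 0.5 m/s^2 acceleration sampled at 10 Hz for 1 second.
samples = [(i / 10.0, 0.5) for i in range(11)]
for t, v, x in integrate_motion(samples):
    print(f"t={t:.1f}s  v={v:.2f} m/s  x={x:.3f} m")
```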
4.2. Auxiliary Devices

In some implementations of the presented system/apparatus/method, various auxiliary devices are integrated and used in cooperation with a sensor device described herein. Examples of the auxiliary devices include speedometers, gauges, clocks, and the like.

FIG. 3B is a block diagram showing examples of auxiliary devices used in cooperation with a sensor device. In the depicted example, the auxiliary devices include photogates 322, motion detectors 324, mini dynamic carts 326, and tracks 328.

Photogates 322 are usually used to study free falls, rolling objects, collisions, and pendulums. More specifically, photogates 322 allow determining the accurate timing of events within physics experiments, and studying free falls, air track collisions, pendulum periods, and the speed of a rolling object. The photogate packages usually include an accessory rod for mounting to, for example, a ring stand. Photogates 322 usually have input ports so multiple gates can be connected in a daisy-chain configuration with up to four gates going to a single interface channel. Some of photogates 322 may be configured to operate in a laser gate mode, which requires the addition of a common pen laser. The laser may be mounted some distance from the gate to allow taking speed measurements of large objects such as model cars or model trucks. Photogates are essentially very accurate stopwatches. The determined time may be used with additional information, such as the distance traveled, to calculate the speed of moving object 102. Photogates only calculate speed data for a particular location or calculate average speed over a specific distance. They do not, however, provide acceleration data or force data.

Motion detectors 324 may include ultrasonic rangefinders that can measure position, velocity, and acceleration. They usually operate within a specific range (e.g., 1 to 10 feet from the sensor) and in one dimension.

Mini dynamic carts 326 and tracks 328 may be used to perform experiments when studying the laws of motion. Mini dynamic carts 326 are scaled-down systems configured to teach velocity, acceleration, Newton's first and second laws, friction, and the conservation of momentum and energy. The carts are usually made out of plastic and have attachable spring steel bumpers, deep wells for weights, and low-friction wheels that snap into place on the carts. Tracks 328 may be implemented as benches that come with compatible gear for studying the laws of motion. Tracks 328 may be coated with a clear anodized finish to lower friction between an object driven along the track and the bare metal finish of the track. The tracks may come with adjustable feet and ring-stand brackets to make it easy to level the track or turn it into a ramp.

4.3. Visualization

In some embodiments, a method presented herein is configured to generate visual representations of experiments, results of the experiments, scores and ratings of users who participated in the experiments, and the like. The visual representations may be transmitted to display devices to cause the display devices to launch browsers and generate GUIs depicting the visual representations in the form of charts, tables, graphs, and the like. Presenting and visualizing data may include generating a GUI that depicts information about, for example, top speed, distance traveled, maximum g-forces, gauge and speedometer readings, and the like, all collected during the duration of the experiment. The visualization of the data may include line graphs, scatter plots, bar charts, pie charts, and the like.
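As a minimal sketch of the kind of visualization described above (the data values are invented for illustration, and the source does not prescribe a particular plotting library), motion values collected during a run could be rendered as a line graph with matplotlib:

```python
# Hypothetical sketch: plotting collected motion values as a line graph.
import matplotlib.pyplot as plt

# Illustrative (invented) motion values: time in seconds, speed in m/s.
times = [0.0, 0.5, 1.0, 1.5, 2.0, 2.5]
speeds = [0.0, 0.8, 1.5, 2.1, 2.4, 2.5]

plt.plot(times, speeds, marker="o")
plt.xlabel("Time (s)")
plt.ylabel("Speed (m/s)")
plt.title("Speed of object along the track")
plt.grid(True)
plt.show()
```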
4.4. Intended User

A system, an apparatus, and a method for conducting and monitoring computer-and-sensor based physics experiments may be implemented as a tool to be used primarily by middle-school students. However, the tool may be adapted to the requirements of the elementary-school level so that it could be used by elementary-school students. Furthermore, the tool may be scaled up so that it could be used by college-level physics students. The tool may also be used by teachers, researchers, and the like.

Referring again to FIG. 3A, a student-user may access operating platform 302, and more specifically, download a client software application that includes student portal data 308 that, when executed, allows the student to draft laboratory reports and analyze results of scientific experiments. A teacher may access operating platform 302 and download a client software application that includes teacher portal data 310 that, when executed, allows the teacher to monitor the execution of experiments by students, prepare for the lessons, and evaluate the students' progress in studying the scientific concepts.

5. Example Process for Conducting a Computer-and-Sensor Based Physics Experiment

FIG. 4 is a flow diagram showing an example process of conducting and monitoring a computer-and-sensor based physics experiment executed by a sensor device. The example process depicted in FIG. 4 may be executed by any type of sensor device, including sensor device 100 depicted in FIGS. 1 and 2A. As described before, a typical sensor device includes one or more processors and one or more sensors coupled to the processors and configured to capture motion values associated with movements of an object housing the sensor device as the object drives along a track. The sensor device may also include a wireless network transceiver coupled to the processors and a non-transitory computer-readable storage medium coupled to the processors and storing one or more sequences of instructions which, when executed by the processors, cause the processors to perform the steps depicted in FIG. 4.

In step 402, a sensor device receives, using a wireless network transceiver, one or more experiment instructions for performing an experiment. Examples of experiments were described before.

In step 404, the sensor device generates, based on the one or more experiment instructions, one or more driving instructions for causing the object to drive along the track. The driving instructions may specify, for example, a torque (if the sensor device is equipped with a motor and the like) to be applied to wheels of the object to cause the object to drive along the track. The driving instructions may also specify acceleration amounts to be applied to the wheels of the object as the object drives along the track, and the like. The track may be straight, curved, closed, flat, uphill, downhill, or a combination thereof, and configured to provide a hard, continuous surface for driving the object.

In step 406, the sensor device executes the one or more driving instructions to cause the object to drive along the track. Execution of the driving instructions may be initiated upon receiving, from, for example, a user computer, initiation instructions for initiating the execution of the driving instructions. The initiation instructions may include an electric signal or a mechanical input that, once received, releases, for example, brakes installed on wheels of the toy-object, allowing the object to move along the track. According to another example, the execution of the driving instructions may be initiated upon turning on a switch implemented on the sensor device that, once turned on, releases, for example, the brakes installed on the wheels of the toy-object, allowing the object to move along the track. Driving the object along the track may include applying the torque to the wheels of the object to cause the object to drive along the track.
Alternatively, if the track is a downhill track, executing the driving instructions may include releasing brakes attached to the wheels of the object to cause the object to slide down the downhill track.

As the object is driving along the track, the sensor device receives, in step 408, from the one or more sensors, the motion values associated with the movements of the object along the track. The captured motion values may include position data, speed data, acceleration data, g-force data, and the like. All the captured motion values may be collected, for example, at discrete time points within a time period.

In step 410, the sensor device tests whether the experiment is to be stopped, finished, or otherwise terminated. For example, as the sensor device executes the driving instructions, the sensor device may determine that a time period for conducting the experiment has expired, and therefore, the experiment needs to be completed. According to another example, as the sensor device executes the driving instructions, the sensor device may receive instructions to stop the experiment, and therefore, the device will terminate the experiment. According to other examples, the sensor device may detect that the object has been removed from the track, and therefore, the experiment needs to be terminated.

If, in step 410, the sensor device determines that the experiment is to be stopped, finished, or otherwise terminated, then the sensor device proceeds to perform step 412. Otherwise, the sensor device proceeds to perform step 406 to continue executing the driving instructions to cause the object to drive along the track.

In step 412, the sensor device transmits, using the wireless network transceiver, the motion values associated with the movements of the object to one or more user devices. Transmitting the motion values to the one or more user devices causes a user device, from the one or more user devices, to generate a graphical representation of the motion values and display the graphical representation on a display device of the user device. The graphical representation may include a chart, a table, a graph, or any form used to arrange and display the motion values collected during the experiment. For example, the graphical representation may represent a graph showing the track on which the object was driven, and the corresponding motion values associated with the movements of the object along the track. The motion values may include, for example, top speed, distance traveled, maximum g-forces, gauge and speedometer readings, and the like, collected as the experiment was performed.
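The following minimal sketch restates the FIG. 4 control flow (steps 402-412) in code form. All names are hypothetical, and the transceiver, sensor, and motor calls stand in for device-specific interfaces that the source does not define:

```python
# Hypothetical sketch of the FIG. 4 process; device APIs are invented stand-ins.
import time

def build_driving_instructions(instructions):
    # Placeholder: translate experiment instructions into, e.g., torque values.
    return {"torque": instructions.get("torque", 0.0)}

def run_experiment(transceiver, sensors, motor, duration_s=10.0):
    instructions = transceiver.receive_instructions()        # step 402
    driving_plan = build_driving_instructions(instructions)  # step 404
    motion_values = []
    start = time.monotonic()
    while True:
        motor.apply(driving_plan)                            # step 406
        motion_values.append(sensors.read_motion_values())   # step 408
        # Step 410: stop when the experiment period expires or a stop
        # instruction arrives from a user device.
        if time.monotonic() - start > duration_s or transceiver.stop_requested():
            break
    transceiver.send(motion_values)                          # step 412
```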
6. Example Process for Conducting a Multiple-Objects Experiment

FIG. 5 is a flow diagram showing an example process of conducting a multiple-objects experiment. The example process depicted in FIG. 5 may be executed by any type of computing device, including the devices depicted in FIG. 1, such as manager server 140, any teacher device 120, any student device 110, and the like. Manager server 140 may be configured to manage the conducting and monitoring of computer-and-sensor based physics experiments performed in computer environment 10, shown in FIG. 1.

Manager server 140 may be implemented in a standalone server, a distributed server system, a cloud system, and the like, and may be configured to host a variety of applications, including software applications defining scientific experiments, software applications for conducting scientific experiments, software applications for collecting data as scientific experiments are conducted, and the like.

Student devices 110 may be configured to execute software applications that allow downloading applications, experiment data, experiment results, statistical information about the experiments, experiment parameters, and the like, from storage 150. Teacher devices 120 may be configured to execute software applications and to download the applications and data from storage 150. Teacher devices 120 may be configured to, for example, download experiment applications, experiment data, experiment results, statistical information about the experiments, experiment parameters, and the like, from storage 150. Upon downloading one or more experiment applications and data from storage 150, teacher devices 120 may execute the experiment applications and send instructions either directly to object 102 or to manager server 140 to initiate the experiment, or cause student devices 110 to execute the experiment applications to conduct the corresponding experiments. For simplicity of the description, it is assumed herein that the steps of FIG. 5 are performed by a user computer.

The approach depicted in FIG. 5 allows conducting a plurality of experiments using a plurality of sensor devices and having a plurality of corresponding objects driving along their corresponding tracks. The experiments may be executed simultaneously, sequentially, or according to any time schedule. The results of the experiments may be collected and visualized graphically to allow the users to visually compare the experiments' results.

In step 502, a user computer transmits, to a plurality of sensor devices, a plurality of instruction sets for performing a plurality of experiments by the plurality of sensor devices integrated in a plurality of objects. In some embodiments, transmitting the plurality of instruction sets to the plurality of sensor devices causes each of the plurality of sensor devices to execute an instruction set, of the plurality of instruction sets, to cause a corresponding object, associated with the sensor device, to drive along a corresponding track. In some embodiments, executing the plurality of instruction sets by the plurality of sensor devices causes the plurality of sensor devices to perform racing experiments in which corresponding objects, of the plurality of objects, race each other along their corresponding tracks.

The plurality of experiments may include a variety of experiments, including an experiment for determining motion of a physical object as the physical object drives along a physical track, an experiment for interpreting digital data collected as a physical object drives along a physical track, an experiment for developing a physical model of motion of a physical object as the physical object drives along a physical track, and an experiment for researching kinetic energy associated with driving a physical object along a physical track.
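A minimal sketch of the overall FIG. 5 fan-out pattern (steps 502-512, described above and below) follows; the device interface is hypothetical, and the source does not specify how instruction sets are delivered concurrently:

```python
# Hypothetical sketch: broadcast instruction sets to several sensor devices,
# then collect motion values once every experiment has terminated.
from concurrent.futures import ThreadPoolExecutor

def run_one(device, instruction_set):
    device.send_instruction_set(instruction_set)   # step 502
    device.wait_until_terminated()                 # polled in step 510
    return device.collect_motion_values()          # gathered in step 512

def run_multiple_objects_experiment(devices, instruction_sets):
    with ThreadPoolExecutor(max_workers=len(devices)) as pool:
        results = list(pool.map(run_one, devices, instruction_sets))
    return results  # forwarded to user computers for graphical display
```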
As the corresponding object drives along the corresponding track, the corresponding sensor device collects corresponding motion values associated with movements of the corresponding object as the corresponding object drives along the corresponding track, and the corresponding transceiver transmits the corresponding motion values to the user computer, or to a plurality of user devices. Transmitting the corresponding motion values to the user computer causes the user computer to generate a corresponding graphical representation of the corresponding motion values and display the graphical representation on a display device of the user computer.

In step 510, the user computer tests whether all experiments (or a group of experiments from the plurality of experiments) have been finished, stopped, or otherwise terminated. If so, then the user computer proceeds to perform step 512. Otherwise, the user computer continues testing in step 510.

In step 512, the user computer collects the motion values from all experiments (or the group of experiments) and transmits the motion values to one or more user computers to cause the user computers to generate graphical representations of the motion values and to display the graphical representations on display devices of the user computers.

7. Example Process for Conducting a Parametric Experiment

FIG. 6 is a flow diagram showing an example process for conducting a parametric experiment. The example process depicted in FIG. 6 may be executed by any type of sensor device, including sensor device 100 depicted in FIGS. 1 and 2A.

In step 602, a sensor device receives, using a wireless network transceiver, one or more experiment instructions for performing an experiment. Examples of experiments used in these embodiments include parametric experiments in which a user (e.g., a student or a teacher) can provide one or more parameter values, and therefore, can model the outcome of the experiments.

In step 603, the sensor device receives, using the wireless network transceiver, one or more parameter values for the experiment instructions for performing the experiment involving driving the object along the track. The parameter values may include values for parameters such as speed, g-force, acceleration, length of the track, and the like.

In step 604, the sensor device generates, based on the one or more experiment instructions and the one or more parameter values, one or more parametric instructions for causing the object to drive along the track. The parametric instructions may, for example, include the parameter values provided by the user for the purpose of modeling the outcome of the experiment.

In step 606, the sensor device executes the one or more parametric instructions to cause the object to drive along the track. As the object is driving along the track, the sensor device receives, in step 608, from the one or more sensors, new motion values associated with the movements of the object as the object is driving along the track.

Once the object finishes driving along the track, the sensor device determines, in step 610, whether the new motion values are as expected. This may be determined by, for example, downloading from storage 150 (shown in FIG. 1) the expected motion values precomputed for the parameter values provided by the user and using the experiment instructions received using the transceiver, and comparing the downloaded expected motion values with the new motion values collected by the sensor device as the parametric experiment was conducted.
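The comparison in step 610 admits an exact match or a match within an error margin, as noted below. A minimal sketch of such a check follows; the 5% tolerance is an invented example, since the source does not specify a margin:

```python
# Hypothetical sketch of the step 610 check: do the collected motion values
# match the precomputed expected values within a tolerance?

def values_match(expected, measured, rel_tol=0.05):
    if len(expected) != len(measured):
        return False
    for e, m in zip(expected, measured):
        # Treat values within 5% (illustrative margin) of expected as a match.
        if abs(m - e) > rel_tol * abs(e):
            return False
    return True

expected = [0.0, 0.8, 1.5, 2.1]    # precomputed for the user's parameters
measured = [0.0, 0.82, 1.46, 2.15]
print(values_match(expected, measured))  # True
```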
If the new motion values match the expected motion values (either exactly or within a certain error margin), then the sensor device proceeds to perform step 612. In step 612, the sensor device transmits, using the wireless network transceiver, the new motion values associated with the movements of the object to one or more user devices so that a user device, from the one or more user devices, can generate a new graphical representation of the motion values and display the new graphical representation on the display device of the user device.

However, in response to determining that the new motion values do not match the expected motion values, the sensor device may transmit, using the wireless network transceiver, a message indicating that the new motion values are not as expected. Furthermore, the sensor device may proceed to request one or more updated parameter values and repeat the process from step 603.

8. Improvements Provided by Certain Embodiments

In some embodiments, the present computer-and-sensor based platform provides many benefits to students and teachers. The platform provides a tool that allows the students to increase their proficiency in, and master, the physics-related topics taught in SEP 1, 2, 3, and 4 courses offered as part of the middle-school physical science teaching units. The platform, when used by the students to practice the hands-on activities, allows the students to study the science concepts interactively and in groups. The achievable outcomes are equally relevant in elementary and high school grades. Through its interactivity, the platform allows the students to ask questions related to science phenomena and gain the knowledge to answer those questions themselves.

9. Implementation Mechanisms

Although the flow diagrams of the present application depict a particular set of steps in a particular order, other implementations may use fewer or more steps, in the same or different order, than those depicted in the figures.

According to one embodiment, the techniques described herein are implemented by one or more special-purpose computing devices. The special-purpose computing devices may be hard-wired to perform the techniques, may include digital electronic devices such as one or more application-specific integrated circuits (ASICs) or field programmable gate arrays (FPGAs) that are persistently programmed to perform the techniques, or may include one or more general purpose hardware processors programmed to perform the techniques pursuant to program instructions in firmware, memory, other storage, or a combination. Such special-purpose computing devices may also combine custom hard-wired logic, ASICs, or FPGAs with custom programming to accomplish the techniques. The special-purpose computing devices may be desktop computer systems, portable computer systems, handheld devices, networking devices, or any other device that incorporates hard-wired and/or program logic to implement the techniques.

FIG. 7 is a block diagram that depicts an example computer system 700 upon which embodiments may be implemented. Computer system 700 includes a bus 702 or other communication mechanism for communicating information, and a processor 704 coupled with bus 702 for processing information. Computer system 700 also includes a main memory 706, such as a random-access memory (RAM) or other dynamic storage device, coupled to bus 702 for storing information and instructions to be executed by processor 704.
Main memory 706 also may be used for storing temporary variables or other intermediate information during execution of instructions to be executed by processor 704. Computer system 700 further includes a read only memory (ROM) 708 or other static storage device coupled to bus 702 for storing static information and instructions for processor 704. A storage device 710, such as a magnetic disk or optical disk, is provided and coupled to bus 702 for storing information and instructions.

Computer system 700 may be coupled via bus 702 to a display 712, such as a cathode ray tube (CRT), for displaying information to a computer user. Although bus 702 is illustrated as a single bus, bus 702 may comprise one or more buses. For example, bus 702 may include without limitation a control bus by which processor 704 controls other devices within computer system 700, an address bus by which processor 704 specifies memory locations of instructions for execution, or any other type of bus for transferring data or signals between components of computer system 700.

An input device 714, including alphanumeric and other keys, is coupled to bus 702 for communicating information and command selections to processor 704. Another type of user input device is cursor control 716, such as a mouse, a trackball, or cursor direction keys for communicating direction information and command selections to processor 704 and for controlling cursor movement on display 712. This input device typically has two degrees of freedom in two axes, a first axis (e.g., x) and a second axis (e.g., y), that allow the device to specify positions in a plane.

Computer system 700 may implement the techniques described herein using customized hard-wired logic, one or more ASICs or FPGAs, firmware and/or program logic or computer software which, in combination with the computer system, causes or programs computer system 700 to be a special-purpose machine. According to one embodiment, those techniques are performed by computer system 700 in response to processor 704 executing one or more sequences of one or more instructions contained in main memory 706. Such instructions may be read into main memory 706 from another computer-readable medium, such as storage device 710. Execution of the sequences of instructions contained in main memory 706 causes processor 704 to perform the process steps described herein. In alternative embodiments, hard-wired circuitry may be used in place of or in combination with software instructions to implement the embodiments. Thus, embodiments are not limited to any specific combination of hardware circuitry and software.

The term "computer-readable medium" as used herein refers to any medium that participates in providing data that causes a computer to operate in a specific manner. In an embodiment implemented using computer system 700, various computer-readable media are involved, for example, in providing instructions to processor 704 for execution. Such a medium may take many forms, including but not limited to, non-volatile media and volatile media. Non-volatile media includes, for example, optical or magnetic disks, such as storage device 710. Volatile media includes dynamic memory, such as main memory 706. Common forms of computer-readable media include, for example, a floppy disk, a flexible disk, a hard disk, magnetic tape, or any other magnetic medium, a CD-ROM, any other optical medium, a RAM, a PROM, an EPROM, a FLASH-EPROM, any other memory chip or memory cartridge, or any other medium from which a computer can read.
Various forms of computer-readable media may be involved in carrying one or more sequences of one or more instructions to processor 704 for execution. For example, the instructions may initially be carried on a magnetic disk of a remote computer. The remote computer can load the instructions into its dynamic memory and send the instructions over a telephone line using a modem. A modem local to computer system 700 can receive the data on the telephone line and use an infra-red transmitter to convert the data to an infra-red signal. An infra-red detector can receive the data carried in the infra-red signal, and appropriate circuitry can place the data on bus 702. Bus 702 carries the data to main memory 706, from which processor 704 retrieves and executes the instructions. The instructions received by main memory 706 may optionally be stored on storage device 710 either before or after execution by processor 704.

Computer system 700 also includes a communication interface 718 coupled to bus 702. Communication interface 718 provides a two-way data communication coupling to a network link 720 that is connected to a local network 722. For example, communication interface 718 may be an integrated services digital network (ISDN) card or a modem to provide a data communication connection to a corresponding type of telephone line. As another example, communication interface 718 may be a local area network (LAN) card to provide a data communication connection to a compatible LAN. Wireless links may also be implemented. In any such implementation, communication interface 718 sends and receives electrical, electromagnetic, or optical signals that carry digital data streams representing various types of information.

Network link 720 typically provides data communication through one or more networks to other data devices. For example, network link 720 may provide a connection through local network 722 to a host computer 724 or to data equipment operated by an Internet Service Provider (ISP) 726. ISP 726 in turn provides data communication services through the world-wide packet data communication network now commonly referred to as the "Internet" 728. Local network 722 and Internet 728 both use electrical, electromagnetic, or optical signals that carry digital data streams.

Computer system 700 can send messages and receive data, including program code, through the network(s), network link 720, and communication interface 718. In the Internet example, a server 730 might transmit a requested code for an application program through Internet 728, ISP 726, local network 722, and communication interface 718. The received code may be executed by processor 704 as it is received, and/or stored in storage device 710 or other non-volatile storage for later execution.

In the foregoing specification, embodiments have been described with reference to numerous specific details that may vary from implementation to implementation. Thus, the sole and exclusive indicator of what is, and is intended by the applicants to be, the approach is the set of claims that issue from this application, in the specific form in which such claims issue, including any subsequent correction. Hence, no limitation, element, property, feature, advantage, or attribute that is not expressly recited in a claim should limit the scope of such claim in any way. The specification and drawings are, accordingly, to be regarded in an illustrative rather than a restrictive sense. | 68,406 |
11862039 | In the figures: 1: bottom plate; 2: curved table; 3: motor; 4: transmission shaft; 5: diverter; 6: screw rod; 7: screw rod supporting column; 8: sliding guide rail; 9: fixed baffle plate; 10: sliding block; 11: detachable baffle plate; 12: hinge; and 13: swing baffle plate.

DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENT

The present invention is further described in detail with the accompanying drawings and the preferred embodiment.

As shown in FIG. 1, according to the preferred embodiment of the present invention, a device for an analogue modeling experiment of a geological structure under a hypergravity field of a large-scale centrifuge comprises a bottom plate 1, a curved table 2, a power part, and a baffle plate part.

As shown in FIG. 1, the bottom plate 1 is arranged on a basket of the centrifuge; two sides of the bottom plate 1 are both mounted with a screw rod component; each screw rod component comprises a screw rod 6, a sliding guide rail 8, a diverter 5, and a screw rod supporting column 7; the screw rod 6 is arranged in parallel with the bottom plate 1; two ends of the screw rod 6 are respectively supported and connected between the diverter 5 and the screw rod supporting column 7; the diverter 5 and the screw rod supporting column 7 are fixed on the bottom plate 1; the sliding guide rail 8 is fixed on the bottom plate 1 below the screw rod 6; a motor 3 is fixed on the bottom plate 1 between same ends of the two screw rods 6; two ends of the motor 3 are symmetrically equipped with output shafts; the output shafts at the two ends of the motor 3, through respective transmission shafts 4, are connected to one end of the two diverters 5 of the two screw rod components; the other end of each diverter 5 is connected with one end of the screw rod 6; the two ends of each diverter 5 are respectively located at two vertical sides; on each diverter 5, a direction of the transmission shaft 4 is perpendicular to a direction of the screw rod 6; the other end of the screw rod 6 is connected with the screw rod supporting column 7; the screw rod supporting column 7 is for fixing and supporting the screw rod 6; the motor 3, the transmission shafts 4, and the screw rod components constitute the power part, and the motor 3 of the power part is a power source.

As shown in FIG. 2, a fixed baffle plate 9 is connected between the two screw rods 6 of the two screw rod components; the fixed baffle plate 9 is parallel to the output shafts of the motor 3 and the transmission shafts 4; two ends of the fixed baffle plate 9 are respectively connected with the two screw rods 6 through threaded connection, and bottoms of the two ends of the fixed baffle plate 9 are embedded with the two sliding guide rails 8; a detachable baffle plate 11 is mounted at a lower part of the fixed baffle plate 9; a lower part of the detachable baffle plate 11 is connected with a swing baffle plate 13 through a hinge 12; the fixed baffle plate 9, the detachable baffle plate 11, the hinge 12, and the swing baffle plate 13 constitute the baffle plate part.
The curved table 2 is fixed on a middle of the bottom plate 1; an upper surface of the curved table 2 is an arc-shaped cylindrical surface; a tangential direction of the arc-shaped cylindrical surface is parallel to the two screw rods 6, and a generatrix direction (axial direction) of the arc-shaped cylindrical surface is parallel to the fixed baffle plate 9; a hinged shaft between the basket of the centrifuge and the bottom plate 1 is parallel to the bottom plate 1 but perpendicular to the axial direction of the arc-shaped cylindrical surface, so that motion tracks on the arc-shaped cylindrical surface of the curved table 2 are all on a same cylindrical surface with a rotation shaft of the centrifuge as a center axis when the centrifuge rotates; when the centrifuge rotates, the center axis of the arc-shaped cylindrical surface is overlapped with the rotation shaft of the centrifuge.

As shown in FIG. 3, the swing baffle plate 13 is made of flexible material and contacts the arc-shaped cylindrical surface of the curved table 2.

A curvature radius of the arc-shaped cylindrical surface is equal to a distance from the rotation shaft to a bottom surface of the bottom plate 1 when the centrifuge rotates (namely, an effective radius of the large-scale centrifuge) after subtracting a thickness of the bottom plate 1 and a central thickness of the curved table 2. Therefore, when the large-scale centrifuge works, it is guaranteed that the upper surface of the curved table 2 completely fits the equipotential surface of centrifugal force. As shown in FIG. 3, flowing for non-experimental reasons, which would be caused by the experimental materials (especially fluid materials) placed on the curved table 2 not lying in the same equipotential surface of centrifugal force, is thereby avoided.
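Stated compactly (with hypothetical symbols, since the source gives this relation only in words): if R_e denotes the effective radius of the large-scale centrifuge, t_1 the thickness of the bottom plate 1, and t_2 the central thickness of the curved table 2, then the curvature radius of the arc-shaped cylindrical surface is

```latex
\[
R = R_e - t_1 - t_2 .
\]
```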
In order to ensure strength, the bottom plate 1 is made of steel material. As shown in FIG. 1, the curved table 2 is an experimental operation table; experimental materials and models are placed on the curved table 2; in order to ensure strength and quality, the curved table 2 is made of aluminum alloy material.

As shown in FIG. 1, the diverter 5 diverts a transmission direction of the transmission shaft 4 and transmits it to the screw rod 6. Because the fixed baffle plate 9 is coupled with a pair of screw rods 6 through screw nuts, when the screw rods 6 rotate, the fixed baffle plate 9 is driven to translate along the screw rods 6 through the screw nuts. As shown in FIG. 1 and FIG. 2, the screw nuts are respectively embedded in the two ends of the fixed baffle plate 9, so that the two ends of the fixed baffle plate 9 are coupled with the two screw rods 6. The bottoms of the two ends of the fixed baffle plate 9 are fixed with two sliding blocks 10, and the sliding blocks 10 are respectively embedded with the two sliding guide rails 8, so that the sliding blocks 10 can not only support the fixed baffle plate 9 but also connect the fixed baffle plate 9 to the sliding guide rails 8, guaranteeing that the fixed baffle plate 9 can translate along the screw rods 6. The detachable baffle plate 11 is fixed at a middle of the fixed baffle plate 9 through a screw, and the lower part of the detachable baffle plate 11 is connected with the swing baffle plate 13 through the hinge 12, so as to guarantee that the swing baffle plate 13 can flip and swing with a rotation shaft of the hinge 12 as the center.

As shown in FIG. 1 and FIG. 3, a lower part of the swing baffle plate 13 contacts the upper surface of the curved table 2; when the fixed baffle plate 9 translates, because the swing baffle plate 13 can swing upward and downward along the hinge 12, under the effect of gravity, no matter to which position the fixed baffle plate 9 translates, the swing baffle plate 13 can always keep the contact relationship with the upper surface of the curved table 2.

During implementation, the fixed baffle plate 9 is translated to an ideal position, and then the experimental models and the required experimental materials are placed on the curved table 2. The whole device is hoisted and loaded into the basket of the large-scale centrifuge with the hoist, and the necessary lines are connected, so that the preparation work is completed. The large-scale centrifuge is started; after the centrifuge is accelerated to a preset velocity, the motor 3 is started according to experimental requirements and drives the fixed baffle plate 9 to translate a specific distance at a specific velocity, and the swing baffle plate 13 moves the same distance at the same velocity and facilitates the experimental models and materials on the upper surface of the curved table 2 to deform, so as to generate the experimental phenomenon. When the experiment ends, the whole device is unloaded from the basket.

During experimentation, data processing of the hypergravity geological structure analogue modeling experiment comprises steps of:

(1) conducting two-dimensional shooting and three-dimensional elevation scanning with the specially designed hypergravity geological structure analogue modeling experimental device having the curved model surface, and collecting initial elevation data and initial velocity field data; wherein: during implementation, the deformation materials generally adopt experimental materials having different deformation characteristics, such as quartz sand, micro glass beads, and silica gel; when the centrifuge rotates, the center axis of the arc-shaped cylindrical surface is overlapped with the rotation shaft of the centrifuge; and

(2) correcting the initial elevation data and the initial velocity field data, and obtaining corrected elevation data and velocity field data.

In the step (2), for the initial elevation data collected by the hypergravity geological structure analogue modeling experiment, a three-dimensional coordinate system is established; each elevation point in the initial elevation data has initial two-dimensional plane coordinates and an initial three-dimensional elevation, and the elevation points are position points in the elevation data. Correction of each elevation point is described as follows.
Processing of the elevation data means the processing of the two-dimensional plane coordinates and the three-dimensional elevation of all the elevation points, comprising four steps of:

(a1) according to the initial two-dimensional plane coordinates and the initial three-dimensional elevation of each elevation point, calculating a plane coordinate deviation of each elevation point caused by undulation of the curved model surface;

(a2) according to the initial two-dimensional plane coordinates and the plane coordinate deviation obtained through the step (a1) of each elevation point, calculating two-dimensional plane coordinates of an orthographic point corresponding to each elevation point, so as to realize orthographic correction of the two-dimensional plane coordinates of each elevation point;

(a3) according to the two-dimensional plane coordinates of the corresponding orthographic point obtained through the step (a2) and the known surface arc equation and arc length formula of the upper surface of the curved table, calculating corrected two-dimensional plane coordinates and an elevation projection difference of each elevation point, so as to realize projection transformation of the two-dimensional plane coordinates of each elevation point;

(a4) according to the initial three-dimensional elevation and the elevation projection difference calculated through the step (a3) of each elevation point, calculating a corrected three-dimensional elevation of each elevation point, so as to realize projection transformation of the three-dimensional elevation of each elevation point; and

finally, integrating the corrected two-dimensional plane coordinates and three-dimensional elevation of each elevation point into the corrected elevation data of each elevation point.

The corrected two-dimensional plane coordinates and three-dimensional elevation of each elevation point are calculated through formulas of:

$$
\begin{cases}
x_f = R \arcsin\dfrac{Q(x,z)}{R} \\[4pt]
y_f = y \\[4pt]
z_f = z - R + \sqrt{R^2 - \left[Q(x,z)\right]^2}
\end{cases}
\qquad
Q(x,z) = \frac{x^3 + x(R-z)\sqrt{R^2 - x^2}}{x^2 + (R-z)^2}
$$

wherein: x and y represent the initial two-dimensional plane coordinates of the elevation point; z represents the initial three-dimensional elevation of the elevation point; x_f and y_f represent the corrected two-dimensional plane coordinates of the elevation point; z_f represents the corrected three-dimensional elevation of the elevation point; Q(x,z) represents the X coordinate of the orthographic point corresponding to the elevation point represented with x and z; and R represents the curvature radius of the arc-shaped cylindrical surface of the curved table.
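As a consistency check of these formulas (an observation added here for clarity, not stated in the source): for a point lying on the arc surface itself, where $z = R - \sqrt{R^2 - x^2}$, one has $R - z = \sqrt{R^2 - x^2}$, so that

```latex
\[
Q(x,z) = \frac{x^3 + x\,(R^2 - x^2)}{x^2 + (R^2 - x^2)} = \frac{x R^2}{R^2} = x ,
\qquad
z_f = z - R + \sqrt{R^2 - x^2} = 0 ,
\]
% i.e., surface points acquire zero corrected elevation, and
% x_f = R*arcsin(x/R) is simply the arc length along the surface.
```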
In the step (2), for the initial velocity field data collected by the hypergravity geological structure analogue modeling experiment, a two-dimensional coordinate system is established, and two-dimensional plane coordinates are given to each feature point; according to the calculation principle of PIV (Particle Image Velocimetry), it can be known that the velocity field is obtained through dividing the relative displacement of the corresponding feature points in two photos having a certain time interval by the time interval; each feature point in the initial velocity field data has two-dimensional plane coordinates of a start point where a time step begins and a displacement distance from the start point to the end point within the time step; the feature points are position points in the initial velocity field data.

Correction of each feature point is described as follows, and the whole correction process comprises five steps of:

(b1) according to the two-dimensional plane coordinates of the start point and the displacement distance of each feature point, calculating two-dimensional plane coordinates of the end point of each feature point;

(b2) according to the two-dimensional plane coordinates of the start point, the two-dimensional plane coordinates of the end point, and the three-dimensional elevation of the start point and the end point of each feature point, respectively calculating plane coordinate deviations of the start point and the end point caused by the undulation of the curved model surface;

(b3) according to the two-dimensional plane coordinates of the start point and the end point, and the respective plane coordinate deviations of the start point and the end point calculated through the step (b2), calculating two-dimensional plane coordinates of orthographic points respectively corresponding to the start point and the end point, so as to realize orthographic correction of the two-dimensional plane coordinates of the start point and the end point;

(b4) according to the two-dimensional plane coordinates of the orthographic points respectively corresponding to the start point and the end point, calculated through the step (b3), and the known surface arc equation and arc length formula of the upper surface of the curved table, respectively calculating corrected two-dimensional plane coordinates of the start point and the end point, so as to realize projection transformation of the two-dimensional plane coordinates of the start point and the end point;

(b5) according to the corrected two-dimensional plane coordinates of the start point and the end point, calculating a corrected displacement distance of each feature point; and

finally, integrating the corrected two-dimensional plane coordinates of the start point and the corrected displacement distance of each feature point into the corrected velocity field data of each feature point.

In the step (b2), the three-dimensional elevations of the start point and the end point both adopt the initial three-dimensional elevations of the two points in the elevation data.

The corrected two-dimensional plane coordinates and the displacement distances along the two directions of the two-dimensional plane coordinates of each feature point are calculated through formulas of:

$$
\begin{cases}
x_f = R \arcsin\dfrac{Q(x,z)}{R} \\[4pt]
y_f = y \\[4pt]
dx_f = R\left[\arcsin\dfrac{Q(x+dx,\,z')}{R} - \arcsin\dfrac{Q(x,z)}{R}\right] \\[4pt]
dy_f = dy
\end{cases}
$$

$$
Q(x,z) = \frac{x^3 + x(R-z)\sqrt{R^2 - x^2}}{x^2 + (R-z)^2}
\qquad
Q(x+dx,\,z') = \frac{(x+dx)^3 + (x+dx)(R-z')\sqrt{R^2 - (x+dx)^2}}{(x+dx)^2 + (R-z')^2}
$$

wherein: x and y represent the initial two-dimensional plane coordinates of the feature point; z represents the initial three-dimensional elevation of the feature point; z′ represents the initial three-dimensional elevation of the end point of the feature point; x_f and y_f represent the corrected two-dimensional plane coordinates of the feature point; dx_f and dy_f represent the corrected displacement distances along the two directions of the two-dimensional plane coordinates of the feature point; Q(x,z) represents the X coordinate of the orthographic point corresponding to the feature point represented with x and z; Q(x+dx, z′) represents the X coordinate of the orthographic point corresponding to the end point, represented with x+dx and z′, of the feature point; and R represents the curvature radius of the arc-shaped cylindrical surface of the curved table.
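A minimal numerical sketch of these correction formulas follows (an illustrative implementation, not code from the source; variable names mirror the symbols above, and the numbers in the example are invented):

```python
# Sketch of the elevation and velocity-field corrections defined above.
import math

def Q(x, z, R):
    # X coordinate of the orthographic point for a point (x, z).
    return (x**3 + x * (R - z) * math.sqrt(R**2 - x**2)) / (x**2 + (R - z)**2)

def correct_elevation_point(x, y, z, R):
    q = Q(x, z, R)
    xf = R * math.asin(q / R)
    zf = z - R + math.sqrt(R**2 - q**2)
    return xf, y, zf

def correct_feature_point(x, y, z, dx, dy, z_end, R):
    q0, q1 = Q(x, z, R), Q(x + dx, z_end, R)
    xf = R * math.asin(q0 / R)
    dxf = R * (math.asin(q1 / R) - math.asin(q0 / R))
    return xf, y, dxf, dy

# Example with illustrative numbers: for R = 4.5 and a point at x = 0.3 lying
# on the arc surface (z = R - sqrt(R^2 - x^2)), the corrected elevation is ~0.
R = 4.5
x = 0.3
z = R - math.sqrt(R**2 - x**2)
print(correct_elevation_point(x, 0.0, z, R))  # z_f is approximately 0.0
```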
It can be seen that the present invention not only provides a uniform hypergravity field for the experimental models and materials, but also greatly expands the experimental model scale and improves the model resolution ratio. Cooperating with the advantage of the relatively large basket space of the large-scale centrifuge, convenience is provided for the real-time collection of the experimental data. Thus, on the basis of realizing the analogue modeling experiment of the geological structure with the large-scale centrifuge, the present invention also has the above technical advantages, having obvious technical effects. | 16,492 |
11862040 | It should be appreciated by those skilled in the art that any block diagrams herein represent conceptual views of illustrative systems and devices embodying the principles of the present subject matter. Similarly, it will be appreciated that any flow charts, flow diagrams, and the like represent various processes which may be substantially represented in computer readable medium and so executed by a computer or processor, whether or not such computer or processor is explicitly shown.

DETAILED DESCRIPTION

Exemplary embodiments are described with reference to the accompanying drawings. In the figures, the left-most digit(s) of a reference number identifies the figure in which the reference number first appears. Wherever convenient, the same reference numbers are used throughout the drawings to refer to the same or like parts. While examples and features of disclosed principles are described herein, modifications, adaptations, and other implementations are possible without departing from the scope of the disclosed embodiments.

Gamification in learning, as an approach to education, intends to motivate users into learning through game elements in a learning environment. Gamified learning enables maximizing enjoyment and engagement and capturing learners' interest, thus inspiring them for further learning. Traditionally, gamified learning-based systems provide courseware along with online tutoring and allow users to create and manage learning contents. However, conventional game development systems fail to provide a single, browser-based, scalable, one-stop knowledge platform offering a high degree of customization and leveraging emerging technologies for rapid end-to-end game or multi-modal learning and assessment solutions development.

Embodiments of the present disclosure provide methods and systems for providing a browser-based customized game-based multi-modal learning and assessment framework. The present disclosure provides a mechanism to configure custom content into a readymade learning and assessment platform. The framework proposed in the present disclosure is custom built to support the scalable architecture of a modular digital learning platform and its functionalities. Building blocks of the framework expand the capabilities of the modular digital learning platform while also keeping proprietary learning and assessment components within user-generated custom content. The system of the present disclosure allows content creators to author custom content to create an immersive learning experience that could range from spatial two-dimensional (2D) as well as three-dimensional (3D) technologies, Augmented Reality (AR) and Virtual Reality (VR), to games, all in one place. Further, the framework, which features a robust learning management system (LMS) integration along with a vast content marketplace, enables rapid game authoring and reduces game development cycle time to a large extent while raising the bar for quality solutions. A multitude of content creators spread across different geographies get to pick and choose a suitable learning and assessment cartridge from a catalog of the digital learning platform of the present disclosure to quickly craft enhanced multimodal learning and assessment solutions for the modern learner through a simplified game development process.

The framework of the present disclosure is a multifaceted custom framework, supporting the creation of 2D games as well as paving the way for democratized, engaging, and immersive learning, tapping into the potential of AR, VR, and spatial 3D technologies. Each customizable, configurable learning and assessment solution offered under the framework of the present disclosure is pre-built with gameplay logic, player progression logic, a leaderboard, user experience (UX), feedback mechanisms, scoring logic, micro interactions, and LMS integration.
The framework of the present disclosure is a multifaceted custom framework, supporting creation of 2D games as well as paving way for democratized, engaging, and immersive learning, tapping into the potential of AR, VR and Spatial 3D technologies. Each customizable, configurable learning and assessment solution offered under the framework of the present disclosure is pre-built with gameplay logic, player progression logic, leaderboard, user experience (UX), feedback mechanisms, scoring logic, micro interactions and LMS integration. Referring now to the drawings, and more particularly toFIGS.1through3B, where similar reference characters denote corresponding features consistently throughout the figures, there are shown preferred embodiments and these embodiments are described in the context of the following exemplary system and/or method. FIG.1is a functional block diagram of a system for providing browser-based customized game-based multi-modal learning and assessment framework, in accordance with some embodiments of the present disclosure. In an embodiment, the system100includes a processor(s)104, communication interface device(s), alternatively referred as input/output (1/O) interface(s)106, and one or more data storage devices or a memory102operatively coupled to the processor(s)104. The system100with one or more hardware processors is configured to execute functions of one or more functional blocks of the system100. Referring to the components of system100, in an embodiment, the processor(s)104, can be one or more hardware processors104. In an embodiment, the one or more hardware processors104can be implemented as one or more microprocessors, microcomputers, microcontrollers, digital signal processors, central processing units, state machines, logic circuitries, and/or any devices that manipulate signals based on operational instructions. Among other capabilities, the one or more hardware processors104are configured to fetch and execute computer-readable instructions stored in the memory102. In an embodiment, the system100can be implemented in a variety of computing systems including laptop computers, notebooks, hand-held devices such as mobile phones, workstations, mainframe computers, servers, and the like. The I/O interface(s)106can include a variety of software and hardware interfaces, for example, a web interface, a graphical user interface and the like and can facilitate multiple communications within a wide variety of networks N/W and protocol types, including wired networks, for example, LAN, cable, etc., and wireless networks, such as WLAN, cellular and the like. In an embodiment, the I/O interface (s)106can include one or more ports for connecting to a number of external devices or to another server or devices. The memory102may include any computer-readable medium known in the art including, for example, volatile memory, such as static random access memory (SRAM) and dynamic random access memory (DRAM), and/or non-volatile memory, such as read only memory (ROM), erasable programmable ROM, flash memories, hard disks, optical disks, and magnetic tapes. Further, the memory102includes a plurality of modules110(not shown) required for execution of functions of system100. Furthermore, the memory102includes a database108that stores a plurality of game file templates and corresponding metadata, multimodal learning and assessment content features, obtained gamified content, sharable content object reference model, and/or the like. 
Further, the memory 102 may comprise information pertaining to input(s)/output(s) of each step performed by the processor(s) 104 of the system 100 and methods of the present disclosure. In an embodiment, the database 108 may be external (not shown) to the system 100 and coupled to the system via the I/O interface 106. In an embodiment, the system 100 operates in conjunction with a configurable gamification application 112. Functions of the components of the system 100 are explained in conjunction with the flow diagram of FIG. 2, the architectural overview of the gamification application in FIG. 3A, and the architectural diagram of the game file template in FIG. 3B.

FIG. 2 is a flow diagram illustrating the method 200 for providing a browser-based customized game-based multimodal learning and assessment framework, using the system of FIG. 1, in accordance with some embodiments of the present disclosure. In an embodiment, the system 100 comprises one or more data storage devices or the memory 102 operatively coupled to the processor(s) 104 and is configured to store instructions for execution of steps of the method 200 by the processor(s) or one or more hardware processors 104. The steps of the method 200 of the present disclosure will now be explained with reference to the components or blocks of the system 100 as depicted in FIG. 1, the steps of the flow diagram as depicted in FIG. 2, the architectural overview in FIG. 3A, and the architectural diagram of the game file template in FIG. 3B. Although process steps, method steps, techniques, or the like may be described in a sequential order, such processes, methods, and techniques may be suitably configured to work in alternate orders. In other words, any sequence or order of steps that may be described does not necessarily indicate a requirement that the steps be performed in that order. The steps of processes described herein may be performed in any order. Further, some steps may be performed simultaneously.

Referring to the steps of the method 200 in FIG. 2, at step 202, the one or more hardware processors 104 are configured to receive a plurality of game file templates and corresponding metadata from a dynamically updated database, as input to a game instantiation component of a configurable gamification application. In an embodiment, the gamification application refers to any scenario or situation in which it is desirable to apply gamification concepts. The configurable gamification application is designed based on a concept which allows gamification of learning, including online course documents, audio, video, and 3D content, and assessment in a simple way for content creators.

FIG. 3A illustrates an architectural overview of the configurable gamification application for providing the browser-based customized game-based multimodal learning and assessment framework, in accordance with some embodiments of the present disclosure. As can be seen in FIG. 3A, the configurable gamification application runs on three main components, namely the game file templates (alternatively referred to as game files), the game instantiation component, and a sharable content object reference model (SCORM) compliant learning management system (LMS). The game file template is not a web application, but a file which comprises a pre-defined format. The game file template represents a ZIP file containing HTML content, videos, images, and JavaScript, bundled in a proprietary format, which is an extension of the SCORM format.
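For orientation, a game file template archive of the kind just described might expose entries like the following; the folder and file names track the modules detailed below, but the exact layout is an assumption, since the proprietary format is not published. The sketch is in TypeScript, matching the browser-based setting.

    // Hypothetical entry layout of a game file template ZIP, following the
    // modules described in this section (the exact paths are assumptions).
    const EXPECTED_ENTRIES = [
      "config.xml",                 // configuration module (root folder)
      "learning/learning.xml",      // learning module
      "assessment/assessment.xml",  // assessment module
      "conversation/dialogues.xml", // conversation module
      "media/",                     // media module (assessment, learning, internal folders)
      "index.html",                 // HTML content that the browser runs
    ];

    // Verify a template archive contains every expected entry, given the
    // archive's entry names (however they were obtained from the ZIP).
    function isWellFormedTemplate(entryNames: string[]): boolean {
      const names = new Set(entryNames);
      return EXPECTED_ENTRIES.every((e) => names.has(e));
    }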
The game file template contains the game logic to run the game and is comparable to an EXE (an executable file), but it gets executed only from within the SCORM compliant LMS. The game instantiation component represents software which may be configured to a) read the game file, and b) display its corresponding metadata. Here, the corresponding metadata may include, but is not limited to, a name, a description, preview images, and/or the like. Further, user inputs are received by the game instantiation component for learning content and assessment questions and answers. The learning content and assessment content are plugged into the game file template's provided placeholders to create a new packaged instance of the game file template. In an embodiment, the SCORM compliant LMS could be any LMS which knows how to play a SCORM file. Since the game is essentially a SCORM package, it can be played on any SCORM compliant LMS.

In an embodiment, the plurality of game file templates comprises information pertaining to one or more types of games, one or more types of gamified activities, and one or more types of simulations. In other words, the plurality of game file templates comprises ready-to-configure cartridges, which are categorized into 3 types, namely game-based cartridges, gamified activities-based cartridges, and simulation-based cartridges. All these cartridges are device and platform agnostic, and provide a learner/end user the ability to consume learning and assessment content within an internet browser. Each cartridge is uniquely created to address a specific learning/assessment need, such as teaching someone school mathematics, training someone on a safety drill, or a virtual corporate onboarding exercise, but all these cartridges are built using an underlying common framework whose modules work together to create the required learning objectives.

In an embodiment, the plurality of game file templates includes one or more game functionalities in a specific format. The one or more game functionalities may include, but may not be limited to, goals and rules, challenges, a progress tracker, a user interface, pedagogy, interactions, roleplay characters, a feedback and scoring mechanism, a decision tree, single/multiplayer modes, a leaderboard, rewards, and a fictional storyline. In an embodiment, the plurality of game file templates further comprises a plurality of modules that are stored in the dynamically updated database. The plurality of modules includes a configuration module (categorized as a root folder), a learning module, an assessment module, a conversation module, and a media module.

FIG. 3B illustrates an architectural diagram of a game file template showing the interaction of various modules of the system of FIG. 1 for providing the customized game-based learning and assessment framework, in accordance with some embodiments of the present disclosure. The configuration module is in XML format and contains informative data for a specific type of game solution. The informative data may include, but may not be limited to, the number of stages, the default language, whether a difficulty level option should be shown, whether a leaderboard should be shown, a header image, a banner image, at which stage learning data should be shown, and at which stage assessment should be shown. The game instantiation component of the configurable gamification application can read and also change data in this XML based on inputs received from a creator or an author of the gamification application regarding how the game should be run.
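A minimal TypeScript sketch of reading such configuration data in the browser follows; the element names are invented for illustration, since the disclosure fixes only the kind of information the configuration XML carries, not its schema.

    // Hypothetical view of the configuration module's informative data.
    interface GameConfig {
      stages: number;
      defaultLanguage: string;
      showDifficulty: boolean;
      showLeaderboard: boolean;
    }

    // Parse the configuration XML with the browser's standard DOMParser.
    function readConfig(xmlText: string): GameConfig {
      const doc = new DOMParser().parseFromString(xmlText, "application/xml");
      const get = (tag: string) =>
        doc.getElementsByTagName(tag)[0]?.textContent ?? "";
      return {
        stages: Number(get("stages")),
        defaultLanguage: get("defaultLanguage") || "en",
        showDifficulty: get("showDifficulty") === "true",
        showLeaderboard: get("showLeaderboard") === "true",
      };
    }

The instantiation component would write changes back the same way, serializing the modified document with the browser's XMLSerializer before repackaging.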
The learning module is in XML format and contains, in a fixed format, the relative paths of the learning documents, images, and videos, along with the textual content that the game has to show to the learner during various stages. As with the configuration module, the game instantiation component can read and also change data in this XML based on inputs received from the creator or the author. The author can provide their own text/videos to go with the game in the game instantiation component. The assessment module is in XML format and contains, in a fixed format, assessment data such as questions, their answers (options for MCQ and MSQ, text for textual answers), whether to group questions in the game, and/or the like. Once again, in the game instantiation component, the author can provide their own questions and answers, and they are replaced in the XML by this component. So, the game instantiation component changes data in this XML while merging and creating gamified content. The conversation module is in XML format and contains the screen-wise, conversational dialogues of the game. These are provided language-wise so that appropriate dialogues are shown based on the user-selected language. The media module contains an assessment folder that stores all the files (images, videos) referenced in the assessment module, a learning folder that stores all files (PDFs, images, videos) referenced in the learning module, and an internal media folder storing other media files of the game.

Further, at step 204 of FIG. 2, the one or more hardware processors 104 are configured to obtain one or more customized learning and assessment content features from one or more domain experts of the configurable gamification application. In an embodiment, the one or more domain experts could be the author or creator of the gamification application. In an embodiment, the one or more customized learning and assessment content features include relative paths of textual learning documents, images, and videos with textual content, questionnaires, spatial three-dimensional (3D) models, immersive augmented reality based features, experiential learning based features, and virtual reality based features.

In another embodiment, the present disclosure provides a high level of customization by way of the game-based cartridges, gamified activities-based cartridges, and simulation-based cartridges. In the game-based cartridges, learner-centric learning and assessment solutions are created by embedding them within a game, with a set storyline that has twists and turns and a final objective. Here, a learner earns rewards for every step of progress and has setbacks for poor performance. Learning and assessment are facilitated within such a game in configurable linear and non-linear randomized methods, which provides the learner a unique game play for every attempt using the in-built game play mechanics functions defined within the game logic and game play modules present within the game cartridge. This may include the learning and/or assessment constructed using multiple choice, multiple selection, comprehension, and true or false type questions that can be quickly integrated within a fictional game with multiple levels of achievements, where the learner is able to focus on the game play and the fictional objective defined within the game storyline, through various unique ways to progress the game and complete the game mission to satisfaction.
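The configurable linear and non-linear randomized ordering described for the game-based cartridges can be sketched with a standard Fisher-Yates shuffle, as below; the flag name is hypothetical.

    // Order assessment items linearly or in a fresh random order per attempt,
    // per the configurable linear / non-linear randomized methods above.
    function orderItems<T>(items: T[], nonLinearRandomized: boolean): T[] {
      const out = items.slice();
      if (!nonLinearRandomized) return out; // linear: keep the authored order
      // Fisher-Yates shuffle, giving a unique game play on every attempt.
      for (let i = out.length - 1; i > 0; i--) {
        const j = Math.floor(Math.random() * (i + 1));
        [out[i], out[j]] = [out[j], out[i]];
      }
      return out;
    }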
For example, a teacher wants to engage students in enjoyable learning/assessment when the goal is to give the students multiple ways to revise a subject/topic while keeping the students engaged throughout the session. A game-based solution functions to address this need while keeping the learning performance of the student at the center of the experience, enabling students to progress within the game while also making them familiar with the subject/topic they are learning or on which they are being self-assessed. Progress of the student is realized within a game user interface that is built into the game-based cartridge to share information such as player score, level progress, badges earned, a leaderboard, and/or the like.

In gamified activities-based cartridges, learning and assessment content of different types are integrated within a gamified setup complete with rules, score guidelines, and leaderboards for real-time updates. An assessment constructed using multiple choice, multiple selection, comprehension, short answer, and true or false questions, which are used to assess a student's ability to understand a specific subject or topic by attempting several sets of questions, generally becomes monotonous and laborious within a short while. However, when the same assessments are introduced using a gamification layer, significant progress is observed, since the gamified activities-based cartridge creates reward points and short achievement milestones, and the ability to quickly see self-progress in comparison to similar respondents enables learners to self-motivate and pursue further levels of knowledge. Further, when introduced along with the easy-to-use configuration methods built within the gamified activities-based cartridges, it becomes part of the configuration process of the gamified activities-based cartridge to facilitate users who have an inclination towards creating such assessment content without any pre-requisite. Here, the easy-to-use configuration methods built within the gamified activities-based cartridges may include enabling randomized assessment questions, adaptive question selection, a language selection option, a leaderboard, display of feedback, display of hints, and/or the like.

In the simulation-based cartridges, spatial learning and assessment solutions are created with use of an available rich media library, enabling exploratory and experiential learning that can be applied to safety training, virtual tours, and other meaningful use cases. A simulation-based cartridge enabled with learning/assessment content using any or all question types, such as multiple selection, multiple choice, true or false, comprehension, and/or the like, can be created for specific or generic simulated learning/assessment purposes. Such cartridges enable anyone to add a 3D model or a 360 image or video and quickly author them with learning content or assessment content, using hotspots that allow the learner to either find them, tag them with correct answers, or identify a part of a 3D model that is itself part of an assessment question. The simulation-based cartridge provides the ability to quickly create hotspot-based spatial assessment and learning, using uploaded media or links to existing online media such as a YouTube video, or an image from the public domain.

Referring back to FIG. 2, at step 206 of FIG. 2, the one or more hardware processors 104 are configured to select a game file template from the plurality of game file templates (as shown in the second row of FIG. 3A) based on one or more user requirements.
In other words, a cartridge from the ready-to-configure cartridges is selected to address the one or more user requirements or a specific learning/assessment need, such as teaching someone chemistry to prepare a chemical substance, a virtual corporate onboarding exercise, and/or the like. Further, at step 208 of FIG. 2, the one or more hardware processors 104 are configured to associate the one or more customized learning and assessment content features in a language of interest with the selected game file template to obtain gamified content (as shown in the second row of FIG. 3A). Here, the language of interest could be any language spoken and known by the author, creator, and/or end users. In other words, the selected cartridge gets instantiated while allowing the content creator to add relevant learning or assessment content into the cartridge.

Furthermore, at step 210 of FIG. 2, the one or more hardware processors 104 are configured to check compliance of the gamified content (as shown in the third row of FIG. 3A) with the sharable content object reference model (SCORM). When the gamified content is found to be SCORM compliant, at step 212 of FIG. 2, the one or more hardware processors 104 are configured to output the SCORM compliant gamified content on a display device to an end user. As shown in FIG. 3A, the gamified content is uploaded as a SCORM package to the SCORM compliant LMS and consumed by a learner when found SCORM compliant. In an embodiment, the SCORM compliant gamified content could be displayed as a leaderboard dashboard, rewards earned, and/or the like. In an embodiment, the display device could be a monitor, a TV screen, a display console, and/or the like. In other words, the learner gets access to the gamified content through the SCORM compliant LMS.

The written description describes the subject matter herein to enable any person skilled in the art to make and use the embodiments. The scope of the subject matter embodiments is defined by the claims and may include other modifications that occur to those skilled in the art. Such other modifications are intended to be within the scope of the claims if they have similar elements that do not differ from the literal language of the claims or if they include equivalent elements with insubstantial differences from the literal language of the claims.

The present disclosure addresses the unresolved problems of monopoly in content authoring and the creation of a dependency loop by providing a single, scalable, one-stop knowledge platform offering a high degree of customization and leveraging emerging technologies for rapid end-to-end game or learning solutions development. Embodiments of the present disclosure provide methods and systems for providing a customized game-based learning and assessment framework. The present disclosure provides a mechanism to configure custom content into a readymade learning and assessment platform. The framework proposed in the present disclosure is custom built to support the scalable architecture of a modular digital learning platform and its functionalities. Building blocks of the framework expand the capabilities of the modular digital learning platform while also keeping proprietary Learning and Assessment components within user-generated custom content. The system of the present disclosure allows content creators to author custom content to create an immersive learning experience that can range from spatial three-dimensional (3D) technologies to Augmented Reality (AR), Virtual Reality (VR), and games, all in one place.
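The compliance check of step 210 can be approximated by verifying the SCORM package descriptors before upload. Requiring an imsmanifest.xml at the package root reflects standard SCORM packaging; the remaining entry names are assumptions consistent with the earlier layout sketch.

    // Approximate SCORM-compliance gate for step 210: a SCORM package must
    // carry an imsmanifest.xml at its root describing the bundled content.
    function checkScormCompliance(entryNames: string[]): { compliant: boolean; reason?: string } {
      if (!entryNames.includes("imsmanifest.xml")) {
        return { compliant: false, reason: "missing imsmanifest.xml at package root" };
      }
      if (!entryNames.some((n) => n.endsWith(".html"))) {
        return { compliant: false, reason: "no launchable HTML content found" };
      }
      return { compliant: true };
    }

    // Example: gate the upload of gamified content to the SCORM compliant LMS.
    const verdict = checkScormCompliance(["imsmanifest.xml", "index.html", "media/"]);
    if (verdict.compliant) {
      // proceed to output/upload the gamified content (step 212)
    }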
It is to be understood that the scope of the protection is extended to such a program and in addition to a computer-readable means having a message therein; such computer-readable storage means contain program-code means for implementation of one or more steps of the method, when the program runs on a server or mobile device or any suitable programmable device. The hardware device can be any kind of device which can be programmed including e.g., any kind of computer like a server or a personal computer, or the like, or any combination thereof. The device may also include means which could be e.g., hardware means like e.g., an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA), or a combination of hardware and software means, e.g., an ASIC and an FPGA, or at least one microprocessor and at least one memory with software processing components located therein. Thus, the means can include both hardware means, and software means. The method embodiments described herein could be implemented in hardware and software. The device may also include software means. Alternatively, the embodiments may be implemented on different hardware devices, e.g., using a plurality of CPUs. The embodiments herein can comprise hardware and software elements. The embodiments that are implemented in software include but are not limited to, firmware, resident software, microcode, etc. The functions performed by various components described herein may be implemented in other components or combinations of other components. For the purposes of this description, a computer-usable or computer readable medium can be any apparatus that can comprise, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device. The illustrated steps are set out to explain the exemplary embodiments shown, and it should be anticipated that ongoing technological development will change the manner in which particular functions are performed. These examples are presented herein for purposes of illustration, and not limitation. Further, the boundaries of the functional building blocks have been arbitrarily defined herein for the convenience of the description. Alternative boundaries can be defined so long as the specified functions and relationships thereof are appropriately performed. Alternatives (including equivalents, extensions, variations, deviations, etc., of those described herein) will be apparent to persons skilled in the relevant art(s) based on the teachings contained herein. Such alternatives fall within the scope of the disclosed embodiments. Also, the words “comprising,” “having,” “containing,” and “including,” and other similar forms are intended to be equivalent in meaning and be open ended in that an item or items following any one of these words is not meant to be an exhaustive listing of such item or items, or meant to be limited to only the listed item or items. It must also be noted that as used herein and in the appended claims, the singular forms “a,” “an,” and “the” include plural references unless the context clearly dictates otherwise. Furthermore, one or more computer-readable storage media may be utilized in implementing embodiments consistent with the present disclosure. A computer-readable storage medium refers to any type of physical memory on which information or data readable by a processor may be stored. 
Thus, a computer-readable storage medium may store instructions for execution by one or more processors, including instructions for causing the processor(s) to perform steps or stages consistent with the embodiments described herein. The term “computer-readable medium” should be understood to include tangible items and exclude carrier waves and transient signals, i.e., be non-transitory. Examples include random access memory (RAM), read-only memory (ROM), volatile memory, nonvolatile memory, hard drives, CD ROMs, DVDs, flash drives, disks, and any other known physical storage media. It is intended that the disclosure and examples be considered as exemplary only, with a true scope of disclosed embodiments being indicated by the following claims. | 28,465 |
11862041 | DETAILED DESCRIPTION

The systems and methods of this technology are directed to system architecture, technical platforms (e.g., to facilitate and link educational activities involving input and output by different users), and methods configured to facilitate student growth and development by providing an integrated student-growth platform with technical tools for use in an online, cloud-based environment, for improving the educational process. This integrated student-growth platform provides a comprehensive view of student growth and mastery for educators, through its observational interface, accessible to users or clients. This integrated student-growth platform addresses the problems that educators have faced so far in accurately measuring and monitoring student growth and performance on a daily basis, as well as differentiating and personalizing instruction for each student. The integrated student-growth platform in accordance with the present invention permits educators to set goals and monitor student progress with greater efficiency. This integrated student-growth platform includes a workflow engine that allows educators (e.g., teachers) to manage and deliver all assignments, from assignment engines to subject practices, through a simple-to-use student inbox. This student-growth platform provides educators with a comprehensive view of student growth and mastery while giving them more time to focus on students. The student-growth platform fully integrates learning analytics to make decisions and lays the groundwork for increased interoperability with existing school systems and instructional partners.

FIG. 1A illustrates a general distributed environment (e.g., cloud-based), designated generally by reference numeral 100a, with users 114a, 114b, through 114n, using user/client devices 106a, 106b, through 106n, and interacting with an integrated student-learning-and-growth platform 118, via a network 102. Each of the user devices may have a user application 108a. User/client communications flow via lines 112a, 112b, through 112n, respectively, to the user devices 106a, 106b, through 106n, and through lines 104a, 104b, through 104n, to the network 102 and through line 116 to the student-learning-and-growth platform 118.

The integrated student-learning-growth platform 118 integrates functionalities of various platforms, including but not limited to, an assessment platform 220, a planning platform 222, a learning-progression platform 224, an assignment platform 226, a mastery-maker platform 228, a MIRT (multi-dimensional-response) platform 230, and a reporting platform 232. Access to each of these platforms is accomplished via an observation engine 221, which is a part of the user interface of the student-growth platform 118. Such platforms facilitate digital reading and enable collaboration in and with the pages of digital books, articles, and documents, enabling users to embed materials and assignments in the text itself or to attach them and provide them separately. They facilitate attaching highlight and tag functions to a piece or portion of text in a single action or activity. In some implementations, the text of interest or display for use in assessment, lesson planning, or any other task described here may be a digital book, an electronic article, or any other available text content presented by a suitable electronic device.
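As a rough illustration of the single-action highlight-and-tag behavior just mentioned, the following TypeScript sketch attaches a highlight and its tags to a text span in one call; the types and function names are hypothetical, as the disclosure does not specify this API.

    // Hypothetical annotation model: a highlight and its tags applied to a
    // text span in a single action, as the platforms described above allow.
    interface Annotation {
      documentId: string;
      start: number;        // character offset where the span begins
      end: number;          // character offset where the span ends
      color: string;        // highlight color
      tags: string[];       // tags attached in the same action
      authorId: string;
    }

    function highlightAndTag(
      documentId: string,
      span: { start: number; end: number },
      color: string,
      tags: string[],
      authorId: string,
    ): Annotation {
      // One call produces both the highlight and its tags, so the client
      // application can persist them as a single annotation record.
      return { documentId, ...span, color, tags, authorId };
    }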
In some examples, the text may be the text content on a page of a digital book available on the web or downloaded as an ePub (electronic publication) or PDF (portable document format). The integrated student-growth platform 118 may include one or more servers with one or more processors and one or more storage devices storing data or instructions executable by the one or more processors. For example, the integrated student-growth platform 118 may be a server, a server array, or any other computing device, or group of computing devices, having data processing, storing, and communication capabilities. The integrated student-growth platform 118 may be a virtual server (i.e., a virtual machine) implemented via software. For example, the virtual server operates in a host server environment and accesses the physical hardware of the host server including, for example, a processor, memory, storage, network interfaces, etc., via an abstraction layer (e.g., a virtual machine manager). It should be understood that the integrated student-growth platform 118 may be made up of any combination of devices and servers, or only one device or server. The integrated student-growth platform 118 may interact with the user devices 106a-106n or other third-party servers 117 or media-distribution servers 115, the media store 111, or the data stores 113a through 113n, of the distributed system 100a, via the network 102, or may be coupled to and interact with any of these entities via a direct data connection.

In some embodiments, the entities of the distributed system 100a, including the integrated student-growth platform 118 and the media-distribution server 115, may be implemented using cloud-based architectures where one or more computer functions are performed by remote computing systems and devices at the request of a local computer device. For example, a user/client device 106a may be a computing device having a limited set of hardware and/or software resources and may access hardware and/or software resources provided across the network 102 by other computer devices and resources, such as other user devices 106b, the third-party server 117, the integrated student-growth platform 118, or any other computing resources. The user/client device 106a may access these resources through a user application 108a, such as a web browser or customized application, and the results of any computer functions or resources may be delivered through the user application 108a to the user/client by the user device 106a, such as those described.

The integrated student-growth platform 118 may be a cloud-based distributed computing system having dynamically scalable and virtualizable resources, and various functionality of the integrated student-growth platform 118, including the functionality of the assessment platform 220, the planning platform 222, the learning-progression platform 224, the assignment platform 226, the mastery-maker platform 228, the multi-dimensional-response platform 230, and the reporting platform 232, and/or the media-distribution server 115, may be carried out and supplemented by computing systems and devices distributed over the network 102. Although only one integrated student-learning-and-growth platform 118 is shown, multiple servers/platforms 118 may be included in the system 100a for regional or global reach or for specific purposes. The media-distribution server 115 is a computing device and/or system for transmitting electronic resources stored in or available through the media data store 111 to the other entities of the environment 100a.
In some embodiments, the media-distribution server 115 cooperates with the integrated student-growth platform 118 to provide an electronic resource to a user (e.g., teacher or student) for consumption. For example, the assessment platform 220 or the assignment platform 226 of the integrated student-growth platform may transmit a file (e.g., a webpage) to a user/client device 106 for display to the user/client 114. In some instances, the file may include code (e.g., a video player) executable to receive a video and/or audio stream (e.g., an electronic resource) from the media-distribution server 115 and render it for display to the user/client. In other embodiments, the integrated student-growth platform 118 performs the function of the media-distribution server 115. In the depicted embodiment, the media-distribution server 115 is coupled to the network 102 via signal line 123 for communication with the other entities of the environment 100. The media-distribution server 115 is also coupled to the media store 111 to access electronic resources and other data stored in the media store 111. In some embodiments, the media-distribution server 115 is a hardware server including a processor, memory, and network communication capabilities. In other embodiments, the media-distribution server 115 is a virtual server.

In some embodiments, the media-distribution server 115 transmits video and audio streams to one or more user/client devices 106a-n. The video and audio streams may be live feeds or may be previously recorded, stored as media objects in the media store 111, and transmitted to the one or more user/client devices 106a-n on demand, via delayed broadcast, etc. In some embodiments, the audio and video are streamed from the media-distribution server 115 via the network 102. In other embodiments, a user/client can download an instance of the video and audio media objects from the media-distribution server 115 to a local repository for storage and local playback. The media-distribution server 115 and/or the integrated student-growth platform 118 is/are capable of transmitting any number of electronic resources to any number of user/client devices 106a-n simultaneously.

While in the illustrated embodiment only one media-distribution server 115 is shown, any number of media-distribution servers 115 and/or media stores 111 may be included in the distributed environment. For example, the media-distribution server 115 and the media store 111 may be a distributed server and storage system with local instances strategically located in locations where spikes in demand for the electronic resources are likely to occur. For example, if a cluster of user/client devices 106a-n is located in a particular geographic region, local instances of the media-distribution server 115 and the media store 111 may be coupled to the network 102 in that geographic region such that the media objects stored in the media store 111 may be served locally and at a faster data rate to that cluster of user/client devices 106a-n. It should be understood that, in some embodiments, the media-distribution server 115 and/or the third-party server 117 have the same or similar architecture (e.g., memory, processor, communication unit, bus, etc.) as the integrated student-growth platform 118 illustrated in FIG. 2, and thus the description of those components applies to the media-distribution server 115 and/or the third-party server 117. The media store 111 is an information source for storing data and providing access to stored data.
The stored data may include the electronic resources described above, such as media objects including video, audio, vector-based files, electronic books, documents, etc. In some embodiments, the media store 111 is included in the memory (not shown) of the media-distribution server 115. In other embodiments, the media store 111 is included in the memory 404 (see FIG. 4) of the integrated student-learning-and-growth platform. In yet other embodiments, the media store 111 is included in a server or storage system distinct from, but accessible by, the media-distribution server 115 and the integrated student-learning-and-growth platform. In some embodiments, the media store 111 includes a database management system (DBMS) executable by a processor to manage a collection of records, files, and objects including the media objects. For example, the database could be a structured query language (SQL) DBMS. In these embodiments, the integrated student-learning-and-growth platform 118 and/or the media-distribution server 115 are coupled to a data store 113a through 113n, via the bus 406, to store data in multi-dimensional tables having rows and columns, and to manipulate, i.e., insert, query, update, and/or delete, rows of data using programmatic operations (e.g., SQL queries and statements).

The third-party server 117 is a server hosting a network-based software application operable to provide various services or functionalities, and to send data to and receive data from the integrated student-learning-and-growth platform 118, the media-distribution server 115, and the client devices 106a . . . 106n via the network 102. In the depicted embodiment, the third-party server 117 is coupled to the network 102 via signal line 125 for communication with the other entities of the system 100. The third-party server 117 is also coupled to the data stores 113a-113n by signal lines 121a and 121n for accessing and storing data. In some embodiments, the third-party server 117 is a server, server array, or any other computing device, or group of computing devices, having data processing, storing, and communication capabilities. In other embodiments, the third-party server 117 is a virtual server. The third-party server 117 can provide access to data stored in the data stores 113a-113n that is associated with users of the integrated student-learning-and-growth platform 118. In some embodiments, the data stored in the data stores 113a-113n may include demographics data, achievement data, student data, teacher data, standards data, inter-rater reliability data, etc., and the third-party server 117 may include a software application for providing secure access to this data to the integrated student-learning-and-growth platform 118 over the network 102 via an API.
For example, in an educational setting, the demographics data may include instructor and pupil demographics data, and may be segmented across school district, school, classroom, grade, etc.; the achievement data may include standardized test scores for educators and pupils; the student data may include student assessments of teachers (e.g., aggregated from surveys, reviews, etc.), biographical data describing the students, social graph data (e.g., aggregated from third-party social networking services), etc.; the teacher data may include biographical data describing the teachers, social graph data (e.g., aggregated from third-party social networking services), teacher preferences, teacher assessments of students (e.g., aggregated from surveys, reviews, etc.), etc.; and the standards data may include standards compiled and approved by a governing organization or institution which define the levels of attainment pupils must reach to be considered acceptably educated. It should be recognized that the fifty states in the U.S. may have unique needs and standards for education. The standards may require a varying range of skills. In some embodiments, a local instance of the data stored in the data stores 113a-113n may be included in the data stores 113a-113n. For example, a batch program operating periodically (every few minutes, hours, days, weeks, etc.) may retrieve a refreshed version of the data stored in the data stores 113a-113n.

In FIG. 1A, the integrated student-learning-and-growth platform 118 includes a user-interface unit 119, an observation engine 221, an assessment platform 220, a planning platform 222, a learning-progression platform 224, an assignment platform 226, a mastery-maker platform 228, a multi-dimensional-response platform 230, and a reporting platform 232. The assessment engine 220 is software including routines for providing network-based assessment of students.

In some embodiments, the integrated student-learning-and-growth platform 118 may collect and store mapping information (i.e., social graphs) in the data stores 113a-113n mapping how all users 114a-114n of the integrated student-learning-and-growth platform 118 are associated. For example, the social graph of each user may describe that user's 114a relationships with other users 114n, based at least in part on shared attributes, etc. All users 114a-114n may be associated by school, school district, subject matter taught, amount of experience, etc. Users may also define their own connections and sets of users using functionality provided by the client application 108 in cooperation with the integrated student-learning-and-growth platform 118. For example, users 114a-114n sharing a similar subject matter may add one another to their community by using functionality provided by the client application 108a in cooperation with the integrated student-learning-and-growth platform 118. The integrated student-learning-and-growth platform 118 may also generate and maintain a user profile in the data stores 113a-113n for each user of the integrated student-learning-and-growth platform 118. A user profile is a collection of personal and student/teacher/administrator data that is unique to a specific user. In some embodiments, the user profile is a digital representation of that person on a student/teacher/administrator development service and includes a user's customized settings and preferences, biographical information, schooling information, personal interests, teacher/administrator information, lesson-plan development information, social connection information, etc.
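The social graph and user profile records described above might be shaped as in the following TypeScript sketch; every field name here is an assumption for illustration, since the disclosure describes the data only at the level of categories.

    // Hypothetical shapes for the user profile and social graph records the
    // platform might persist in the data stores 113a-113n.
    interface UserProfile {
      userId: string;
      role: "student" | "teacher" | "administrator";
      school: string;
      district: string;
      subjects: string[];
      preferences: Record<string, string>;
    }

    // A simple adjacency-list social graph keyed by user id.
    type SocialGraph = Map<string, Set<string>>;

    // Associate two users, e.g., teachers sharing a similar subject matter.
    function connect(graph: SocialGraph, a: string, b: string): void {
      if (!graph.has(a)) graph.set(a, new Set());
      if (!graph.has(b)) graph.set(b, new Set());
      graph.get(a)!.add(b);
      graph.get(b)!.add(a);
    }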
In some embodiments, access to the integrated student-learning-and-growth platform 118 via the network 102 may be provided to teachers and administrators in an academic environment or other educational setting, such as a school district. Instruction may be provided by electronic resources. An electronic resource may be any electronic media for conveying information. For example, an electronic resource can be instructional in nature, and can convey knowledge, information, and resources to a user who interacts with or views it. As a further example, an electronic resource may include an instructional audio or video segment, a publication, an interactive instructional reference, a lesson plan, a planning tool, a community forum, a sharing tool, an industry standard, a portfolio tool, a progress monitoring tool, a reporting tool, etc. In some embodiments, an electronic resource can include any of textual data, graphical data, video data, audio data, etc. For example, the electronic resource may be a webpage including one or more of text, graphics, video, audio, etc. In another example, the electronic resource may be or include a downloadable or streamable media object, including, for example, an electronic document (e.g., a portable document format (PDF) document), an electronic book (e-book), a digital video, a digital audio file, a vector graphics file, etc. In these or other examples, the electronic resource may include a dataset/electronic file with text, graphics, video, audio, etc. embedded therein.

In some embodiments, these electronic resources may convey information on various topics, such as student training, teaching skills, and similar subjects of consequence and importance to the growth and development of the users. For instance, for teachers, an electronic resource may be an instructional video about an aspect of teaching, and a teacher may view the video by streaming it using his/her client device 106. In another example, the electronic resource may be a web-based interactive reference including text, audio, video, etc., and the teacher may study the reference by interacting with it via the user application 108, such as a web browser, before determining that it is appropriate for a particular student, student group, or a particular lesson plan.

FIG. 1B illustrates an alternative embodiment including a growth-projection engine 101 connected through the network 102 to a universal-skills pool 103, a curriculum-to-skills mapper 105, an instructional-resource-recommendation engine 107, a lesson-planning engine 109, and the media store 111, data stores 113a-113n, the media-distribution server 115, and the third-party server 117. As illustrated here, the student-growth platform instantiates a closed-loop system accepting an input of teacher context (including a chosen curriculum) and an input of student context (including assessment) and automatically generating an output with a digital lesson plan. The closed-loop system, global in scope, may be tailored by institution or educational intent and comprises at least five fundamental components: 1) a growth-projection engine 101; 2) a universal-skills-pool engine 103; 3) a curriculum-to-skills-mapper engine 105; 4) an instructional-resource-recommendation engine 107; and 5) a lesson-planning engine 109.
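To make the closed-loop flow concrete, the following TypeScript sketch wires the five components into one pass: teacher context and student context in, a digital lesson plan out. The component names come from the disclosure, but the signatures and the choice to key the group's entry point off the lowest projected score are assumptions of this sketch.

    // Inputs to the closed loop, as described: teacher context (including a
    // chosen curriculum) and student context (including assessment results).
    interface TeacherContext { curriculum: string; pacingGuide?: string }
    interface StudentContext { studentId: string; scaleScore: number; lastAssessedAt: Date }

    interface LessonPlan { skills: string[]; resources: string[]; assessments: string[] }

    // Hypothetical composition of the five components into one pass of the loop.
    function runClosedLoop(
      teacher: TeacherContext,
      students: StudentContext[],
      projectGrowth: (s: StudentContext) => number,               // growth-projection engine
      skillsForScore: (score: number) => string[],                // universal-skills-pool engine
      mapToCurriculum: (skills: string[], c: string) => string[], // curriculum-to-skills mapper
      recommend: (skills: string[]) => string[],                  // instructional-resource recommendation
      plan: (skills: string[], resources: string[]) => LessonPlan // lesson-planning engine
    ): LessonPlan {
      const projected = students.map((s) => projectGrowth(s));
      // Anchor the group's entry point conservatively (an assumption here).
      const skills = skillsForScore(Math.min(...projected));
      const mapped = mapToCurriculum(skills, teacher.curriculum);
      return plan(mapped, recommend(mapped));
    }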
The student-growth system has a unique student growth percentile (SGP) algorithm to selectively position a student or a group of students into a scaled learning progression scheme, based on the amount of time that has elapsed since observation of the last set of assessments on a particular student. In some implementations, this unique SGP algorithm may be implemented by scaled learning progression schemes (similar to those used by the learning-progression platform in FIG. 1A), which may determine whether students fall into a particular group, a class, a group of classes, a school, a group of schools, a district, or a state. The learning-progression schemes are adapted to establish the best possible set of skills to teach a group of students on a particular day. This combines CAT (Computer Adapted Testing) + SGP (Student Growth Percentile) + time-based projection + entry points to establish a curriculum entry point. The universal-skills-pool engine 103 bridges from a GOM to a range of curriculums. This feature enables the lesson-planning engine 109 to act as a Rosetta Stone or like language capability for linking many assessments to many government-created learning standards.

After a user (e.g., an educator or teacher) selects the learning objectives (skills) and chooses to build a lesson plan by the lesson-planning engine 109, the user/client 106 may choose the student resources and assessments to include in the lesson plan for each group. Resources may include sample items, worked examples, videos, lessons, definitions, or activities. Assessments include assessment probes designed to evaluate a level of skills. As a user selects resources and assessments, they may be assigned to student groups. When a teacher generates a lesson plan, students automatically see the resources and assessments in the assignments list on their home page once the lesson plan begins. At the top of the add resources and assessments page, the teacher sees the learning objectives (skills) that were selected. If the teacher wants to concentrate on resources and assessments for one skill at a time, only that skill may be checked. The teacher may easily change which skills are checked as the teacher adds resources and assessments. If the teacher chooses more than three skills, the teacher may use the scroll bar to see the rest of the skills. Resources and assessments that are related to the checked skills are already listed on the page. The template may be configured to show colored squares for each resource or assessment to show the viewer which of the skills it relates to. For example, in one case, the colors show the viewer that the resource is for the second skill.

For the purposes of this disclosure, it should be recognized that education has many standards and preferences that must be met in a particular country, state, or district. For example, the common core state standards initiative in the U.S. is an educational initiative that details what K-12 students should know in English language arts and mathematics at the end of each grade. This initiative seeks to establish consistent educational standards across the states as well as ensure that students graduating from high school are prepared to enter credit-bearing courses at two- or four-year college programs or to enter the workforce. The student-growth platform 118 approaches student development based on a universal-skills pool approach that is made available through learning progression schemes.
This approach is based on selecting a range of skills that are appropriate for a specific scale score accorded to a student or group of students. A scale score delivers or specifies an entry point into the learning progression schemes that represents the student's "zone of engagement." This zone of engagement includes a range of skills that the student is more likely to be ready to learn. This improves the accuracy of the data and insights that are provided to teachers to inform their instruction. The planner (e.g., a teacher) can align learning progressions to pacing guides, district curriculum, and textbooks. This makes the learning progression (by subject) more useful to teachers and administrators focused on a curriculum and not just a standard.

Referring now to FIGS. 2A and 3, the assessment platform 220 of the student-growth platform 118 includes a computer-adapted-testing engine 203, a fixed-form engine 205, an assessment-importer module 207, and a universal-scale module 209. The assessment platform 220 as illustrated may be accessed by the district administration of an institution to obtain previous or old assessments for a particular student or group of students, which may be imported by the assessment-importer module 207 from sources that have these assessments. The universal-scale module 209 receives inputs from the computer-adapted-testing engine 203, the fixed-form engine 205, and the assessment-importer module 207. The universal-scale module 209 positions a target student onto a standard scale (e.g., mandated by a governing body) based on testing data obtained from the computer-adapted-testing engine 203, prior data provided by a student determined by the fixed-form engine 205, and prior assessment data imported by the assessment-importer module 207.

The planning platform 202 includes an expected-score engine 244, a pacing-guide manager 246, a skills-selection engine 248, a recommendations engine 250, a resource-finder engine 252, and a lesson-organizer engine 254. FIG. 2A also illustrates a learning-progression engine 236, a mastery-maker engine 238, and a multi-dimensional-response-item engine 240. In some embodiments, the planning platform 202 is coupled to a single-source implementation 211, a multiple-source implementation 213, an assignment generation unit 215, and a curriculum mapping unit 217. The planning platform 202 computes an expected score for a target student based on where the target student is positioned (e.g., by comparing within a range of scores for the level where the target student is positioned). The pacing-guide manager 246 is software including routines for prescribing and managing the pace at which the target student should learn. The skills-selection engine 248 is software including routines for selecting skills appropriate for the level and pace prescribed for the target student. The recommendations engine 250 is software including routines for recommending instructional resources for the target student that are consistent and appropriate with the level determined and pace prescribed for the target student. The resource-finder engine 252 is software including routines for managing and providing resources and content for students. In some embodiments, the resource-finder engine 252 catalogs the electronic resources, provides for the addition or removal of electronic resources, transmits the electronic resources to students for consumption, tracks user consumption and interaction with the electronic resources, etc.
The resource-finder engine 252 is coupled to the data store 410 (FIG. 4) and the media data store 111, either directly or via the media-distribution server 115, to access the electronic resources stored therein. In some embodiments, the resource-finder engine 252 can search the data store 410 and the media data store 111 to generate and collect information about the electronic resources. For instance, the resource-finder engine 252 can aggregate attributes of the electronic resources, such as the author, publisher, file size, creation date, publication date, a thumbnail of the resource, etc., and store them in a resource library database. In various embodiments, the resource-finder engine 252 can access the electronic resources in the data store 410 and the media data store 111 to transmit or stream copies of those resources to the client devices 106 of the users 114 requesting to interact with them. The resource-finder engine 252 can also receive and store new electronic resources in the media data store 111 or the data store 410. In some embodiments, the resource-finder engine 252 may interact with the media-distribution server 115 to store information in the media data store 111. In other embodiments, the resource-finder engine 252 may store information in the media store 111 directly.

In some embodiments, the resource-finder engine 252 may receive resource addition requests via the network 102, requesting the addition of electronic resources accessible to the student-growth platform 118. For example, the resource-finder engine 252 is capable of serving a webpage to a user/client device 106 that provides functionality for the user of the client device 106 to author or upload an electronic resource along with metadata characterizing it. The electronic resource may be an interactive electronic book, a video file, an audio file, a document, a dataset, an electronic link, or any other electronic resource that can be accessed and viewed by the observation engine 221 of the student-growth platform. The resource-finder engine 252 may receive the additional electronic resource, store the metadata about the resource in the resource library database, and store the electronic resource in the data store 410 and/or the media data store 111. Thus, the resource-finder engine 252 can update the resource library database, either periodically or in real time, with any new electronic resources that have been added to or removed from the student-growth platform 118.

The resource-finder engine 252 is capable of receiving requests for electronic resources from users 114 and fulfilling those requests by transmitting the electronic resources to the corresponding client devices 106 of the users 114. In one example, upon logging in to the student-growth platform, a user 114 may be presented with an interface by the user application 108 that shows any outstanding assignments that the user 114 must complete, the dates by which the assignments must be completed, a description of what the assignments are, etc. Using this interface, the user 114 may select an assignment, in response to which the user application 108 transmits a request to the resource-finder engine 252 for the electronic resource associated with the assignment. In yet another example, an observer, upon logging in, may be provided with electronic resources (e.g., video, audio, etc.) by the resource-finder engine 252 in cooperation with the client application 108, which describe what to focus on, observe, and evaluate during an upcoming/pending observational assessment of a target subject.
In these or other examples, electronic resources can be identified and served to the users based on the users' social graphs and/or preferences. The resource-finder engine 252, upon receiving this request, may locate the electronic resource in the data store 410 and provide it to the user application 108 via the network 102 for presentation to the user 114. As discussed elsewhere herein, the resource-finder engine 252 may, in some embodiments, cooperate with the media-distribution server 115 to provide the electronic resources for consumption and/or interaction by the users 114 requesting them. When users consume or interact with the electronic resources provided by the resource-finder engine 252, the resource-finder engine 252 is capable of logging the consumption and interaction in the data store 410 in association with those users.

In some embodiments, the resource-finder engine 252 cooperates with the user application 108 to monitor user interactions with the electronic resources. For example, when a user interacts with a user interface generated and displayed by the user application 108, the user application 108 sends interaction data via the network 102 to the resource-finder engine 252 informing the resource-finder engine 252 of the interaction, and the resource-finder engine 252 stores this interaction data. In a further example, if a user interacts with a media player embedded in a user interface of the user application 108, interaction data describing the user's interactions, such as which actions the user took (e.g., clicked a pause button, a play button, a scrubbing dial, or a volume dial; maximized the viewing field of the media player; added a comment about the video using an associated interface element; etc.), is sent by the user application 108 to the resource-finder engine 252, and the resource-finder engine 252 may log those interactions. The interaction data may also include or be associated with data identifying which electronic resource was interacted with, the user who interacted with the resource, the time and date of the interaction, etc. In another example, if a user is accessing an interactive electronic book, the user application can send interaction data describing when the user begins interacting with the electronic book, pages through the electronic book, downloads files included with or embedded in the electronic book, completes surveys included with the electronic book, views videos embedded in the electronic book, comments on passages of the electronic book, or otherwise uses any other functionality provided by the user application 108 for interaction with the electronic book or the corresponding components of the student-growth platform 118.

In some embodiments, the resource-finder engine 252 may provide the electronic resource to the user/client devices 106 with presentational information, and the client application 108 may use the presentational information to form the look and feel of the user interfaces. For example, the electronic file(s) or data stream(s) may be formatted using a markup language (e.g., HTML, XML, etc.), style sheets (e.g., CSS, XSL, etc.), graphics, and/or scripts (e.g., JavaScript, ActionScript, etc.), and the client application 108 may interpret the interface instructions and render an interactive Web User Interface (WUI) for display on a user device 106 based thereon. In other implementations, the user/client application 108 may determine the formatting and look and feel of the user interfaces independently.
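The interaction reporting described above can be pictured as a small event payload that the user application posts to the resource-finder engine; the field names and the endpoint URL below are illustrative assumptions rather than an API defined by the disclosure.

    // Hypothetical interaction event sent from the user application 108 to
    // the resource-finder engine 252 when a user acts on a resource.
    interface InteractionEvent {
      resourceId: string;   // which electronic resource was interacted with
      userId: string;       // who interacted with the resource
      action: string;       // e.g., "play", "pause", "page-turn", "comment"
      occurredAt: string;   // ISO time and date of the interaction
      detail?: Record<string, unknown>;
    }

    // Post the event over the network; the URL is a placeholder.
    async function logInteraction(event: InteractionEvent): Promise<void> {
      await fetch("/api/interactions", {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify(event),
      });
    }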
Using the user interfaces presented by the client application 108, the user can input commands selecting various actions. Referring now to FIGS. 2B and 3, the assignment platform 222 is illustrated with an instructional bridge 242a and an instructional bridge 242b. The first instructional bridge 242a includes a print engine 253 adapted to print assignments as needed, a scan engine 256 for scanning documents with assignments as needed, and an assignment-importer module 258 for importing or downloading assignments from other sources. The second instructional bridge 242b has an assignment manager 260 for managing assignments given to students, a grading framework 262, by which grading of assignments is accomplished, and an assignment player 264, by which assignments are conveyed to target students (e.g., by audio, video, or other forms of media).

The reporting platform 232 includes dashboards/dashboard services 233, alerts/an alert manager 235, and an exporter 237, by which completed assignments may be exported or sent for further consideration or storage. In some embodiments, the user-interface unit 119 (FIG. 1), in cooperation with the observation engine 221 (FIG. 1), may generate a report dashboard/interface for viewing reports generated and provided by the reporting platform 232 and received by the observation engine 221. In some instances, the reporting platform 232 may provide diagnostic reports. This dashboard provides numerous advantages, including providing an observer (e.g., teacher) or administrator with detailed information about a given target student's performance (e.g., execution, effectiveness, compliance, etc.) over time. For example, the observer may be a teacher using the dashboard to access any previous observational assessments of that student or student group; view overall performance (e.g., execution, effectiveness, compliance, etc.) statistics across all observational assessments of that student or a subset, such as the observational assessments performed for that academic year; quickly ascertain the areas a student has had problems with or has been working on, or the areas the student has been improving in; review the test scores for the student; view the electronic training resources the student has consumed/interacted with; view any work product, lesson plans, videos, presentations, etc., the student has uploaded; view the groups the student has interacted with; etc. Using this information, the teacher may quickly get up to speed on where the student is, and thus provide pertinent and relevant observations (e.g., evaluations, ratings, suggestions, comments, etc.), assignments, etc., during the observation session to be performed. The alerts 235 may be adapted to generate and provide alerts depending upon certain criteria that are specified.

Referring specifically to FIG. 3, it should be recognized that in some example scenarios, a district administrator 332 may have access to the assessment platform 220 to generate the assessments required. A teacher 336 may have access to the planning platform 222 to plan and generate lessons, and the student 334 may have access to the assignment platform 226 to receive and complete assignments. The learning-progression engine 316 drives information that is conveyed in the instructional planning and diagnostic reports that are generated. Learning progressions are descriptions of how learning typically advances in a subject area.
Empirically based learning progressions can visually and verbally articulate a hypothesis, or an anticipated path, of how student learning will typically move toward increased understanding over time with good instruction. The learning-progression engine316has an organizational structure, separated into domains, skill areas, and core skills. For example, a core progress scenario for mathematics has four domains, which form the base of the learning progression for that subject: 1) numbers and operations, 2) algebra, 3) geometry and measurement, and 4) data analysis, statistics and probability. The skill areas (e.g., whole numbers, place value, symbols and expressions, time, etc.) represent the various skills and concepts students acquire as they progress in the development of mathematics at the level prescribed for them. The core progress learning progression is an interconnected web of prerequisite skills. For increased understanding over time, progress requires continually building up, and building on, a solid foundation of knowledge, concepts, and skills. The core progress learning progression is a map of skills in which new learning is built on previous, foundational understanding of the subject. A core progress learning progression for a subject is defined in terms of a number of skills. Each skill is represented by a separate data point, and its difficulty value may be derived from the calibrated difficulty of the test items from standard or existing tests that assess the skill level. There are several assessment items per skill, called an item-set. Common to these perspectives is the idea that the development of learning progressions is an iterative process: it begins with a hypothesis, informed by what is known about student learning, which undergoes empirical testing and subsequent refinement based on the data. As another example, a core progress learning progression for reading was developed according to this iterative model. To reflect the organization of the standards, a core progress reading learning progression may have four domains, including 1) foundational skills, 2) language, 3) literature, and 4) informational text. The learning progression comprises five (sub)domains: 1) word knowledge and skills; 2) comprehension strategies and constructing meaning; 3) analyzing literary text; 4) understanding author's craft; and 5) analyzing argument and evaluating text. For each student group, grade-level domain expectations may be identified to describe the desired level of student understanding by the end of the year. These expectations form the foundation of the learning progression. The learning progression then goes a step further to identify the intermediate skills and concepts necessary for students to move toward those expectations. Learning progressions are progressions of cognitive states that move from simple to complex; while not necessarily linear, the progression is not random, but rather is sequenced and ordered as "expected tendencies" or "likely probabilities" of how learning develops. Inherent in these views of progressions is the idea of a coherent and continuous pathway along which students move incrementally through states of increasing competence in a domain. Every incremental state builds on and integrates the previous one as students accrue new levels of expertise with each successive step in the progression.
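One way to picture the organizational structure described above, purely as a sketch, is a nesting of domains, skill areas, and skills, where each skill carries a calibrated difficulty value and an item-set of assessment items. The type and field names below are illustrative assumptions only.

    # Minimal sketch of a learning-progression structure: domains contain
    # skill areas, which contain skills; each skill has a calibrated
    # difficulty and an item-set. All names are illustrative assumptions.
    from dataclasses import dataclass, field

    @dataclass
    class Skill:
        name: str
        difficulty: float                 # calibrated from test items
        item_set: list[str] = field(default_factory=list)
        prerequisites: list[str] = field(default_factory=list)

    @dataclass
    class SkillArea:
        name: str                         # e.g., "whole numbers"
        skills: list[Skill] = field(default_factory=list)

    @dataclass
    class Domain:
        name: str                         # e.g., "numbers and operations"
        skill_areas: list[SkillArea] = field(default_factory=list)

    # The four mathematics domains named above form the base of the map.
    math_progression = [
        Domain("numbers and operations"),
        Domain("algebra"),
        Domain("geometry and measurement"),
        Domain("data analysis, statistics and probability"),
    ]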
It is important to note, however, that while progressions may provide clear descriptions of how learning develops in a domain, they are not developmentally inevitable. Rather, they are dependent on good curriculum and instruction. The skill areas represent the various skills and understandings that students gain as they progress in their reading development. For example, the grade-level skill statements identify the incremental steps students take as they progress in acquiring specific skills and understandings. It should be recognized that the grade-level skill statements provide specific examples of relevant words and texts, but do not specify reading content or identify the activities students should be able to perform to reflect attainment of a skill. They are intended as statements of the skill itself, which serve to advance subject (e.g., reading or math) competence. The skill statements reflect levels of relative difficulty of skills and understandings identified in the progression from their most basic, foundational states through increasingly sophisticated states of competency. For example, in the learning progression for a student in grade two, a domain defined for comprehension strategies and constructing meaning may require a skill (defined in a particular area) identifying the author's purpose, based on an understanding that authors write texts for different purposes. Having established this basic understanding, students may move incrementally through successive steps of increasing competence so that by the middle-level grades they are able to evaluate the appropriateness of the form chosen by the author in light of the author's purpose. These focus skills and prerequisites act as building blocks, each representing a specific level of competency of a skill or understanding that rests on prior development and that also provides a foundation for the next level of learning. The learning-progression engine236identifies, for each focus skill, the associated prerequisites necessary to understand that skill, and provides these criteria across grades, skill areas, and domains. To continue with the example for reading, by the 10th grade, the focus skill may require analyzing the cumulative impact of figurative language on wider themes and meanings of the text, from the domain defining an understanding of the author's craft. This domain may have five prerequisite skills that span two grades and three domains. The learning-progression engine236may be further adapted to perform a quantitative analysis to determine where skills fall on an assessment scale (e.g., standard ones used by educators). This analysis may compare the empirically observed order of skills (i.e., where skill difficulty falls on a measurement scale) to the pedagogically determined order of skills (i.e., the most productive order of skills for learning a particular skill). Information and data flow from the assessment platform220to the expected-score engine244, and from that point to the pacing-guide manager246. Based on the assessment results (e.g., prior results) for a particular student or group, the expected-score engine244designates an expected score for that student or group of students. Based on the expected score for a student, the pacing-guide manager246determines a pace appropriate for the student or group of students, and the skills-selection engine248matches the skills required for the pace determined for the student and student group.
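The data flow just described, from assessment results through the expected-score engine244and the pacing-guide manager246to the skills-selection engine248, might be sketched as below. The thresholds, function names, and catalog are illustrative assumptions, not the actual computations of those engines.

    # Illustrative sketch of the assessment -> expected score -> pace ->
    # skill-selection flow. Thresholds and names are assumptions.
    def expected_score(prior_scores: list[float]) -> float:
        """Stand-in for the expected-score engine: a simple mean here."""
        return sum(prior_scores) / len(prior_scores)

    def pacing(score: float) -> str:
        """Stand-in for the pacing-guide manager: map a score to a pace."""
        if score >= 80.0:
            return "accelerated"
        if score >= 50.0:
            return "standard"
        return "supported"

    def select_skills(pace: str, catalog: dict[str, list[str]]) -> list[str]:
        """Stand-in for the skills-selection engine: skills keyed by pace."""
        return catalog.get(pace, [])

    catalog = {"supported": ["place value"], "standard": ["fractions"],
               "accelerated": ["pre-algebra"]}
    print(select_skills(pacing(expected_score([45.0, 52.0])), catalog))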
The recommendation engine250discovers resources via the resource-finder engine252. The lesson-organizer engine254organizes the lessons for the teacher to use. In some embodiments, a teacher may provide input at any stage of the planning process, for example, to the resource-finder engine252or to other sections or portions of the planning platform222, or, in other instances, as indicated by the arrows, to the skills-selection engine248. The lesson-organizer engine254may provide information and data to the assessment platform220or the assignment platform226. The learning-progression engine236may be adapted to provide input to the assessment platform220, the planning platform222, or alternatively, to the assignment platform226or the reporting platform232. The grading framework262is adapted to receive information from the mastery-maker engine238and the MIRT engine240. The grading framework262is also adapted to provide information and data to the assignment manager260. The mastery-maker engine238is software including routines for providing information and data to the dashboard services333of the reporting platform232. The mastery-maker engine238prescribes practice tests (FIG.14,1438) and assignments in a particular subject for a target student to assist the target student with mastering that subject. Some standards for mastery measurement may be used to track either long-term progress or short-term progress. In some instances, the mastery-maker engine238may use general outcome measures (e.g., SAT, ACT, etc.) to assess high-level skills (e.g., reading, math, or preparedness for college) or skill mastery measurement that measures more granular sub-skills (e.g., fluent recall of division involving single-digit numbers with 8 and 9 as divisors). The mastery-maker engine238may be adapted to define clear pass/fail criteria, present multiple equivalent valid forms that measure the same sub-skill, measure improvement even if the mastery criteria are not met, follow a valid underlying skill sequence, and test whether mastered skills are retained at a later date. The mastery-maker engine238in some embodiments is adapted to test depth of knowledge. The mastery-maker engine238tests along a scale of cognitive demand and aligns assessments with standards. For example, Webb's four levels of cognitive complexity include recall and reproduction (level 1), skills and concepts (level 2), strategic thinking (level 3), and extended thinking (level 4). In some embodiments, mastery may be computed by a combination of assessments, instruction, and practice inputs. Assignment forms that contribute to mastery may include practice (contributing to probed consideration), formative assessment (contributing to probed consideration), instruction (contributing to probed mastery), summative (imported data contributing to assessed mastery), and CAT assessment (contributing to assessed mastery). In some embodiments, there are two tiers to probed mastery, including a system tier and an item tier. In some instances, the system tier may include the requisite items, forms, skills sequences, and mastery criteria. The items may include items worked for sub-skills and third-party tagged imported data. In some scenarios, an item % attainment by a student may be computed as the item score divided by the highest possible score for the item. A probed % mastery designation for a student may be computed as a weighted mean of all item % attainments known to the system.
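The two computations just described can be written out directly. The sketch below assumes per-item weights are supplied alongside the attainments, which is one plausible reading of "weighted mean"; the weighting scheme itself is not specified above.

    # Sketch of the item % attainment and probed % mastery computations
    # described above; the weighting scheme is an assumption.
    def item_attainment(item_score: float, highest_possible: float) -> float:
        """Item % attainment = item score / highest possible score."""
        return item_score / highest_possible

    def probed_mastery(attainments: list[float],
                       weights: list[float]) -> float:
        """Probed % mastery = weighted mean of all item % attainments."""
        total = sum(weights)
        return sum(a * w for a, w in zip(attainments, weights)) / total

    attainments = [item_attainment(8, 10), item_attainment(3, 5)]
    print(probed_mastery(attainments, weights=[1.0, 2.0]))  # ~0.667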
The mastery-maker engine238in some embodiments may normalize the outcomes from computer-adaptive testing with scores from practice tests (FIG.14,1438) to create an integrated model that reflects mastery by the target student, which may be compared or positioned within the learning progression data for the target student. The mastery-maker engine238may extend actual testing data using intelligent inferencing schemes that derive relationships of objects within the learning progression. The MIRT engine240is software including routines for binding responses from multiple activities (or assignments or results) received from assessment, instruction, and practice tests into a unified scale, which contributes to determining overall mastery. The MIRT engine240may receive information and data from the mastery-maker engine238. Both the mastery-maker engine238and the MIRT engine240provide data flow to the reporting platform232, specifically, the alert manager335. The dashboard services333and the alert manager335provide information and data to the teacher, and the exporter237exports data out of the reporting platform232. FIG.4illustrates the various components of the student-growth platform118coupled by a bus406to a communication unit408, a processor402, a memory404, and a data store410. The integrated student-growth platform118includes the assessment platform220, the planning platform222, learning-progression services224, the assignment platform226, the mastery-maker engine228, the MIRT engine230, and the reporting platform232. The processor402processes data signals and program instructions received from the memory404and the data store410. The processor402may comprise an arithmetic logic unit, a microprocessor, a general or special purpose controller, or some other processor array to perform computations and provide electronic display signals to a display device (e.g., on a user device106a). The processor402is coupled to the bus406for communication with the other components. The processor402may comprise various computing architectures including a complex instruction set computer (CISC) architecture, a reduced instruction set computer (RISC) architecture, or an architecture implementing a combination of instruction sets. Although only a single processor is shown inFIG.4, multiple processors may be included. It will be obvious to one skilled in the art that other processors, operating systems, sensors, displays, and physical configurations than those that are illustrated may be used to perform the operations described in this specification. The memory404may be a non-transitory storage medium. The memory404stores the instructions and/or data for operating the student-growth platform118, which may be executed by the processor402. In one implementation, the instructions and/or data stored in the memory404comprise code for performing any and/or all of the techniques or functionalities that are described in this specification. The memory404may be a dynamic random access memory (DRAM) device, a static random access memory (SRAM) device, flash memory, or some other memory device known in the art. The data store410stores the data and program instructions that may be executed by the processor402. In one implementation, the data store410may store the data of the various types of users of the student-growth platform118.
The data store410may include a variety of non-volatile memory and permanent storage devices and media such as a hard disk drive, a floppy disk drive, a CD-ROM device, a DVD-ROM device, a DVD-RAM device, a DVD-RW device, a flash memory device, or some other non-volatile storage device known in the art. The communication unit408facilitates the communication between the user device106(inFIG.1) and the student-growth platform118over the network102(inFIG.1). For example, a user114a, via the user device106a, may access the student-growth platform118to view or read electronic content and otherwise interact with the student-growth platform118and receive information from the student-growth platform118, via the communication unit408. The communication unit408also conveys the content or information either received from or hosted on the student-growth platform118to any of the users114athrough114n. The communication unit408couples the student-growth platform118to the network102by the signal line116(inFIG.1) and via the bus406. The communication unit408may include network interface modules, which include ports for wired connectivity such as, but not limited to, USB, SD, or CAT-5. The network interface modules are configured to link the processor402to the network102, which may in turn be coupled to other processing systems. The network102(FIG.1) may comprise a local area network (LAN), a wide area network (WAN) (e.g., the Internet), and/or any other interconnected data path across which multiple devices may communicate. The network interface modules are configured to provide conventional connections to the network102using standard network protocols such as TCP/IP, HTTP, HTTPS and SMTP, as well as any others that are understood by those skilled in the art. The network interface modules include a transceiver for sending and receiving signals using Wi-Fi, Bluetooth® or cellular communications for wireless communication. Each of the platforms, modules, and/or engines described above may include software or program instructions configured to perform the functionalities described here. Example Student-Growth Platform118 The example student-growth platform118depicted inFIGS.4(and1A) is provided by way of example, and it should be understood that it may take other forms and include additional or fewer components without departing from the scope of the present disclosure. For example, while not shown, in some implementations, the student-growth platform118may include input and output devices (e.g., a computer display, a keyboard and mouse, etc.). Additionally, it should be understood that the computer architecture depicted inFIG.4is applicable to the other entities of the system100a(FIG.1A), such as the media-distribution server115and/or the third-party server117, with various modifications. The processor402includes an arithmetic logic unit, a microprocessor, a general purpose controller, or some other processor array to perform computations and provide electronic display signals to a display device (not shown). The processor402may be coupled to the bus406for communication with the other components of the student-growth platform118. The processor402may process data signals and may have various computing architectures including a complex instruction set computer (CISC) architecture, a reduced instruction set computer (RISC) architecture, or an architecture implementing a combination of instruction sets. Although only a single processor is shown inFIG.4, multiple processors may be included.
The processor402may be capable of supporting the display of images and the capture and transmission of images, performance of complex tasks, including various types of feature extraction and sampling, etc. It should be understood that the student-growth platform118could include various operating systems, sensors, displays, additional processors, and other physical configurations. The memory404stores instructions and/or data that may be executed by the processor402. The memory404is coupled to the bus406for communication with the processor402and the other components of the student-growth platform118. The instructions and/or data may comprise code for performing any and/or all of the techniques described herein. In particular, the memory404includes a non-transitory computer-usable (e.g., readable, writeable, etc.) medium, which can be any apparatus or device that can contain, store, communicate, propagate or transport instructions, data, computer programs, software, code, routines, etc., for processing by or in connection with the processor402. A non-transitory computer-usable storage medium may include any and/or all computer-usable storage media. In some implementations, the memory404may include volatile memory, non-volatile memory, or both. For example, the memory404may include a dynamic random access memory (DRAM) device, a static random access memory (SRAM) device, flash memory, a hard disk drive, a floppy disk drive, a CD-ROM device, a DVD-ROM device, a DVD-RAM device, a DVD-RW device, a Blu-ray™ storage device, a flash memory device, or any other mass storage device known for storing information on a more permanent basis. It should be understood that the memory404may be a single device or may include multiple types of devices and configurations. The communication unit408is an interface for sending data to and receiving data from other computing devices. In the depicted embodiment, the communication unit408is coupled to the network102by the signal line116and coupled to the bus406. In some embodiments, the communication unit408includes a network interface device (I/F) having ports for wired connectivity. For example, the communication unit408includes a CAT-5/6/7 interface, USB interface, or SD interface, etc. The communication unit408may also include a transceiver for sending and receiving signals using Wi-Fi, Bluetooth® or cellular communications for wireless communication. The communication unit408can link the processor402to the network102, which may in turn be coupled to other processing systems. The communication unit408can provide connections to the network102and to other entities of the system100using standard communication protocols including, for example, TCP/IP, HTTP, HTTPS, etc. The student-growth platform118includes the assessment platform220, the planning platform222, the learning-progression services224, the assignment platform226, the mastery-maker engine228, the MIRT engine230, and the reporting platform232. In some embodiments, the student-growth platform118and/or the assessment platform220are sets of instructions executable by the processor402to provide their respective functionality. In other embodiments, the student-growth platform118and/or the assessment platform220are stored in the memory404of the student-growth platform118and are accessible and executable by the processor402to provide their respective functionality.
In any of these embodiments, the student-growth platform118and the assessment platform220may be adapted for cooperation and communication with the processor402and other components of the student-growth platform118. Example Observation Engine221 The observation engine221is software including routines for facilitating student growth based on observational assessments received from the assessment platform220. In particular, the observation engine221may send, receive and store observation-related data, such as observation data, templates and files including questions and answers tied to performance standards (e.g., standards related to execution, compliance, effectiveness, personalized learning plans, etc.); identify and suggest electronic learning resources (in cooperation with the resource-finder engine252) based on observation-related data received; generate reports including analytics and diagnostics about the students and their learning progress; and generate performance (e.g., execution, evaluation, compliance, effectiveness, etc.) assessments of the students based on demographics data, observation-related data, achievement data, standards data, student data, teacher-oversight data, interaction data, inter-rater reliability data, observer comparison data, or any other data described herein. In the illustrated embodiment, the observation engine221cooperates with the planning platform222(including the recommendation engine250), the assignment platform226, and the reporting platform232. The observation engine221is coupled for communication with the other components of the student-growth platform118. The observation engine221is also coupled to the network102via the communication unit408for communication with the other entities of the system100a(and100b). In some embodiments, the user-interface unit119, the observation engine221, the assessment platform220, the planning platform222, the learning-progression platform224, the assignment platform226, the mastery-maker platform228, the multi-dimensional response platform230, and the reporting platform232are sets of instructions executable by the processor402to provide their respective functionality. In other embodiments, the user-interface unit119, the observation engine221, the assessment platform220, the planning platform222, the learning-progression platform224, the assignment platform226, the mastery-maker platform228, the multi-dimensional response platform230, and the reporting platform232are stored in the memory404of the student-growth platform118and are accessible and executable by the processor402to provide their respective functionality. In any of these embodiments, the user-interface unit119, the observation engine221, the assessment platform220, the planning platform222, the learning-progression platform224, the assignment platform226, the mastery-maker platform228, the multi-dimensional response platform230, and the reporting platform232may be adapted for cooperation and communication with the processor402and other components408,404, and410of the student-growth platform118. The observation engine221is software including routines for sending, receiving, processing, and storing observation-related data.
In some embodiments, the observation engine221may provide observation templates to observers for use in observing and assessing other users (e.g., students, also referred to as the targets), receive observation files including observation data reflecting the assessments for particular students, and store the observation files in the data store113a(FIGS.1A &1B) in association with the targets being observed. In some embodiments, the observation engine221interacts and cooperates with the user/client application108a(FIG.5) to provide the above-noted functionality. In the illustrated embodiment, the observation engine221is coupled to one or more user/client devices106(FIGS.1A &5) to provide one or more observation templates (FIG.14) to the user/client devices106and to receive observation-related data from the user/client devices106. In some embodiments, an observation template is an electronic form for assessing the performance of a target student (e.g., generated by the fixed-form engine205inFIG.2A). The observation template may include different header fields for describing the circumstances of an observation session. For example, the observation template may include fields for describing the identity of a target student, the date the observation was performed by an observer (e.g., teacher or administrator), and how the results of the observation should be distributed (e.g., by reports) and stored (e.g., for later use), etc. Additionally or alternatively, the observation template may include assessment fields for describing the performance (e.g., execution, compliance, effectiveness, and/or other qualities) of the target student during the observation, data about prior observational assessments of the target students, data about other observers, etc. In some embodiments, the assessment fields may include data describing predefined questions and user-selectable or user-definable answers; fields for user-definable questions and/or answers; comment fields for providing a description of the target student; rubrics; etc. In these or other embodiments, the assessment fields may state a goal, objective (e.g., for mastery), effectiveness expectation (e.g., projections based on scores), or other metric, and include one or more indicators assessing how the target student is meeting that goal, objective, effectiveness expectation, or other metric. For example, the objective might be "students develop to meet the vision, mission, values, beliefs and goals of the organization (e.g., school), collaboratively determining the processes used to establish these attributes, and facilitating their integration into the life of the organization community," and the selectable indicators assessing whether the student is partially proficient at meeting this goal may state that the vision, mission and values are: "developed through a collaborative process," "publicly available," "part of routine," and "routinely updated" by the target student (i.e., from a particular grade). In this example, if only some of these indicators are met, then the target student is deemed partially proficient at the goal. If all are met, additional indicators evaluating whether a target subject is proficient (has attained a high level beyond that expected), accomplished, or exemplary at meeting this goal are considered and selected if appropriate.
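The tiered indicator logic in this example might be sketched as follows: meeting some indicators at a tier yields that rating, and only meeting all of them opens consideration of the next tier. The tier labels and the assess_goal function are assumptions extrapolated from the partially-proficient case described above.

    # Illustrative sketch of tiered indicator evaluation; the ordering of
    # tiers beyond "partially proficient" is an assumption.
    def assess_goal(indicators_met: dict[str, list[bool]]) -> str:
        """indicators_met maps each tier to its checklist results,
        ordered from lowest tier to highest."""
        tiers = ["partially proficient", "proficient", "accomplished",
                 "exemplary"]
        rating = "not proficient"
        for tier in tiers:
            results = indicators_met.get(tier, [])
            if results and any(results):
                rating = tier
            if not (results and all(results)):
                break  # higher tiers considered only if all are met
        return rating

    print(assess_goal({"partially proficient": [True, True, False]}))
    # -> "partially proficient"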
The observation templates may also include assignment fields for recommending, assigning and/or integrating electronic resources (e.g., video) by a teacher, and fields for defining assignment parameters for the electronic resources (e.g., task timers, wait times, etc.), as described in further detail below. In some embodiments, suggestions for the assignment fields may be populated in real time by the assignment platform226(particularly, the assignment manager260) in response to sending the observation data. The content of the observation templates may be displayed to users via user interfaces generated (FIG.14) and displayed by the user/client application108. The user interfaces displaying the content of an observation template to a user (e.g., student, teacher, or administrator) may also provide functionality for completing the various fields of the template. For example, while observing a target subject in the field, an observer or user114may interact with interface elements presented by the user/client application108to input information about the circumstances of the observation and the target's performance. For instance, the observer or user114may input the location where the observation session took place; the date and time of the observation session; the identity of the target student's audience (if any); information about the identity of the observer (e.g., teacher or administrator); information about the observer's position and/or relationship to the target student (e.g., subject teacher); options for storing and distributing the results of the observation; etc. The observer or user114may also provide input describing the performance of the target student (e.g., teacher comments), such as inputting answers to questions about various aspects of the target student's performance, etc. In some embodiments, an observation template may include predefined questions and answers for assessing the compliance of a target student with various predetermined requirements. For example, the requirements may be based on institutional policy, legislated practices, or industry standards, and the questions may be directed to whether or not a target student is meeting those requirements/standards. In these embodiments, the same template may be used repeatedly by an observer to record his/her observations of a target student over time or of a number of different target subjects. In other embodiments, various different templates may be used for the observational assessments of a target student. The structure and content of the observation templates, or portions thereof, may be user-defined or may be automatically generated by the observation engine221using standards data stored in the data store113aor received from another entity of the system100a, such as the third-party server117. The user/client application108may transmit observation-related data, including input provided by the observer (e.g., teacher or administrator) during the assessment of the target student, to the observation engine221for storage. For example, the observer (e.g., teacher or administrator) may instruct the client application108to save a completed observation template as an observation file in a local repository, and then transmit it to the observation engine221via the network or cloud platform102for storage in the data store113a.
The observation file includes the information from the template upon which it is based, along with the observations (e.g., evaluations, ratings, compliance assessments, and comments), assignments, and/or other information input by the observer (e.g., teacher or administrator) during the observation. In the illustrated embodiment, the observation engine221is coupled via the bus116(through the network102and bus121a) to the data store113ato store and retrieve observation-related data. For example, the observation engine221can store and retrieve the observation templates and the observation files received from the user/client application108. The observation engine221can also store, retrieve, and provide organization information associated with observers and target subjects. For example, in the educational setting, the observation engine221may access information associated with the organization of the school districts of a state or region; a school district; the schools of a school district; the teachers and administrators of a school district, a school, a subject, etc.; the classes in a district or school; the students of a school district, a school, a class, a subject, a teacher, an administrator, etc., from the data store113a. The assessment platform220is software including routines for providing the assessments as described above with reference toFIG.3. The planning platform222is software including routines for planning a lesson for a teacher according to the assessments. As illustrated inFIG.3, the planning platform222has the recommendation engine250, which is software including routines for receiving observation data related to a target student, identifying one or more electronic resources that correspond to the observation data, and providing data representing the one or more electronic resources for display. In some embodiments, the recommendation engine250is coupled via the network or cloud platform102to receive observation data from one or more user/client devices106. The observation data may characterize one or more aspects of a target student's performance during an observation session performed by an observer (e.g., teacher or administrator). In the illustrated embodiment, the recommendation engine250is coupled to the data store113avia the bus116(and121a) to store and retrieve data, and is coupled to the media data store111via signal line127and the network102to store and retrieve data. In some embodiments, the observation data may accompany a resource request for a list of electronic resources that correspond to the observation data. The recommendation engine250may receive the request from a client device106, and may satisfy the request by identifying one or more electronic resources that correspond to the request and providing a resource response including a summary of the one or more resources to the user/client device106for display to the user114of the client device106. For example, an observer of a target student may provide input reflecting observation data assessing the performance of the target subject, and the client application108, upon receiving that input, may transmit a request for recommended electronic instructional resources that can be assigned by the observer to the target student to help the target student improve his or her skills in a given area.
In some embodiments, to identify one or more electronic resources that correspond to the observation data accompanying the resource request, the recommendation engine250can compare the observation data to metadata associated with electronic resources to identify resources that match the observation data. For example, the recommendation engine250can search a resource library database that includes an index or catalog of the electronic resources that are available. For instance, the resource library database can include metadata describing each of the electronic resources. The metadata can include tags describing various characteristics of an electronic resource, a graphical image of the resource (e.g., a thumbnail), a description of the topic or subject matter that the resource is directed to, an author or authors of the resource, the publisher of the resource, the popularity of the resource including, for example, the number of users who have consumed the resource and the level of their interactivity with the resource, etc. The recommendation engine250can query the resource library database using the observation data or aspects thereof to identify resources that have corresponding metadata matching the observation data, either loosely or strictly. The electronic resources may be distributed among several data stores located across the network or cloud platform102or may be stored in a single data store. In the illustrated embodiment, the media store111and the data store113awork cooperatively to store the electronic resources. For example, media objects such as video, audio, e-books, vector-based files, documents, datasets, learning objects, etc., may be stored in the media store111, and lesson plans, learning progressions, curriculum maps, publications, portfolios, industry standards, etc., may be stored in the data store113a. In other embodiments, all of the electronic resources may be stored in and accessible from a single information source, such as the media store111, the data store113a-n, etc. In any of the foregoing embodiments, the resources stored in the data store may be cataloged, for example, by the recommendation engine250, in a single resource library database or in resource library databases distributed over the network102, and the recommendation engine250can query the resource library database or resource library databases for information matching various criteria or for information about the resources. In other embodiments, the electronic resources may be prescribed or predetermined in advance and pushed out by the student-growth platform118to the observer of a target student for assignment, or to the target student directly for consumption. In some embodiments, the observation data includes data quantifying an observer's assessment of a target student's performance. For example, the observation data may include an answer input by an observer in response to a question about the target student's performance in a particular area, and the answer may quantify how well a target subject is performing. In some embodiments, the answers to questions may be based on predefined performance scales that are defined to the recommendation engine250, and the recommendation engine250may use the answer to determine where the target student lies within that performance scale.
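A minimal sketch of that metadata matching follows; the in-memory catalog, the tag fields, and the loose-match rule (any overlapping tag) versus strict-match rule (all tags present) are assumptions standing in for the resource library database.

    # Sketch of matching observation data against resource metadata.
    # The catalog contents and match rules are illustrative assumptions.
    catalog = [
        {"id": "video-7", "tags": {"fractions", "grade-4"},
         "title": "Fractions Basics"},
        {"id": "ebook-2", "tags": {"reading", "author-purpose"},
         "title": "Author's Purpose"},
    ]

    def find_resources(observation_tags: set[str],
                       strict: bool = False) -> list[dict]:
        """Loose match: any tag overlaps. Strict: all tags must match."""
        matches = []
        for resource in catalog:
            overlap = observation_tags & resource["tags"]
            if (strict and overlap == observation_tags) or (
                    not strict and overlap):
                matches.append(resource)
        return matches

    print([r["id"] for r in find_resources({"fractions"})])  # ['video-7']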
For example, a target student's performance in a particular area may be assessed from worst to best using the following identifiers: "unsatisfactory," "needs improvement," "developing," "proficient," and "distinguished," or another such scaling method. If the observation data includes data identifying "unsatisfactory" as the answer to a particular question about a target student's performance in that area, the recommendation engine250may use this assessment to identify one or more electronic resources that provide foundational or basic learning in that particular subject area. If multiple electronic resources are identified by the recommendation engine250as corresponding to the observation data, the recommendation engine250can rank them based on one or more criteria. A criterion may be any attribute associated with the electronic resources. For example, the criterion may include a topic; the number of times an electronic resource has been interacted with, viewed, listened to, etc.; an author; a publisher; a date of the electronic resource; the number of users connected (or at the same level) to the target student who have interacted with the electronic resource previously; the number of times an electronic resource has been assigned to users having a similar assessment; etc. The recommendation engine250can generate the summary of electronic resources based on this ranking. For example, the top-ranked electronic resource may be listed first in the summary and the lowest-ranked resource may be listed last. In another example, the recommendation engine250may limit the summary to a certain number of top-ranked resources. In yet another example, the list of electronic resources may be sorted in order of rank and provided incrementally as needed by the user application108. In a further example, the recommendation engine250may rank the resources by those that have been most impactful/effective for students similar to the target student. For example, the recommendation engine250may use demographics, observation, achievement, interaction, standards, student, and/or teacher data, etc., to identify the resources that were the most effective at helping a set of similar target subjects develop. For example, a target student may be a fourth grader who is struggling with maintaining a level appropriate for the grade. The recommendation engine250, using demographic data and/or profile data, may identify other fourth graders who, based on their respective observation data and/or achievement data, also initially struggled with maintaining the level and who later became proficient at that level by watching a learning video or videos on particular subject areas (e.g., math), as reflected by their respective observation data and/or achievement data; the recommendation engine250may then recommend this video or these videos for assignment/consumption. The learning-progression platform224is software for placing the student in a learning-progression scheme and for following the learning-progression scheme prescribed for a target student. The assignment platform226is software including routines for receiving an assignment request requesting an assignment of one or more electronic resources to the target student for completion, and for assigning the one or more electronic resources to the target student based at least in part on the assignment request.
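The ranking step described above might be pictured as a weighted score over assumed criteria; the particular fields (view counts and prior effectiveness for similar students) and the weights below are illustrative assumptions, not the engine's actual criteria.

    # Illustrative ranking sketch: score resources on assumed criteria
    # and keep the top-ranked entries. Weights are assumptions.
    def rank_resources(resources: list[dict], top_n: int = 3) -> list[dict]:
        def score(r: dict) -> float:
            return (0.4 * r.get("views", 0)
                    + 0.6 * r.get("helped_similar_students", 0))
        return sorted(resources, key=score, reverse=True)[:top_n]

    resources = [
        {"id": "video-7", "views": 120, "helped_similar_students": 30},
        {"id": "video-9", "views": 300, "helped_similar_students": 5},
    ]
    print([r["id"] for r in rank_resources(resources)])
    # -> ['video-9', 'video-7']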
In some embodiments, the assignment platform226is coupled via the network102to receive the assignment request from one or more client devices106. The assignment platform226may interact with the user/client application108to assign various electronic resources to a target student. For example, during an observation of the target student, the observer inputs observational data indicating that the target student is in need of training on a particular skillset, and the recommendation engine250provides a summary of electronic instructional/training resources that are accessible via the student-growth platform118. The observer, using an interface rendered and displayed by the user/client application108, may assign one or more of the electronic resources to the target student. In response to the assignment, the assignment unit518(FIG.5) of the user application108generates and sends an assignment request to the assignment platform226, which identifies the electronic resource or resources that have been assigned, as further discussed below with reference to at leastFIG.3. The assignment platform226then records the assignment of the electronic resources in the data store113in association with a user profile for the target student. In some embodiments, an assignment is not activated by the assignment platform226until the corresponding observation file including the assignment is finalized and uploaded by the observation unit516(FIG.5) of the user/client application108. In other embodiments, one or more assignment requests are provided and recorded by virtue of the observation file being uploaded for storage by the user/client application108to the student-growth platform118. For example, upon receipt of the observation file, the assignment platform226extracts any assignments from the observation file and records them as described above. In some embodiments, to complete the assignment, the target student, who is a user of the student-growth platform, may be required to access the service and interact with the electronic resource. In other embodiments, to complete the assignment, the target student may be required to consume the electronic resource and then report on his/her implementation of the learning provided by the resource and/or provide his/her reflections on the learning provided by the resource, etc., via the user/client application108. For example, the target student may be required to submit, via the user/client application108, input describing his/her experience with trying out/implementing the principles taught by the assigned resource (e.g., an online learning video). Once this input has been received, the assignment platform226may flag the assignment as being completed in the data store113. Other configurations for completing an assignment are also contemplated. In some embodiments, the assignment request includes one or more assignment parameters or particulars. Each assignment parameter sets a condition that must be met in order to complete the assignment. For example, an assignment parameter includes a due date, a level of interaction with the electronic resource that is required to complete the assignment, an additional requirement that must be satisfied for completion of the assignment, etc. For instance, the observer may assign a video to the target student to view and may require the target student to write his/her thoughts or reflections about the video by inputting and transmitting them via an interface associated with the student-growth platform118.
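An assignment request and its parameters might be represented as below. The field names, and the rule that every parameter must be satisfied before the assignment is flagged complete, are assumptions consistent with the description above rather than the platform's defined schema.

    # Sketch of an assignment request with parameters; names are assumed.
    from dataclasses import dataclass
    from datetime import date

    @dataclass
    class AssignmentRequest:
        target_student: str
        resource_ids: list[str]
        due_date: date
        required_interaction: str = "viewed_to_completion"
        reflection_required: bool = False

    def is_complete(req: AssignmentRequest, interactions: dict) -> bool:
        """Every parameter must be met before flagging completion."""
        consumed = all(interactions.get(rid) == req.required_interaction
                       for rid in req.resource_ids)
        reflected = interactions.get("reflection_submitted", False)
        return consumed and (reflected or not req.reflection_required)

    req = AssignmentRequest("student-334", ["video-7"], date(2024, 6, 1),
                            reflection_required=True)
    print(is_complete(req, {"video-7": "viewed_to_completion",
                            "reflection_submitted": True}))  # True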
In the illustrated embodiment, the assignment platform226is coupled to the data store113a-nvia the bus116to store the one or more assignment parameters in association with the assignment to which they pertain. In these or other embodiments, one or more assignment parameters can be predefined and stored in the data store113a-n. A predefined assignment parameter can be applicable to all users who are assigned electronic resources, or may be customized for a particular group of users, such as those belonging to a particular school or grade or being observed by a particular observer (e.g., teacher). For example, for all videos that are assigned, a predefined assignment parameter can be set (e.g., by an observer via an associated interface of the student-growth platform118) requiring that the videos must be viewed to completion in order for the assignments of those videos to be considered satisfied. In another example, predefined assignment parameters can require videos to be viewed to completion in full-screen mode with the sound of the video set at an audible level in order for the assignments for the videos to be considered satisfied. In some embodiments, the assignment platform226generates and sends an electronic notification to the users associated with the assignment request. For example, the assignment platform226may send an email to the target subject and/or the observer(s) summarizing the assignment. The email may include a description of the electronic resource and an electronic link (e.g., a hyperlink including the uniform resource locator (URL) of the electronic resource) for directing the reader directly to the electronic resource. The email may also describe any assignment parameters, such as the date by which the assignment must be completed. In another example, the assignment platform226may send a similar message to the user via an internal messaging system, an instant-messaging system, a text-messaging system, or any other electronic-messaging system. In these embodiments, the assignment platform226is coupled to the data store113a-nto access information about the electronic resource and to store a copy of the electronic notification that was sent. The mastery-maker platform228is software including routines for enabling mastery of a particular subject. The details are described above with respect toFIG.3. The multi-dimensional response platform230is software including routines for binding various responses. The reporting platform232is software including routines for generating and sending reports. The reporting platform232may use the data stored and/or aggregated by the student-growth platform, such as achievement data, demographics data, student data, teacher data, observation-related data, interaction data, standards data, or any other data described herein, to generate the reports. For example, the reporting platform232, using the data aggregated and stored by the observation engine221and/or student-growth platform118, may generate/segment/organize a report by region, district, school, class, teacher, student(s), class size, gender, ethnicity, public policy, legislation, standards, requirements, etc. In a further example, the reporting platform232may process this data to make macro and/or micro qualitative assessments for inclusion in one or more reports.
For instance, the reporting platform232, based on the observation-related data, demographics data, achievement data, student data, teacher data, interaction data, and/or standards data, etc., may generate an aggregate effectiveness score for a region, body, or group, and/or individual effectiveness scores for each of the students/teachers of that region, body, or group. The reports may be generated by the reporting platform232to include any type of data, including textual, graphical, video, audio, and vector-based data, to provide rich, qualitative and quantitative analysis of the target subject(s), observer(s), and associated organization(s) or business(es), including their performance (e.g., execution, effectiveness, compliance, problem areas, etc.). In some embodiments, the reporting platform232may analyze two or more data types, such as observation-related data, achievement data, and/or student data related to the target subject, to generate an effectiveness rating for that target subject. Analyzing two or more data types to generate an effectiveness rating is advantageous, as it can provide a more reliable effectiveness rating for a target subject compared to an effectiveness rating generated from a single data type. For instance, the observation data for a given teacher may reflect, for a particular evaluation period, that the teacher received a rating of "proficient" for four of the metrics evaluated and a "needs improvement" rating for three of the metrics. However, during this same evaluation period, the student data may reflect that the students of this teacher gave the teacher a "proficient" or "excellent" rating in every category surveyed, and the achievement data for these students may reflect standardized test scores, which meet or exceed legislative requirements. As a result, the effectiveness rating generated by the reporting platform232can balance the "needs improvement" ratings against the positive survey and test score results to produce a more accurate overall "effectiveness" rating for the teacher. In other examples, the reporting platform232may determine the assessments of the target subject described by each data type to be consistent, and thus as providing further evidence/support for a particular effectiveness rating. In some embodiments, the reporting platform232can generate a report based at least in part on the receipt of interaction data describing an interaction between the target subject and the at least one electronic resource that was assigned. The reporting platform232may be coupled to the resource-finder engine252(FIG.3), the memory404, and/or the data store113a-nto receive the interaction data. For example, to generate a report, the reporting platform232may analyze user behavior in interacting with one or more electronic resources provided by the resource-finder engine252, and generate a report summarizing and/or detailing this analysis. In particular, when a user consumes an electronic resource, the resource-finder engine252of the student-growth platform118may receive and store interaction data describing the interaction in the data store113a-nin association with a user profile of the user, and the reporting platform232may access the interaction data to analyze the user interaction and results and generate a report describing the user interaction and results.
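One way to picture the balancing described above is a weighted combination of per-source ratings; the 0-to-1 scale and the weights below are illustrative assumptions, not the platform's actual computation.

    # Hedged sketch of blending observation, survey, and achievement
    # ratings into one effectiveness rating. Weights are assumptions.
    def effectiveness_rating(observation: float, survey: float,
                             achievement: float) -> float:
        """Each input is a 0..1 rating derived from one data type; the
        output balances them instead of relying on a single source."""
        return 0.4 * observation + 0.3 * survey + 0.3 * achievement

    # Mixed observations offset by strong surveys and test scores:
    print(round(effectiveness_rating(0.55, 0.9, 0.85), 2))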
For example, a user may access an electronic resource, page through an electronic book, download files included with or embedded in a webpage, complete a survey associated with any electronic resource, view a video file, listen to an audio file, comment on passages of an interactive electronic book, submit lesson plans, submit curriculum maps, download documents, upload files including video, audio, text, graphics, etc., participate in communities or groups defined by his/her social connections, or otherwise use any other functionality provided by the user/client application108(e.g., seeFIG.5) to interact with an electronic resource. The student-growth platform118then receives interaction data describing these interactions from the user/client application108or another entity of the system, such as the media-distribution server115, and stores interaction data describing the interaction in the data store113a-n. In another example, if an observer assigns a target student the task of watching a video on achieving effective scores via the student-growth platform118, the reporting platform232can generate status updates about the target student's progress on watching the video and send them to the observer (e.g., teacher). The reporting platform232can also report on the target student's additional efforts to develop his/her skills by reporting on what other electronic learning resources the target student has consumed since the observer made the assignment, provided the target student provides his/her consent for doing so via an associated privacy-settings interface. In some embodiments, the reporting platform232generates a report in response to receiving a trigger signal. In some embodiments, the trigger signal may be generated by the student-growth platform118upon the completion of an assignment by a target user and transmitted to the reporting platform232. In other embodiments, the trigger signal may be generated in response to a request for a report, for example, from a user of the student-growth platform via an associated user interface. For example, an observer who observed a target student and assigned the target student one or more electronic resources may input a command into his/her user device106via the user application108commanding that a report be generated describing the target student's progress on completing the assignment. Responsive to receiving the command, the user application108may generate and send a report request via the network102to the reporting platform232, thus triggering the reporting platform232to generate and send the report for display to the target student, observer, an administrator, a combination of the foregoing, etc. In other embodiments, the reporting platform232may automatically generate the report at certain intervals, times, etc. For example, the reporting platform232may automatically generate reports for all outstanding assignments and send them to the administrator and/or observer users114who oversee the target students to which the outstanding assignments correspond. In some embodiments, the reporting platform232may transmit the report to the user application108for display to the user114, provide the report for download as a portable document, transmit the report via electronic message (e.g., via email) to one or more other users114associated with or responsible for the target subject, etc.
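The trigger handling might be sketched as a small dispatcher that accepts either trigger type described above, generates the report, and hands it off for delivery; all names here are hypothetical.

    # Illustrative trigger-to-report dispatcher; names are hypothetical.
    def generate_report(target: str) -> str:
        return f"Progress report for {target}"

    def on_trigger(signal: dict) -> str | None:
        """Handle the two trigger types described above: assignment
        completion and an explicit report request."""
        if signal.get("type") in ("assignment_completed", "report_request"):
            return generate_report(signal["target"])
        return None  # unrecognized signals produce no report

    print(on_trigger({"type": "report_request", "target": "student-334"}))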
The reporting platform232is also capable of analyzing the performance/effectiveness of an observer/student, and generating and providing a report describing the observer's/student's effectiveness/performance to the observer and other users114, such as an administrator of the observer. In some embodiments, to analyze the effectiveness/performance of the observer/student, the reporting platform232compares achievement data and observation-related data associated with the target to determine if the performance assessment of the target reflected by the observation-related data is accurate and consistent. The achievement data can include any type of achievement data associated with the target. For example, depending on the target student's performance, the achievement data may include test scores for the target, reviews by teachers, performance reviews, compliance with requirements/standards, etc. The observation data can include any data associated with the performance assessments made by an observer, such as the observation files associated with the observer and/or target students observed by the observer. In these or other embodiments, the reporting platform232can track the observational assessments performed for the target student and compare them for consistency based on substance, frequency, etc. Based on the observation-related and achievement data, the reporting platform232can determine the accuracy and consistency of a performance assessment (e.g., execution, effectiveness, compliance, performance, trending, and other metrics, etc.) of the target students. In some embodiments, the reporting platform232can analyze the achievement data to determine an achievement-based performance assessment for the target student; can analyze the observation-related data to determine an observation-based performance assessment for the target student; and can compare the achievement-based and the observation-based performance assessments to further determine if the observation-based performance assessment of the target student is accurate/consistent. In other embodiments, the reporting platform232may compare the observational assessments by one observer of a target student to the observational assessments of the same target student by other previous observers to determine the accuracy of the observer's assessments. For example, if an observational assessment of a target student by a first observer is grossly inconsistent with the observational assessments of that target student by other observers on the same or similar subject matter, the observational assessment of the first observer may be flagged and reported to an administrator of the observer for further review/scrutiny. In some embodiments, the accuracy of the observation-based performance assessment can be determined based on whether the achievement-based and the observation-based performance assessments are consistent. For example, the reporting platform232may determine the observation-based performance assessment to be inaccurate if the observation-based performance assessment is negative and the achievement-based performance assessment is positive, or conversely, if the observation-based performance assessment is positive and the achievement-based performance assessment is negative. Further, the reporting platform232may determine the observation-based performance assessment to be accurate if both the observation-based and achievement-based performance assessments were negative or positive.
However, if both the observation-based and the achievement-based performance assessments were neutral, the reporting platform232may report that the accuracy of the performance assessment made by the observer could not be verified. The reporting platform232can generate a report describing the determination it made about the accuracy of the observer's performance assessment of a target subject and provide the report for display to the observer(s) or one or more other users, such as an administrator of the observer(s). In some embodiments, the reporting platform232can generate the report in response to receiving a request from a client device106of an administrator/user114who oversees the observer. In other embodiments, the reporting platform232can automatically generate and send the report to the administrator via an electronic message, such as an email, an internal messaging application provided by the student development application, a text message, etc. In some embodiments, the accuracy of all of the observer's performance assessments of a particular target student or multiple target students may be determined by the reporting platform232and included in the report. For example, the observer's overall accuracy in performing the observational assessments may be computed over time by the reporting platform232to determine if the observer is consistently inaccurate with his/her observations. Additionally, the reporting platform232may compare the accuracy of one or more of an observer's assessments of a target student to the assessments of that target student by other observers to determine whether they are consistent. If not, information describing the inconsistencies may be included in the report. The reporting platform232may also determine whether an observer is properly performing the observational assessments and can include this determination in the report. In some embodiments, the reporting platform232may analyze the observation files for some or all target students observed by the observer to determine the level and quality of feedback provided by the observer about those students. For example, if the reporting platform232determines that the assessments (e.g., answers, ratings, comments, etc.) for the target students made by the observer in the observation files are all the same or substantially similar, the reporting platform232may determine that the observer is simply making the same assessments for each target student and is not performing the assessments as required. The reporting platform232may also make a determination as to the quality of one or more assessments performed by an observer based on the level and/or variety of feedback included in the observation file(s) for one or more target students. The reporting platform232may store any reports and/or data generated by it in the data store113a-nfor later access by the reporting platform232or any other component of the student-growth platform118, such as an administrative module (not shown) of the student-growth platform118that provides administrator/users access via the client application108to statistics and reports about the users114of the student-growth platform118that the administrator oversees. In the depicted embodiment, the reporting platform232is coupled to the data store113a-nvia the bus116, and to the network/cloud platform102and the third-party server117, to receive the achievement data.
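The accuracy rule described above (consistent signs are accurate, opposite signs are inaccurate, any neutral assessment is unverifiable) reduces to a small comparison. The following Python sketch assumes assessments are coarse labels, whereas the platform presumably works with richer data.

```python
def assess_accuracy(observation_based: str, achievement_based: str) -> str:
    """Compare an observation-based and an achievement-based assessment.

    Each argument is assumed to be one of "positive", "negative", or
    "neutral"; the return value mirrors the reporting outcomes above.
    """
    if "neutral" in (observation_based, achievement_based):
        return "unverifiable"   # accuracy cannot be confirmed either way
    if observation_based == achievement_based:
        return "accurate"       # both positive or both negative
    return "inaccurate"         # opposite signs; flag for review

assert assess_accuracy("negative", "positive") == "inaccurate"
assert assess_accuracy("positive", "positive") == "accurate"
assert assess_accuracy("neutral", "negative") == "unverifiable"
```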
Continuing the example of receiving the achievement data, the reporting platform232can periodically retrieve the achievement data from the third-party server117via an API and store it locally in the data store113a-nfor later access or use. In another example, the reporting platform232can retrieve the achievement data in real time via the API for analysis and compare it to the observation-related data from the observation file. However, in other embodiments, the reporting platform232may retrieve the achievement data from any information source communicatively coupled to the student-growth platform118via the network102. The reporting platform232provides numerous additional advantages, including providing the target student a mechanism for reporting on the completion of an assignment, providing an observer/user a mechanism to monitor whether the target student(s) he/she observes complete the assignments assigned to them, analyzing and reporting on a student's performance and work quality, determining/rating the effectiveness of target students, etc. Additional functionality of the student-growth platform118and its observation engine221, and their corresponding components, is further described below.
Example Client Device106
FIG.5is a block diagram of an example user/client device106. In the depicted embodiment, the client device106includes a client application108. The client device106also includes a communication unit308, a processor302, a memory304, a display device310with a graphics adapter320, a display318, and an input device312, which are communicatively coupled via the bus306. In some embodiments, the functionality of the bus306may be provided by an interconnecting chipset. The communication unit308includes interfaces for interacting with other devices/networks of devices. In some embodiments, the communication unit308includes transceivers for sending and receiving wireless signals. For example, the communication unit308includes radio transceivers (4G, 3G, 2G, etc.) for mobile network connectivity, and radio transceivers for WiFi and Bluetooth® connectivity. In these or other embodiments, the communication unit308may include a network interface device (I/F), which includes ports for wired connectivity. For example, the communication unit308may include a CAT-type interface, USB interface, or SD interface, etc. In the depicted embodiment, the communication unit308is coupled to the network105(FIG.1) by the signal line104a-n. The processor302comprises an arithmetic logic unit, a microprocessor, a general purpose controller, or some other processor array to perform computations and optionally provide electronic display signals to the display device310. The processor302may communicate with the other components via the bus306. The processor302processes data signals and may comprise various computing architectures including a complex instruction set computer (CISC) architecture, a reduced instruction set computer (RISC) architecture, or an architecture implementing a combination of instruction sets. Although only a single processor is shown inFIG.5, multiple processors may be included. The client device106also includes an operating system executable by the processor302as discussed elsewhere herein, for example, with reference toFIG.1. The memory304stores instructions and/or data that may be executed by the processor302. The memory304communicates with the other components of the client device106via the bus306. The instructions and/or data comprise code for performing any and/or all of the techniques described herein.
In particular, the memory304includes a non-transitory computer-usable (e.g., readable, writeable, etc.) medium, which can be any apparatus or device that can contain, store, communicate, propagate or transport instructions, data, computer programs, software, code, routines, etc., for processing by or in connection with the processor302. A non-transitory computer-usable storage medium may include any and/or all computer-usable storage media. In some implementations, the memory304may include volatile memory, non-volatile memory, or both. For example, the memory304may include a dynamic random access memory (DRAM) device, a static random access memory (SRAM) device, flash memory, a hard disk drive, a floppy disk drive, a CD ROM device, a DVD ROM device, a DVD RAM device, a DVD RW device, a flash memory device, or any other mass storage device known for storing information on a more permanent basis. It should be understood that the memory304may be a single device or may include multiple types of devices and configurations. In some embodiments, the user/client application108is stored in the memory304and executable by the processor302. The display device310represents any device equipped to present output signals generated and provided by the user/client device106. In some embodiments, the display device310displays electronic images and data including, for example, user interfaces and formatted information. For example, the display device310may be any conventional display device, monitor or screen, such as an organic light-emitting diode (OLED) display, a liquid crystal display (LCD), an e-ink display, etc. In some embodiments, the display device310is a touch-screen display capable of receiving input from one or more fingers of a user114a-n. For example, the display device310may be a capacitive touch-screen display capable of detecting and interpreting multiple points of contact with the display surface. In some embodiments, the display device310may be coupled to the bus306via a graphics adapter320(shown within the display device310, but which may also be configured outside of it), which generates and provides display signals to the display device310. The graphics adapter320may be a separate processing device including a separate processor and memory (not shown) or may be integrated with the processor302and memory304. The input device312represents any device for inputting data on the client device106. In some embodiments, the input device312is a touch-screen display capable of receiving input from the one or more fingers of the user114a-n. The functionality of the input device312and the display device310may be integrated, and a user114a-nof the client device106may interact with the client device106by contacting a surface of the display device310using one or more fingers. For example, the user/client114a-nmay interact with an emulated (i.e., virtual or soft) keyboard displayed on the touch-screen display by using fingers to contact the display device310in the keyboard regions. In other embodiments, the input device312is a separate peripheral device or combination of devices. For example, the input device312includes a keyboard (e.g., a QWERTY keyboard) and a pointing device (e.g., a mouse or touchpad). The input device312may also include a microphone (e.g., for voice input) or other known peripheral devices.
Example User/Client Application108
Referring now toFIG.5, the user/client application108is software including routines for sending and receiving data to and from the other entities of the system, including, for example, the student-growth platform118, the media-distribution server115, and the third-party server117. In some embodiments, the user/client application108ais a web browser application for accessing the resources provided by the student-growth platform118and the media-distribution server115. For example, the student-growth platform118, operating in cooperation with the media-distribution server115, may be a web-based service, and the user/client application108may access various electronic resources provided by the service via uniform resource locators (URLs). In other embodiments, the user/client application108ais an application customized specifically for accessing the student-growth platform118, and more particularly, for cooperating and interacting with the observation engine119. In the depicted embodiment, the user/client application108provides a user114a-n(e.g., an observer) interacting with the client device106mechanisms for inputting, viewing, adding, modifying, and deleting observation-related data related to one or more other users/clients114a-n. The user/client application108may cooperate with the observation engine221(FIG.1A) to conveniently store and retrieve observation templates and files for viewing by the user. The user/client application108may, in some embodiments, send a resource request to the observation engine221to identify and provide recommended electronic resources that can be assigned to a user. The user/client application108may also send a request to the reporting module232(FIG.1A) to provide observation-related statistics and reports for display to the user114a-114nvia a report interface generated by the user-interface module514of the user/client application108. In the illustrated embodiment, the user/client application108includes a user-interface module514, an observation unit516, and an assignment unit518. The observation unit516, the assignment unit518, and the user-interface module514are communicatively coupled with each other and the other components502,504,508,510, and512of the client device106. The components are also coupled to the network102via the communication unit508(and line104) for communication with the other entities of the system100a. While not shown, in some embodiments, the user/client application108may include an authentication or verification module for authenticating the user114a-nto access the student-growth platform118. In some embodiments, the user/client application108, the user-interface module514, the observation unit516, and/or the assignment unit518are sets of instructions executable by the processor502to provide their respective functionality. In other embodiments, the user/client application108, the user-interface module514, the observation unit516, and/or the assignment unit518are stored in the memory504of the client device106and are accessible and executable by the processor502to provide this functionality. In any of these embodiments, the user/client application108, the observation unit516, the assignment unit518, and/or the user-interface module514may be adapted for cooperation and communication with the processor502and other components of the user/client device106.
In some embodiments, the observation-related data managed by the user/client application108may be locally stored in the memory504, remotely stored in any of the data stores113a-113n(via signal lines121a-121n) or the third-party server117, or may be stored in any combination of the foregoing. For example, an instance of the observation-related data may be stored locally on the user/client device106and remotely on the student-growth platform118, and the client/user application108may synchronize the information via the network105, either continuously or periodically, as the information changes. In some embodiments, the user/client application108may be a stand-alone application or may be integrated into another application operable on the user/client device106. The observation unit516is software including routines for sending and receiving observation-related data to and from the observation engine221(FIG.1A), cooperating with the interface engine306to display observation-related information to a user, and cooperating with the user-interface unit119to receive observation-related input from the user/client. In some embodiments, the observation unit516interacts with the observation engine221to receive observation templates and observation files for display to the user114a-nof the user/client device106and to send observation files to the observation engine221for processing and/or storage in the data store113a-n, as discussed above with reference to at leastFIG.1A. In some embodiments, the observation unit516can cooperate with the observation engine221via the network102to provide information about target students to an observer and provide functionality to the observer for assessing and tracking the performance and development of the target students. The observation unit516may also interact with the user-interface module514to provide administrative tools such as a reporting tool for viewing statistics and other analytical data, and/or an observational tool for assessing the performance of students, assigning instructional resources to students, and tracking completion of the assignments given to them. In some embodiments, the observation unit516interacts with the user-interface module514to display observation templates and files to a particular user, as discussed with reference to at leastFIG.14below. The observation unit516may be coupled to the user-interface module514to receive user input and display the information to the user114a-114nvia user interfaces generated by the user-interface module514, such as the observation interface discussed with reference toFIG.14below. For example, the observation unit516may send interface signals to the user-interface module514, and responsive to receiving these signals, the user-interface module514may generate and display user interfaces that correspond to the instructions carried by the interface signals. In another example, the user-interface module514may receive input signals from a user via the input device512and send those signals to the observation unit516for processing.
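This round trip (interface signals out, input signals back) can be pictured with a minimal Python sketch; the class and method names are illustrative assumptions rather than the application's actual structure.

```python
class ObservationUnit:
    """Sketch of the interface-signal round trip described above."""

    def __init__(self, ui_module):
        self.ui = ui_module
        self.answers = []

    def show_template(self, template):
        # Send an interface signal; the user-interface module renders it.
        self.ui.render({"view": "observation", "template": template})

    def on_input(self, input_signal):
        # Input signals captured by the user-interface module are routed
        # back here for processing, e.g., an answer to a template question.
        self.answers.append(input_signal["value"])

class UserInterfaceModule:
    def __init__(self):
        self.observation_unit = None  # wired up after both objects exist

    def render(self, interface_signal):
        ...  # generate and display the interface the signal describes

    def on_user_input(self, raw_event):
        # Forward input from the input device to the observation unit.
        self.observation_unit.on_input({"value": raw_event})

# Wiring: the two components hold references to each other.
ui = UserInterfaceModule()
unit = ObservationUnit(ui)
ui.observation_unit = unit
```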
In some embodiments, in cooperation with the user-interface module514, the observation unit516can receive user-related and observation-related information and display the data to the user, display observation templates to the user, populate observation templates with user input, save observation files based on the observation templates, transmit observation-related data such as observation files to the observation engine221or storage, receive observation-related statistics and reports and organize and display them to the user, receive electronic resources for assignment, consumption, etc., by the user, receive electronic communications from other users via the network102and display them to the user, etc. In some embodiments, an observer may, via a user interface rendered by the user-interface module, preselect options and/or be guided similarly in designing observation templates and appropriate follow-up activities. In some embodiments, the user-interface module514, in cooperation with the observation unit516, may generate a report dashboard/interface for viewing reports generated and provided by the reporting module232(FIG.1A) and received by the observation unit516. This dashboard provides numerous advantages, including providing an observer or administrator with detailed information about a given target student's performance (e.g., execution, effectiveness, compliance, etc.) over time. For example, the observer may be a teacher and may need to interact with a number of students to perform observational assessments of each of them. For each student, the teacher may, using the dashboard, access any previous observational assessments of that student; view an overall performance (e.g., execution, effectiveness, compliance, etc.) rating/summary of that student (scores assigned); view the performance (e.g., execution, effectiveness, compliance, etc.) ratings/summaries of that student over time; view statistics across all observational assessments of that student or a subset, such as the observational assessments performed for that academic year; quickly ascertain the areas the student has had problems with, has been working on, or has been improving in; review the test scores and evaluations of the student; view the electronic training resources the student has consumed/interacted with; and view any work product, lesson plans, videos, presentations, etc., the student has uploaded, the learning communities and groups the student has interacted with, any mentors the student has been working with, etc. Using this information, the teacher may quickly get up to speed on where the student stands, and thus provide pertinent and relevant observations (e.g., evaluations, ratings, suggestions, comments, etc.), assignments, etc., during the observation session to be performed. The assignment unit518is software including routines for generating and sending resource requests, receiving resource responses including one or more electronic resources identified by the assignment platform226, and assigning one or more electronic resources to a user. In some embodiments, the assignment unit518cooperates and interacts with the assignment platform226to identify one or more electronic resources that can be assigned to a user, as discussed above with reference to at leastFIG.1A. The assignment unit518is coupled to the user-interface module514to receive user input and provide information to the user/client114a-nvia user interfaces generated by the user-interface module514.
In some embodiments, responsive to receiving user input signals, the assignment unit518can generate a resource request or an assignment request. In some embodiments, the input signals may specify which electronic resource(s) is/are being assigned and the user the resource(s) is/are being assigned to. For example, an observer performing an observation of a target student may select one or more of the videos identified by the recommendation engine250(FIG.2A) and displayed via the user-interface module514, such as in the observation interface1400illustrated inFIG.14. The assignment unit518may also assign supplemental instructional, prescriptive, and/or discipline-related resources in response to one or more of these resources being assigned by an observer (e.g., after receiving a report about an initial assignment). In some embodiments, the assignment unit518assigns one or more of these resources by generating and sending an assignment request and receiving an assignment confirmation, as discussed elsewhere herein. In addition, the assignment unit518may provide tools/functionality to the observer to provide the target student with feedback, follow up with the target student about an assignment or an aspect of the observational assessment performed, provide recommendations of additional electronic resources to assign to the target subject upon completion of an initial assignment by the target student, etc. The user-interface module514is software including routines for rendering user interfaces and for receiving user input. The user-interface module514may be coupled to the input device512via the bus506to receive input signals from the user114a-n. For example, an observer/user114a-ncan select an answer to an observation-related question using the input device512, and the user-interface module514receives signals describing the answer. The user-interface module514may store the input signals in the memory504for retrieval by the other elements of the client application108, such as the assignment unit518, or may provide the signals directly to the other elements of the user/client application108. The user interfaces generated by the user-interface module514include interfaces for inputting, modifying, and deleting information, displaying notifications, rendering video, displaying images and text, displaying vector-based content, sending and storing information, etc. In some embodiments, the user interfaces include user interface elements that allow users/clients114a-nto interact with the user/client device106and input information and commands, such as text entry fields, selection boxes, drop-down menus, buttons, virtual keyboards and numeric pads, etc., as further discussed below with reference toFIG.14.
Example Methods
Referring now toFIG.4, an example method400for prescribing electronic resources based on observational assessments is described. The method400begins by identifying402one or more electronic resources based on observation data. In some embodiments, the recommendation engine206identifies402the one or more electronic resources by querying a library of electronic resources for resources that match one or more aspects of the observation data. If a plurality of electronic resources is identified, the recommendation engine206can rank and filter the electronic resources and thus recommend which electronic resources are the most suitable for a target subject.
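A minimal Python sketch of this identify-rank-filter step (block 402) follows. The metadata fields, the strict/loose distinction (described further below with reference to method 600), and the rating-based scoring are illustrative assumptions, not the recommendation engine's actual logic.

```python
def identify_resources(observation_data: dict, library: list,
                       strict: bool = False, top_n: int = 5) -> list:
    """Query a resource library for metadata matches, then rank and filter.

    `library` is assumed to be a list of dicts carrying "skill", "tags",
    and "rating" metadata; the rating-based score is a stand-in for
    whatever relevance model the recommendation engine actually uses.
    """
    skill = observation_data["skill"]  # e.g., the area found lacking
    matches = []
    for resource in library:
        if strict:
            hit = resource.get("skill") == skill          # precisely directed
        else:
            hit = skill in resource.get("tags", [])       # generally related
        if hit:
            matches.append((resource.get("rating", 0), resource))
    matches.sort(key=lambda pair: pair[0], reverse=True)  # rank by score
    return [resource for _, resource in matches[:top_n]]  # keep the best
```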
Next, the method400provides404a summary of the one or more electronic resources to an observer, such as a supervisor or evaluator, for assignment to a subject that he/she is observing. For example, the client device106of the observer may receive a summary of training videos or other resources identified and ranked by the recommendation engine206and may display the summary to the observer via a user interface. The observer may use the interface to preview the videos or other resources and/or assign one or more of the videos or other resources to the target subject. Next, the method receives406an assignment of one or more electronic resources. In some embodiments, the assignment engine208receives an assignment request describing the one or more electronic resources that are to be assigned to the target subject by the assignment engine208. The method400continues by associating408the assignment of the one or more electronic resources with the target subject. In some embodiments, to associate the assignment, the assignment engine208stores the assignment request or information therefrom in the data store210in association with a user profile of the target subject. The method400is then complete and ends. FIG.6describes an example method600for developing student growth. The method600begins at601, including one or more operations for evaluating the time elapsed since a last set of assessments for a student were generated. The method600proceeds to602, including one or more operations for utilizing the smart-gradient project (SGP) algorithm to assess and position a student (or student group) into the scaled learning-progression platform based on the time elapsed. The method600proceeds to603, including one or more operations for combining the computer-adapted-testing (CAT)+SGP+time-based projection+entry points to establish the entry point to the curriculum. The method600proceeds to604, including one or more operations for normalizing the CAT outcomes with practice assignments to create an integrated model of mastery against the learning progression. The method600proceeds to605, including one or more operations for extending actual testing using intelligent inferencing based on the relationship of objects within the learning progression. The method600proceeds to606, including one or more operations for utilizing a universal skills pool to enable curriculum mapping to facilitate lesson planning by mapping to the teacher's chosen curriculum, pacing guide, or textbook. The method600proceeds to607, including one or more operations for utilizing a multi-dimensional response item model to bind assignments from assessment, instruction, and practice assignments into a unified scale. The method600proceeds to608, including one or more operations for viewing the mastery level of students by assignment score, by probed assessment, by general-outcome-measurement (GOM) assessment, and by integrated models. The method600proceeds to609, including one or more operations for enabling the lesson-planning engine to link assessment to curriculum. The method600proceeds to610, including one or more operations for enabling the lesson-planning engine to link assessments to government-created learning standards. The method600proceeds to611, including one or more operations for including probabilistic algorithms. The method600proceeds to612, including one or more operations for enabling the lesson-planning engine to link assessments to instruction resources from many sources.
The method600proceeds to613, including one or more operations for enabling the lesson-planning engine to combine assessment sources with standards, curriculums, instructional resources, and assignment delivery systems. The instruction resources include metadata for electronic resources, such as audio files, video files, vector-based files, electronic books, electronic publications, spreadsheets, word processing documents, presentational slides, etc. In some embodiments, the electronic resources may be derived from storage in the data store410and/or the media data store111along with metadata describing the contents and characteristics of the electronic resources. In other embodiments, metadata for the electronic resources is derived from the electronic resources themselves, for example by parsing header information included in the electronic resources. In some embodiments, the instructional materials may be retrieved from a resource library database updated to include the metadata for the electronic resources, including, for example, data describing the content and characteristics of the electronic resources and their stored location. At612, the lesson-planning engine receives observation data reflecting an observational assessment generated for a target student. In some embodiments, the observation data reflects an answer to a question from an observation template. For example, the observation data can describe how the target subject is performing with reference to a particular skill, requirement, standard, etc. Using the metadata associated with the electronic resources, the method600queries for one or more electronic resources that match the observation data. The match can be loose and allow electronic resources that generally pertain to the observation data to be identified, or may be strict and require that the electronic resources be precisely directed to the assessment reflected in the observation data. For example, if the target student is identified as lacking in his or her ability in a particular area, a loose match may identify resources generally related to what is lacking, and a strict match may identify resources that specifically relate to what is lacking. FIG.7describes an example method of assessing target students. The example method700begins at702with determining the time elapsed since a last observation assessment for a target student. The method700proceeds to704for conducting computer-adapted testing for the student to determine the educational level of the target student. The method700proceeds to706for determining the student growth percentile. It should be recognized that a student growth percentile, or SGP, compares a student's growth to that of his or her academic peers nationwide. Academic peers are students in the same grade with a similar achievement history on standardized assessments. The SGP is reported on a 1-99 scale, with lower numbers indicating lower relative growth and higher numbers indicating higher relative growth. For example, an SGP score of 90 means the student has shown more growth than 90 percent of students. The percentile rank (PR) and student growth percentile (SGP) are very different metrics, as a PR is an achievement score that describes a single point in time while an SGP is a growth measure that describes student growth between different points in time. Both measures are norm-referenced, but they have different norming groups: the norming group for PR is all students in a particular grade level, while the norming group for SGP is each student's own academic peer group. Both PR and SGP are reported on a scale of 1-99, and at least two tests are typically required to report an SGP.
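In simplified form, the SGP arithmetic is a percentile rank of a student's growth within the growth distribution of his or her academic peers. The Python sketch below assumes growth is the change between two standardized scale scores and a non-empty peer group; production SGP methodology uses quantile regression rather than this rank-based stand-in.

```python
def student_growth_percentile(student_growth: float, peer_growths: list) -> int:
    """Percentile (1-99) of a student's growth among academic peers.

    Peers are students in the same grade with a similar achievement
    history; growth here is assumed to be the change between two
    standardized scale scores, so at least two tests are required.
    """
    below = sum(1 for g in peer_growths if g < student_growth)
    percentile = round(100 * below / len(peer_growths))
    return max(1, min(99, percentile))  # clamp to the reported 1-99 scale

# A student who outgrew 90 percent of peers reports an SGP of about 90.
assert student_growth_percentile(25.0, [10.0] * 9 + [30.0]) == 90
```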
The method700proceeds to708to integrate all the scores into a unified score for placement of the target student by the learning-progression engine. FIG.8describes an example method800for generating a lesson plan. The method800begins and proceeds to802, including one or more operations for generating an expected score for a target (student). The example method800proceeds to804, including one or more operations for generating an estimated pace for the target (student). The example method800proceeds to806, including one or more operations for selecting skills appropriate for the estimated pace for the target student. The example method800proceeds to808, including one or more operations for recommending a curriculum for the target student. The example method800proceeds to810, including one or more operations for finding resources that match the curriculum. The example method800then includes one or more operations for generating a lesson plan for the target student. FIG.9describes an example method for creating and assigning assignments in accordance with the lesson plan. The example method900begins and proceeds to902, at which point the example method receives an assignment request generated by the teacher or the student. The example method900determines, at904, whether the request includes a preview request for previewing the resource. If so, the method900provides906the selected resource for the assignment indicated in the preview request for presentation to the observer. In some embodiments, the electronic resource is provided by the student-growth platform118and/or the media-distribution server117via the network102to a user/client device106of the observer (e.g., student or teacher). In other embodiments, other entities coupled to the network102may provide the electronic resource. By way of example, an observer who received a list of electronic resources from the recommendation engine250via the client application108can preview one or more of the electronic resources to learn more about the resource or resources, determine whether the subject matter of the resource is appropriate for the target subject, etc. If the method900determines at904that the request does not include a preview request, the method900then determines at908whether the request includes an assignment request that requires one or more electronic resources for a target subject for completion. If so, the method900determines910if any assignment particulars or parameters are associated with the assignment request. In some embodiments, an assignment particular places a condition on how the assignment of an electronic resource is to be completed. For example, the assignment particular may be a due date by which the target must interact with the electronic resource. As a further example, if the electronic resource is a video, the assignment particular may be a due date by which the target must watch the video using an interface associated with the student-growth platform118. If it is determined at908that the request does not include an assignment request for resources, the method900is then complete and ends. Next, the method900, at912, merges the one or more electronic resources with the assignment for the target based on the one or more assignment particulars or parameters. In some embodiments, the method900may assign912the one or more electronic resources by storing a record of the assignment in the data store410(FIG.4) in association with a user profile of the target. The record can include information describing the one or more electronic resources and the one or more assignment parameters. The method900is then complete and ends.
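The assignment record described in block 912 might look like the following Python sketch; the field names and the data store interface are illustrative assumptions, since the actual schema is not specified.

```python
from dataclasses import dataclass, field
from datetime import date
from typing import Optional

@dataclass
class AssignmentRecord:
    """Record stored in association with the target's user profile (block 912).

    Field names are illustrative; the actual data store schema is not
    specified by the description.
    """
    target_id: str
    resource_ids: list
    particulars: dict = field(default_factory=dict)  # e.g., {"due_date": ...}

def assign_resources(data_store, target_id: str, resource_ids: list,
                     due_date: Optional[date] = None) -> AssignmentRecord:
    """Store an assignment record, attaching a due-date particular if given."""
    particulars = {"due_date": due_date} if due_date else {}
    record = AssignmentRecord(target_id, resource_ids, particulars)
    data_store.save(record, profile=target_id)  # associate with the user profile
    return record
```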
FIGS.10and11describe an example method1000for monitoring and reporting on assignments. The method1000begins by monitoring1002the progress of an assignment. The assignment may include the assignment of one or more electronic resources to a target for completion/interaction by the target subject. The assignment may also include one or more assignment parameters that dictate how the assignment should be completed by the target, and the method1000can analyze the assignment parameters to determine if the assignment has been completed. In some embodiments, the reporting platform232(FIG.3) is configured to monitor the status of the assignment, including whether the assignment has been fully completed, is in progress, or has not begun. The method1000continues by exchanging1004communications between the target and the observer of the target. In some embodiments, the method1000facilitates the exchange by providing the contact information (e.g., an electronic messaging address) of the target to the observer and vice versa. In other embodiments, the method1000exchanges communication by relaying electronic messages between messaging accounts of the target and the observer using an internal messaging service. Exchanging communication using other messaging services, such as email, instant messaging, SMS, etc., is also contemplated. In these embodiments, the method1000may store a record of any communications exchanged between the target and the observer for later reference and retrieval. Exchanging communication between the observer and the target is advantageous in a number of respects, including that it provides a feedback loop between the target and the observer. For example, the target may communicate questions to the observer about what specific areas the target should focus on improving when interacting with an electronic resource assigned to him/her by the observer, and the observer may provide feedback to the target. In some embodiments, the communications exchanged by the method1000may be included in a report generated by the reporting module232to summarize the interaction between a target and an observer. Next, the method1000determines at1006the completion of the assignment. For example, the method1000can determine whether the assignment was successfully completed, was never begun, or was in progress at the conclusion of the time set for completing the assignment. The method1000then provides at1008the grading frame to the observer and updates the target profile to reflect the completion. In some embodiments, the reporting module232updates a record stored in the data store410with data reflecting the completion. The method1000continues by generating1010a report describing the status of the assignment and providing it to the observer1012and/or other users. The report may include the completion determined by the method in block1006, any electronic communication exchanged between the target subject and the observer in block1004, and any other information about the assignment, including a description of the electronic resource(s), information from the observation file associated with the assignment, statistics and results from other observational assessments performed previously of the target subject, any related industry standards, performance benchmarks, requirements, etc.
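The status determination in blocks 1002 and 1006 that feeds this report can be sketched as a small classification over the target's logged interactions; the record and interaction shapes below are illustrative assumptions.

```python
from datetime import date

def assignment_status(record: dict, interactions: list, today: date) -> str:
    """Classify an assignment as complete, overdue, in progress, or not begun.

    `record` is assumed to hold "resource_ids" and an optional "due_date"
    particular; `interactions` is assumed to be the target's logged
    interactions with the assigned resources (blocks 1002 and 1006).
    """
    assigned = set(record["resource_ids"])
    touched = {i["resource_id"] for i in interactions if i["resource_id"] in assigned}
    finished = {i["resource_id"] for i in interactions if i.get("completed")}
    due = record.get("due_date")
    if assigned <= finished:
        return "complete"                 # every assigned resource completed
    if due is not None and today > due:
        return "overdue"                  # time set for completion has passed
    return "in progress" if touched else "not begun"
```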
The method1000then determines at1014whether the assignment was successfully completed. In some embodiments, this determination is based on the conclusion from block1006. If the method1000determines at1014the assignment to have been successfully completed, the method1000continues by updating the assignment list for the target at1016and then at1018generating a status report for the target. If the method1000determines at1014the assignment to have not been successfully completed, the method1000continues by updating the target status report and then proceeds to generate alerts and report on the target. The method1000is then complete and ends. FIG.12describes an example method1200for the learning-progression scenario. The method1200begins by presenting1202an observation template including learning progression questions and associated user-selectable/definable options to an observer of a target. In some embodiments, the user interface unit119(FIG.1A) displays the observation template upon receiving interface signals from the observation engine221. Next, the method1200receives at1204user input providing answers to a question and, based on the answers, the method1200determines at1206one or more electronic resources that relate to the answers. For example, the user interface unit119receives input signals providing observation data from the observer via the input device512, and the assignment manager260generates an assignment request based on the observation data and transmits it to the assignment player264. The assignment manager260, in reply, identifies one or more electronic resources and sends them to the assignment player264, and the assignment player264instructs the display device510to display the one or more electronic resources to the observer. The one or more electronic resources are then displayed1208by the method1200to the observer. Next, the method1200receives1210user input selecting one of the electronic resources, and determines1212whether the user input includes an instruction to present the resource for review for a lesson plan or assignment. If so, the method1200requests1214the electronic resource for presentation. In some embodiments, the method1200sends a presentation request to the server hosting the resource requesting that the server provide the electronic resource for presentation. For example, if the electronic resource is a video, the assignment player264receives a video stream from the media-distribution server115responsive to sending a preview request to the resource-finder engine252. If the user input does not include an instruction to present the resource, the method1200continues by determining1216whether the user input includes an instruction to assign the electronic resource to the target subject for completion. If so, the method1200requests1218the assignment of the electronic resource to the target.
In some embodiments, an assignment request is sent by the assignment manager260to the assignment player264via the network102requesting that the electronic resource be assigned to the target for completion. If the method1200determines1216that the user input does not include an instruction to assign the electronic resource, the method1200is then complete and ends. FIG.13describes an example method1300for assessing the performance of a target. The method1300begins by receiving1302achievement data for a target student and comparing, at1304, the achievement data to assessment data and scores associated with the target student. For example, the reporting platform232may access achievement data from the data store113(exported by the exporter engine337) or from the third-party server117and compare it to observation data also accessed from the data store113. In some embodiments, the observation data may be pulled from an associated observation file stored in the data store113. Based on the comparison, the method1300determines at1306whether a performance assessment of a target student meets goals for the target student and generates at1308a report describing the performance of the target student. For example, the reporting platform232can generate a report describing the determination it made about the target student's performance. The method1300provides at1310the report for access and display to an administrator, teacher, or other entity, and then completes and ends. It should be understood that the methods600-1300are provided by way of example, and that variations and combinations of these methods, as well as other methods, are contemplated. For example, in some embodiments, at least a portion of the methods600-1300represent various segments of one or more larger methods, and may be concatenated or various steps of these methods may be combined to produce other methods which are encompassed by the present disclosure. Additionally, it should be understood that the assignments of electronic resources and the reporting on the conclusions of the assignments, as described with reference to at least the methods600-1300, could be iterative, and thus repeated as many times as necessary to assist a target student with his or her growth and development. To illustrate various aspects of the system100aand the methods600-1300, the following non-limiting example is provided. A school or district administrator or other such third party may visit the classrooms of each teacher in his/her school to observe. The third party may launch the client application108on his/her wireless client device106, and once launched, the observation unit516(FIG.5) of the client application108may refresh a local repository with updated target information and observation templates received from the observation engine221via the network102. The third party, using an interface generated by the user-interface module514, may select previously completed observation files for a given student to view how the student performed during previous observation sessions.
Example User Interface
Referring now toFIG.14, an example observation or user interface (or dashboard display)1400for the functionalities of the student-growth platform118is described.
It should be understood that the example observation or user interface (or dashboard display) illustrated inFIG.14is provided merely by way of example, and that other user interface displays (with different criteria or user interactions) may be generated and displayed by the user/client application108to allow users114to interact with the systems100aand100band to allow the system100to present information to the users. For example, various user interfaces may be produced to display reports and statistics, display dialogs among the users (by a chat feature), set parameters and settings, send electronic communications, and view, listen to, and/or interact with the electronic resources provided by the student-growth platform, etc. As depicted inFIG.14, the observation interface1400includes a menu region1402and an observation region1404. The menu region1402includes a listing of students belonging to a particular school, class, or group1406. The menu region1402also includes a button1408for viewing suggested skills, which may be divided into domains (e.g., four domains including foundational skills, language, literature, informational text). The illustrated dashboard shows an example domain, literature. Selecting a student selector1406displays a corresponding observation file created/being created for that particular student. For example, in the depicted embodiment, the student selector1406for Jim Brown has been selected and a corresponding observation file for Jim Brown is being populated with assessment information by the observer in the observation region1404. Selecting the view suggested skills button1408creates a new observation file for a student from an observation template. In some embodiments, in response to the selection of the observation creation button, a dialog (not shown) displaying a list of users may be presented to the observer. In some embodiments, the list of users represents all of the students that are associated with a particular school, class, or group within a school. For example, in the educational setting, the list of users may include all of the students in a particular grade in a school, or may be a segmented list of students selected from all of the schools within a school district and their corresponding teachers and administrators. In some embodiments, this list is provided on demand to the observation unit516by the observation engine221via the network102and rendered for display by the user-interface module514. In other embodiments, the observation unit516may retrieve the list from a local repository and provide it to the user-interface module514for display. Using the user interface, the observer may then select who the target student is from the list of users, and responsive to receiving this input, the user-interface module514may render the observation interface1400for the target student similar to the one displayed inFIG.14. Variations of this observation interface are possible. An observation interface may display a dashboard and screenshots that may be specific to a particular subject. In some embodiments, hovering over a standards bar once a skill is selected displays the standard code and text. Changing the selection to standards view displays the state-specific standards code; hovering over the code displays the standard's text. The observation region may include a header region1410and a body region1412.
The header region1410includes fields for displaying who the target student of the observation is (e.g., Jim Brown) and which observation template is being used for the observation, and for inputting the date and time the observation session was started and completed. The header region1410also includes an options dialogue box for configuring settings, such as generating and sending a report and updating a user profile. For example, the observer may check a checkbox to set an option for generating and sending a report and for updating a user profile for storage in the data store113for later access. The body region1412includes elements for the observer to input his/her assessments made during the observation. For example, as depicted, the body region1412includes a region1414for the following: creating instructional groups, finding instructional resources, and indicating a performance level. There is a window (which may appear as a pop-up) for teacher activity indicating the teacher's objectives and lesson, along with a sample item. As depicted, the body region1412also includes a resource region1434for displaying one or more electronic resources. In some embodiments, the electronic resources displayed in the resource region1434are received from the recommendation engine250and displayed in the resource region1434responsive to the observer inputting information into the answer elements1416. For example, upon receiving the input from the observer, the observation unit516transmits a resource request to the recommendation engine250requesting that a list of related electronic resources be provided based on the input (e.g., observation data). The resource region1434, as depicted, includes a resource scrolling region1418, a scrollbar1424, one or more electronic resources1420, a resource description region1422, an assignment button1428, a preview button1430, and a due date button1432. The resource scrolling region1418provides the user with functionality to scroll through and select one or more of the various electronic resources displayed therein. The scrolling can be performed by interacting with the scrollbar1424or the resource scrolling region1418(e.g., swiping the resource scrolling region1418via a touch-sensitive display with an input element, such as a finger). The selecting can be performed by interacting with the representations of the electronic resources in the resource scrolling region. For example, selecting an electronic resource once selects the resource, and selecting it again unselects the resource. Multiple selection is also possible using known selection methods. Once one or more resources have been selected by the observer, they can be previewed or assigned using the corresponding preview and assignment buttons1430and1428. In some embodiments, selecting the preview button transmits a request for a selected electronic resource and, once received, displays the selected electronic resource(s) in a preview interface with interface elements allowing the user to view and interact with the electronic resource. For example, if the selected electronic resource is a video, selecting the preview button displays a media player for viewing the video. In some embodiments, selecting the assignment button1428sends an assignment request to the assignment unit518requesting the assignment of the one or more selected electronic resources to the target student, and in reply the assignment unit518may receive a confirmation response indicating that the one or more resources were successfully assigned.
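A hypothetical handler for this assign flow is sketched below in Python; the event wiring, object interfaces, and field names are assumptions for illustration only.

```python
def on_assign_clicked(selected_resources, target_id, due_date, assignment_unit, ui):
    """Send an assignment request for the selected resources (button 1428)
    and update the interface once a confirmation response arrives."""
    request = {
        "target": target_id,
        "resources": [r["id"] for r in selected_resources],
        "particulars": {"due_date": due_date},  # set via the due date button (1432)
    }
    confirmation = assignment_unit.send_assignment_request(request)
    if confirmation.get("ok"):
        ui.show_only(selected_resources)   # refresh region to the assigned items
        ui.set_button_mode("unassign")     # let the observer unassign if desired
```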
Once this response has been received, the scrollable resource region may be refreshed to display only the resources that were assigned, and the assignment button1428may change to an unassign button to indicate that the displayed resources have been assigned and to provide functionality for the observer to unassign them if desired. The due date button is an example of an input element for setting an assignment parameter. As depicted, when the due date button is selected, a calendar dialog is displayed for selecting a date by which the assignment of the one or more electronic resources should be completed. It should be understood that the observation interface1400could include any number of interface elements for setting assignment parameters. In some embodiments, the resource region1434may initially be hidden from display until the user inputs observation data into one or more of the answer elements1416. In other embodiments, the resource region1434may always be displayed, or may be hidden or displayed by selecting a corresponding expansion/contraction button (not shown). While only one assessment region1414and resource region1434are displayed in the depicted embodiment, it should be understood that numerous assessment regions1414and corresponding resource regions1434could be included. For example, there could be numerous standards and associated questions/indicators for measuring the target subject's performance during observation, and thus numerous corresponding resource regions for displaying electronic resources that correspond to the various assessments that have been made by the observer during the observation session. An example system and methods for prescribing electronic resources based on observational assessments have been described. In the above description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the present disclosure. It should be understood that the technology described in the various example embodiments can be practiced without these specific details. In other instances, structures and devices are shown in block diagram form in order to avoid obscuring the description. Reference in the present disclosure to "some embodiments," "an embodiment," "an example embodiment," "other embodiments," etc., means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the description. The appearances of the phrase "in some embodiments" in various places in the present disclosure are not necessarily all referring to the same embodiments. Some portions of the preceding detailed description are presented in terms of algorithms and symbolic representations of operations on data bits within a computer memory. These algorithmic descriptions and representations are the means used by those skilled in the data processing arts to most effectively convey the substance of their work to others skilled in the art. An algorithm is here, and generally, conceived to be a self-consistent sequence of steps leading to a desired result. The steps are those requiring physical manipulations of physical quantities.
Usually, though not necessarily, these quantities take the form of electrical or magnetic signals capable of being stored, transferred, combined, compared, and otherwise manipulated. It has proven convenient at times, principally for reasons of common usage, to refer to these signals as bits, values, elements, symbols, characters, terms, numbers, or the like. It should be borne in mind, however, that all of these and similar terms are to be associated with the appropriate physical quantities and are merely convenient labels applied to these quantities. Unless specifically stated otherwise as apparent from the following discussion, it is appreciated that throughout the description, discussions utilizing terms including, for example, "processing" or "computing" or "calculating" or "ranking" or "identifying" or "determining" or "displaying" or "receiving" or "conducting" or "collecting" or the like, refer to the action and processes of a computer system, or similar electronic computing device, that manipulates and transforms data represented as physical (electronic) quantities within the computer system's registers and memories into other data similarly represented as physical quantities within the computer system memories or registers or other such information storage, transmission or display devices. The present disclosure also relates to an apparatus for performing the operations herein. This apparatus may be specially constructed for the required purposes, or it may include a general-purpose computer selectively activated or reconfigured by a computer program stored in the computer. Such a computer program may be stored in a computer-readable storage medium including, for example, any type of disk including floppy disks, optical disks, CD-ROMs, and magnetic disks; read-only memories (ROMs); random access memories (RAMs); EPROMs; EEPROMs; magnetic or optical cards; flash memories including USB keys with non-volatile memory; or any type of media suitable for storing electronic instructions, each coupled to a computer system bus. The present disclosure can take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment containing both hardware and software elements. In a preferred embodiment, the present disclosure is implemented in software, which includes but is not limited to firmware, resident software, microcode, etc. Furthermore, the description can take the form of a computer program product accessible from a computer-usable or computer-readable medium providing program code for use by or in connection with a computer or any instruction execution system. For the purposes of this description, a computer-usable or computer-readable medium can be any apparatus that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device. A data processing system suitable for storing and/or executing program code will include at least one processor coupled directly or indirectly to memory elements through a system bus. The memory elements can include local memory employed during actual execution of the program code, bulk storage, and cache memories which provide temporary storage of at least some program code in order to reduce the number of times code must be retrieved from bulk storage during execution. Input/output or I/O devices (including but not limited to keyboards, displays, pointing devices, etc.)
can be coupled to the system either directly or through intervening I/O controllers. Network adapters may also be coupled to the system to enable the data processing system to become coupled to other data processing systems or remote printers or storage devices through intervening private or public networks. Modems, cable modems, wireless adapters, and Ethernet cards are just a few of the currently available types of network adapters. Finally, the algorithms and displays presented herein are not inherently related to any particular computer or other apparatus. Various general-purpose systems may be used with programs in accordance with the teachings herein, or it may prove convenient to construct more specialized apparatus to perform the required method steps. The required structure for a variety of these systems will appear from the description. In addition, the present disclosure is not described with reference to any particular programming language. It will be appreciated that a variety of programming languages may be used to implement the teachings of the present disclosure as described herein. It is intended that the scope of the disclosure be limited not by this detailed description, but rather by the claims of this application. As will be understood by those familiar with the art, the present disclosure may be embodied in other specific forms without departing from the spirit or essential characteristics thereof. Likewise, the particular naming and division of the modules, routines, features, attributes, methodologies and other aspects are not mandatory or significant, and the mechanisms that implement the present disclosure or its features may have different names, divisions and/or formats. Furthermore, as will be apparent to one of ordinary skill in the relevant art, the modules, routines, features, attributes, methodologies and other aspects of the disclosure can be implemented as software, hardware, firmware or any combination of the three. Also, wherever a component, an example of which is a module, of the present disclosure is implemented as software, the component can be implemented as a standalone program, as part of a larger program, as a plurality of separate programs, as a statically or dynamically linked library, as a kernel loadable module, as a device driver, and/or in every and any other way. Additionally, the disclosure is in no way limited to implementation in any specific programming language, or for any specific operating system or environment. Accordingly, the disclosure is intended to be illustrative, but not limiting, of the scope of the subject matter set forth in the following claims. | 155,365 |
11862042 | DETAILED DESCRIPTION OF THE INVENTION While systems and methods are described herein in detail in relation to one or more embodiments, it is to be understood that this disclosure is illustrative and exemplary, and is made merely for the purposes of providing a written and enabling disclosure. The detailed disclosure herein is not intended, nor is it to be construed, to limit the scope of patent protection afforded in any claim of a patent issuing herefrom, which scope is to be defined by the claims and the equivalents thereof. The following detailed description refers to the accompanying drawings, which are incorporated herein. Wherever possible, the same reference numbers are used in the drawings and the following description to refer to the same or similar elements. While specific embodiments of the disclosure may be described in detail, modifications, adaptations, and other implementations are foreseeable by the inventors. For example, substitutions, additions, or modifications may be made to the elements illustrated in the drawings, and the systems and methods described herein may be modified by substituting, reordering, or adding stages to the disclosed methods. Accordingly, the following detailed description does not limit the disclosure. Instead, the proper scope of the disclosure is defined by the appended claims. The present disclosure may contain headers. It should be understood that these headers are used as references and are not to be construed as limiting upon the subject matter disclosed under the header. The present disclosure includes many aspects and features. Moreover, while many aspects and features relate to, and are described in the context of, augmented and/or virtual reality, embodiments of the present disclosure are not limited to use only in this context. Location, Prediction, Presentation The inventors discovered that augmented reality systems are not capable of locking geospatially located augmented reality content in a position within an environment that is absent real objects or has only limited objects. Imagine that you are flying a plane at 10,000 feet above the ground. The pilot's view is wonderful, but it may lack any real objects that are geolocated with any precision. The pilot may see clouds, the sun, and other planes temporarily, but the pilot does not see objects that are generally used to anchor content, such as walls, outdoor geolocated buildings, mapped roads, etc. The inventors further discovered that in such environments the systems, in embodiments, required precision location of the user, precision identification of where the user is looking, and tracking of these attributes in real time such that the geolocated content can be more precisely fixed in position. Add to this problem, as the inventors discovered, that when presenting augmented reality content to a fast-moving vehicle in such an environment the issues get even more challenging. Systems and methods discovered by the inventors may be used in such environments or even in environments where there are real objects that could be used for anchoring of virtual content. Systems and methods in accordance with the principles of the present inventions may relate to a situation referred to as 'within visual range' of a vehicle. Training within visual range is generally training based on up to approximately 10 miles from an aircraft because that is approximately how far a pilot can see on a clear day.
The training may involve presenting visual information in the form of augmented reality content to the pilot where the augmented reality content represents a training asset within the pilot's visual range. Embodiments of the present invention may provide systems and methods for training of a pilot in a real aircraft while flying and performing maneuvers. Such a system may include an aircraft sensor system affixed to the aircraft adapted to provide a location of the aircraft, including an altitude of the aircraft, speed of the aircraft, directional attitude of the aircraft, etc. The system may also include a head mounted display (HMD) sensor system (e.g. a helmet position sensor system) adapted to determine a location of the HMD within a cockpit of the aircraft and a viewing direction of a pilot wearing the helmet. The HMD may have a see-through computer display through which the pilot sees an environment outside of the aircraft with computer content overlaying the environment to create an augmented reality view of the environment for the pilot. The system may include a computer content presentation system adapted to present computer content to the see-through computer display at a virtual marker, generated by the computer content presentation system, representing a geospatial position of a training asset moving within a visual range of the pilot, such that the pilot sees the computer content from a perspective consistent with the aircraft's position, altitude, attitude, and the pilot's helmet position when the pilot's viewing direction is aligned with the virtual marker. The virtual marker may represent one in a series of geospatial locations that define the movement of the training asset, and one of the series may be used as an anchor for the presentation of the virtual training asset content in a frame at a time representing a then current time. In embodiments, the computer content represents a virtual asset in a training exercise for the pilot. The pilot may use the aircraft controls to navigate the aircraft in response to the virtual asset's location or movement. The computer content presentation system may receive information relating to the pilot's navigation of the aircraft and cause the virtual asset to react to the navigation of the aircraft. The reaction may be selected from a set of possible reactions and/or based on artificial intelligence systems. The virtual training asset may be a virtual aircraft, missile, enemy asset, friendly asset, ground asset, etc. In embodiments, the augmented reality content's virtual marker's geospatial position is not associated with a real object in the environment. The environment may or may not have real objects in it, but the virtual marker may not be associated with a real object. The inventors discovered that augmented reality content is generally locked into a location by using a physical object in the environment as an anchor for the content. For example, generally the content may be associated or 'connected' with a building, wall, street, sign, or other object that is either mapped to a location or not. A system or method according to the principles of the present invention may lock the content to a virtual marker in the air such that a virtual object can be presented as being in the air without being associated with an object in the environment.
The apparent stability of such content, as viewed from an operator of a vehicle, may depend on maintaining an accurate geometric understanding of the relative position of the operator's HMD and the content virtual marker's geospatial location. A main cause of error in the geometric understanding may be maintaining an accurate understanding of the vehicle's position, attitude, speed, vibrations, etc. The geometric understanding between the vehicle and the geospatially located virtual marker may be accurate if the vehicle's location and condition are well understood. In embodiments, the geometric understanding changes quickly because both the vehicle and the virtual marker may be moving through the environment. For example, the vehicle may be a jet fighter aircraft moving at 800 miles per hour and the augmented reality content may represent an antiaircraft missile moving at 1500 miles an hour towards the aircraft. In such a training simulation both the real aircraft and virtual content are moving very fast and the relative geometry between them is changing even faster. A system and method according to the principles of the present invention update the relative geometric understanding describing the relationship between the vehicle and the virtual marker. The system may further include in the relative geometric understanding the vehicle operator's head location and viewing position and/or eye position. To maintain an accurate geometric understanding, a system and method may track information from sensors mounted within the vehicle, including one or more sensors such as GPS, airspeed sensor, vertical airspeed sensor, stall sensor, IMU, G-Force sensor, avionics sensors, compass, altimeter, angle sensor, attitude heading and reference system sensors, angle of attack sensor, roll sensor, pitch sensor, yaw sensor, force sensors, vibration sensors, gyroscopes, engine sensors, tachometer, control surface sensors, etc. Systems and methods according to the principles of the present inventions may include a helmet position sensor system that includes a plurality of transceivers affixed within the aircraft adapted to triangulate the location and viewing direction of the helmet. The plurality of transceivers may operate at an electromagnetic frequency outside the visible range. The helmet may include at least one marker adapted to be recognized by the triangulation system for the identification of the helmet location and helmet viewing direction. For example, the helmet may have several markers on it at known positions, and three or more electromagnetic transceivers may be mounted at known locations in the cockpit of an aircraft, or in an operator's environment in a vehicle. The transceivers each measure, through time of flight measurements, the distance between each transceiver and the marker(s) on the helmet, and then the measurements may be used to triangulate the location and viewing position of the helmet. In embodiments, the helmet may be markerless and the triangulation system may 'image' the helmet to understand its location and position. Systems and methods according to the principles of the present inventions may include a helmet position sensor system that triangulates the helmet position by measuring a plurality of distances from the helmet (or other HMD) to known locations within the aircraft. This may generally be referred to as an inside-out measurement.
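By way of a non-limiting illustrative sketch (not the disclosed implementation; the function name, anchor coordinates, and range values below are hypothetical), the time-of-flight triangulation described above reduces to a small least-squares problem: each measured distance defines a sphere around a transceiver, and subtracting the sphere equations from one another linearizes the system so a marker position can be solved directly:

    import numpy as np

    def trilaterate(p, d):
        """Estimate a 3-D position from distances d[i] to known points p[i].

        Linearizes the sphere equations |x - p_i|^2 = d_i^2 by subtracting
        the first equation from the rest, then solves the resulting linear
        system in a least-squares sense.
        """
        p = np.asarray(p, dtype=float)   # shape (n, 3); n >= 4 for a unique fix
        d = np.asarray(d, dtype=float)
        A = 2.0 * (p[1:] - p[0])
        b = (d[0] ** 2 - d[1:] ** 2) + np.sum(p[1:] ** 2, axis=1) - np.sum(p[0] ** 2)
        x, *_ = np.linalg.lstsq(A, b, rcond=None)
        return x

    # Hypothetical transceiver locations in the cockpit (metres) and
    # ranges obtained from time-of-flight measurements to one marker.
    anchors = [(0.0, 0.0, 0.0), (1.2, 0.0, 0.0), (0.0, 0.9, 0.0), (0.6, 0.45, 0.8)]
    ranges = [0.95, 0.80, 0.70, 0.45]
    print(trilaterate(anchors, ranges))

With four or more non-coplanar transceivers the linearized system determines the position uniquely; three ranges alone leave a mirror ambiguity, which is one reason a helmet may carry several markers at known positions.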
The known locations may include a material with a particular reflection characteristic that is matched with the transceiver system in the helmet. As disclosed herein, the augmented reality content presented to an operator of a vehicle may be presented based on the physical environment that the vehicle is actually in, or it may be based on a different environment, such as an environment of another aircraft that is involved in the simulated training but is geographically remote from the operator. In such a situation, the virtual content presented to the operator may be influenced by the other vehicle's environment. For example, a first aircraft may be flying in a cloudy environment and a second aircraft may be flying in a bright sunny sky. The first aircraft may be presented a virtual environment based on the second aircraft's actual environment. While the pilot of the second aircraft may have to deal with the bright sun at times, the pilot of the first may not. The virtual content presentation system may present the same virtual training asset to both the first and second pilots, but the content may be faded to mimic a difficult-to-see asset due to the sun. The computer content may have a brightness and contrast, and at least one of the brightness and contrast may be determined by the pilot's viewing direction when the content is presented. The brightness or contrast may be reduced when the viewing direction is towards the sun. A system and method according to the principles of the present inventions may involve presenting augmented reality content in an environment without relying on real objects in the environment or in environments without real objects. This may involve receiving a geospatial location, including altitude, of virtual content within an environment to understand where the virtual content is to be represented. It may also involve creating a content anchor point at the geospatial location. The system and method may further involve receiving sensor information from a real aircraft sensor system affixed to a real aircraft to provide a location of the aircraft, including an altitude of the aircraft, speed of the aircraft, and directional attitude of the aircraft, and receiving head position information identifying a viewing position of a pilot within the aircraft. With the virtual content location anchor point understood and the location and conditions of the real aircraft understood, augmented reality content may be presented in a see-through computer display worn by the pilot when the aircraft sensor data, helmet position data, and content anchor point align, indicating the pilot sees the anchor point. A system and method according to the principles of the present inventions may involve two or more real airplanes operating in a common virtual environment where the pilots of the respective airplanes are presented common augmented reality content from their respective perspectives. In embodiments, a computer product, operating on one or more processors, adapted to present augmented reality content to a plurality of aircraft within a common virtual environment may include a data transmission system adapted to receive geospatial location data from the plurality of aircraft, wherein each of the plurality of aircraft is within visual proximity of one another. It may further involve a training simulation system adapted to generate a content anchor at a geospatial location within visual proximity of the plurality of aircraft in an environment.
A content presentation system may be adapted to present computer-generated content representing a training asset moving within the visual proximity of the plurality of aircraft to each of the plurality of aircraft such that a pilot in each respective aircraft sees the computer-generated content at a perspective determined at least in part by the respective aircraft's location with respect to the anchor location. A system and method according to the principles of the present inventions may involve two or more real airplanes operating in a common virtual environment where the pilots of the respective airplanes are presented common augmented reality content from their respective perspectives. In embodiments, a computer product, operating on one or more processors, adapted to present augmented reality content to a plurality of aircraft within a common virtual environment may include a data transmission system adapted to receive geospatial location data from the plurality of aircraft, wherein each of the plurality of aircraft is geographically separated such that they cannot see one another. Even though they cannot see one another, the training exercise and virtual environment may be configured such that they are virtually in close proximity. Each pilot may be able to 'see' the other plane by seeing an augmented reality representation of the other plane. It may further involve a training simulation system adapted to generate a content anchor at a geospatial location within visual proximity of the plurality of aircraft in an environment. A content presentation system may be adapted to present computer-generated content representing a training asset moving within the visual proximity of the plurality of aircraft to each of the plurality of aircraft such that a pilot in each respective aircraft sees the computer-generated content at a perspective determined at least in part by the respective aircraft's location with respect to the anchor location. A system and method according to the principles of the present inventions may involve a simulated training environment with a moving anchor point for virtual content representing a moving augmented reality training asset. In embodiments, a computer product, operating on one or more processors, may be adapted to present augmented reality content to a pilot of an aircraft. A data transmission system may be adapted to receive geospatial location data from the aircraft as it moves through an environment. A training simulation system may be adapted to generate a series of content anchors at geospatial locations within visual proximity of the aircraft, each of the series of content anchors representing a geospatial position of a virtual training asset moving through the environment. A content presentation system may be adapted to present the virtual training asset to the aircraft such that a pilot in the aircraft sees the virtual training asset when it is indicated that the pilot viewing angle is aligned with a content anchor from the series of content anchors that represents a then current location of the virtual training asset. The virtual training asset is shaped in a perspective view consistent with the pilot's viewing angle and the then current location of the virtual training asset. For example, a series of progressively changing geospatial locations may represent a movement of a virtual training asset through a virtual environment over a period of time.
The movement may be prescribed or pre-programmed, and it may represent a sub-second period of time, a period of seconds, a period of minutes, etc. The time period may represent a future period of time to describe how the virtual training asset is going to move in the future. When it becomes time to present the content to the augmented reality system in the aircraft, the content may be located at the one of the series of locations that represents the then current time to properly align the content. In embodiments, the selected location from the series of locations may correspond to a time slightly in the future of the then current time to make an accommodation for latency in presenting the content. A system and method according to the principles of the present inventions may involve a simulated training system where a virtual asset has a geospatial location that is independent of the location of a real aircraft that is involved in the training. A system and method of presenting the simulated training exercise to a pilot in a real aircraft may involve generating a virtual environment that includes an indication of where the real aircraft is located and what its positional attitude is within the aircraft's real environment. It may further involve generating, within the virtual environment, a virtual asset that is within a visual range of the real aircraft's location and presenting the virtual asset to the pilot as augmented reality content that overlays the pilot's real view of the environment outside of the real aircraft, wherein the virtual asset is presented at a geospatial position that is independent of the real aircraft's location. In embodiments, the virtual asset may move in relation to the aircraft's location and maintain the virtual asset's autonomous movement and location with respect to the aircraft's location. While the virtual asset may react to the real aircraft's movements, the virtual asset may maintain its autonomous control. The inventors discovered that predicting the future location(s) of a real vehicle that is moving through a real environment can improve the accuracy of the positioning of virtual content in an augmented reality system. This may be especially important when the real vehicle is moving quickly. A system and method in accordance with the principles of the present inventions may involve receiving a series of progressively changing content geospatial locations representing future movement of a virtual asset within a virtual environment, which may be predetermined and preprogrammed. It may also involve receiving a series of progressively changing real vehicle geospatial locations, each associated with a then current acquisition time, representing movement of a real vehicle in a real environment, wherein the virtual environment geospatially represents the real environment. The system and method may predict, based on the series of vehicle locations and related acquisition times, a future geospatial location, or series of future locations, of the vehicle. Then the augmented reality content may be presented to an operator of the vehicle at a position within a field-of-view of a see-through computer display based on the future geospatial location of the vehicle, or a location from the series of locations. It may further be based on the geospatial location of the virtual content, from the series of progressively changing content geospatial locations, representative of a time substantially the same as a time represented by the future geospatial location.
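As a minimal sketch of the anchor selection just described (the data structure, function name, and the 8 ms latency allowance are illustrative assumptions, not the disclosed algorithm), the presentation system could index the pre-programmed series of anchor locations by the current time plus a small latency offset:

    import bisect

    def current_anchor(anchor_track, now, latency=0.008):
        """Pick the content anchor for the frame being rendered.

        anchor_track: list of (timestamp, (lat, lon, alt)) pairs, sorted
        by time, describing the pre-programmed path of the virtual asset.
        Selects the sample at `now + latency`, i.e. slightly in the
        future, so the content lands where the asset should be when the
        frame is actually displayed.
        """
        times = [t for t, _ in anchor_track]
        i = bisect.bisect_left(times, now + latency)
        i = min(i, len(anchor_track) - 1)
        return anchor_track[i][1]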
In embodiments, the prediction of the future geospatial location of the vehicle may be based at least in part on past geospatial vehicle locations identified by a sensor system affixed to the vehicle that periodically communicates a then current geospatial location, wherein the past geospatial vehicle locations are interpolated to form a past vehicle location trend. The prediction of the future geospatial location of the vehicle may then be further based on an extrapolation based at least in part on the past vehicle trend. The vehicle may be further represented by an attitude within the real environment and the virtual asset is represented by an attitude within the virtual environment, and the presentation of the augmented reality content is further based on the attitude of the vehicle and the attitude of the virtual asset. A system according to the principles of the present disclosure tracks an airplane's geospatial location (e.g. through GPS) as it moves through the air. It also tracks inertial movements of the plane as well as the avionics in the plane, such as pilot controls for thrust, rudder, ailerons, elevator, thrust direction, compass, airspeed indicator, external temperature, g-force meter, etc. With this data, a processor, either onboard or off-plane, can determine an accurate understanding of the plane's current condition, location, attitude, speed, etc. Such processed data can be tracked over time such that a trend analysis can be performed on the data in real time. This real-time trend analysis can further be used to predict where the plane is going to be at a future point in time. For example, the plane's data may be collected every 4 ms and a saved data set may include thousands of points representing the immediate past. The data set can then be used to accurately predict where the plane is going to be in the relative near future (e.g. in the next milliseconds, seconds, minutes). The extrapolated future location prediction based on the past data gets less precise the further into the future the prediction extends. However, the augmented reality content is being presented to a see-through optic at a fast refresh rate such that the position of the content in the optic can be based on the millisecond or second level predictions. As a further example, the refresh rate from a software product that is generating and producing the virtual content rendering (e.g. a gaming engine) may be on the order of 4 ms to 12 ms. This means that the position of the content can be shifted to accommodate a predicted location and pilot viewing direction every 4 ms to 12 ms. Knowing the plane's weight and performance characteristics may also be useful in the calculations. For example, the processor may factor in that an F-22 fighter jet weighs just over 40,000 pounds and can make a 5G turn at 1,000 miles per hour, and understand what the flight path of such a maneuver may look like. Such flight path characteristics would be quite different in an F-16, Harrier, F-35, cargo plane, etc. In embodiments, a system may be equipped with a computer processor to read sensor data from the vehicle (e.g. airplane, ground vehicle, space vehicle, etc.) to locate the vehicle and understand its current conditions (e.g. forces, avionics, environment, attitude, etc.). The processor may store the sensor data and evaluate the sensor data. The type of vehicle and/or its powered movement characteristics may be stored and used in conjunction with the sensor data to further understand the present condition of the vehicle.
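A minimal sketch of the trend-and-extrapolate step, assuming the periodic samples described above are available as timestamped positions (the function name and the quadratic model are illustrative assumptions rather than the disclosed algorithm, which could equally use a filter or a learned model):

    import numpy as np

    def predict_position(times, positions, t_future, order=2):
        """Extrapolate a vehicle position from its recent track.

        times: 1-D array of past sample times (e.g. one sample every 4 ms).
        positions: (n, 3) array of matching geospatial positions.
        Fits a low-order polynomial per axis (order 2 captures roughly
        constant acceleration) and evaluates it at t_future. Accuracy
        degrades the further t_future lies beyond the last sample.
        """
        times = np.asarray(times, dtype=float)
        positions = np.asarray(positions, dtype=float)
        coeffs = [np.polyfit(times, positions[:, k], order) for k in range(3)]
        return np.array([np.polyval(c, t_future) for c in coeffs])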
The current and past sensor data and movement characteristics may be fused and analyzed to understand the past performance of the vehicle, and this trend analysis may be further used to predict a future position of the vehicle. With the very near future position of the vehicle predicted with precision, virtual content can be presented to the see-through optical system used by a user such that it aligns with a geospatial location of geospatially located content. For example, when the system predicts a location of an airplane one second from now it will be a very accurate prediction. With the accurate prediction of the future location and knowing the future geospatial positioning of the content (e.g. longitude, latitude, and altitude), the virtual content can be positioned relative to the position of the airplane at the future time. The relative, or near absolute, positioning of the content can be refreshed at a very fast rate (e.g. 4 ms). This is fast enough to accommodate the rapid repositioning of the virtual content (e.g. another plane approaching from the opposite direction). The inventors further discovered that the head and/or eye position of the operator or passenger of the vehicle needs to be well understood as it relates to the position of the vehicle. For example, with an airplane moving at 1,000 miles an hour and its location and condition well understood (as described herein), it is not enough to determine the relative position of the geospatial content. The content needs to be presented in the see-through optic at a correct position such that the user perceives it as being in the proper geospatial position. In a system where the see-through optic is attached to the vehicle surrounding the user's view of the exterior environment, the relative positioning of the content may require an understanding of the user's eye height, since the optic is not moving relative to the vehicle. In a system where the see-through optic is attached to the user (e.g. a head mounted display ("HMD"), in a helmet, etc.), the position of the user's head will be considered. For example, if the virtual content is on the right side of the vehicle and the user is looking out the left side of the vehicle, the content should not be presented to the see-through optic because the user cannot see the geospatial location anchoring the content. As the user turns her head to view the anchor point, the content will be presented at a location within the optic that correlates with a virtual line connecting her position within the vehicle and the anchor position. In embodiments, the user's head position may be derived using an inside-out system (e.g. where an HMD emits electromagnetic energy to measure distances to objects within a user environment and then determines position through triangulation), an outside-in system (e.g. where electromagnetic energy emitters are set at known locations within the user's environment and distance measurements from the emitters to the HMD are used to triangulate), a mechanical system, an electrical system, a wireless system, a wired system, etc. For example, an outside-in system in a cockpit of a jet fighter may use electromagnetics to triangulate the head position using emitters located at known positions within the cockpit. The helmet or other HMD may have markers or be markerless. Markers on the helmet may be used to identify the user's direction of vision.
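The head-position gating described above (content anchored to the right of the vehicle is not drawn while the user looks left) can be expressed as a simple cone test. The following is an illustrative sketch only, with a hypothetical function name and field-of-view value, not the disclosed implementation:

    import numpy as np

    def anchor_visible(head_pos, view_dir, anchor_pos, fov_deg=40.0):
        """Return True if the anchor falls inside the display field of view.

        head_pos and anchor_pos are 3-D points in the same frame; view_dir
        is the unit vector of the user's current viewing direction.
        Content behind the user, or outside the FOV cone, is simply not drawn.
        """
        to_anchor = np.asarray(anchor_pos, float) - np.asarray(head_pos, float)
        dist = np.linalg.norm(to_anchor)
        if dist == 0.0:
            return False
        cos_angle = float(np.dot(to_anchor / dist, view_dir))
        return cos_angle >= np.cos(np.radians(fov_deg / 2.0))

When the test passes, the content is drawn at the intersection of the display with the line from the user's position to the anchor, which is the 'virtual line' described above.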
With a markerless HMD, the system may be programmed to understand the electromagnetic signature of the HMD such that its viewing position can be derived. A system may also include an eye tracking system to identify the direction of the user's eyes. This can be used in conjunction with the head position data to determine the general direction the user is looking (e.g. through head position tracking) and the specific direction (e.g. through eye position). This may be useful in conjunction with a foveated display, where the resolution of the virtual content is increased in the specific direction and decreased otherwise. The acuity of the human eye is very high within a very narrow angle (e.g. 1 or 2 degrees) and it quickly falls off outside of the narrow angle. This means that content outside of the high acuity region can be decreased in resolution or sharpness because it is going to be perceived as 'peripheral vision', which can save processing power and decrease latency because potentially less data is used to render and present content. In embodiments, an augmented reality system used by an operator of a vehicle may make a precision prediction of the vehicle's future geospatial location, orientation, angular position, attitude, direction, speed (this collection of attributes, subset of attributes, or other attributes describing the vehicle within an environment is generally referred to as the vehicle's condition herein), and acceleration based on the vehicle's past performance of the same factors, or a subset or other set of factors, leading up to the vehicle's current state. Including an understanding of the vehicle's capabilities and abilities throughout a range of motions, speeds, accelerations, etc. can assist in the future prediction. Such an augmented reality system may employ artificial intelligence, machine learning, and the like to make the prediction based on such data collected over time. Such a system may further include an error prediction and include limits on how much error is tolerable given the current situation. For example, the augmented reality system may be able to predict the future position and geometry with great accuracy for three seconds into the future. At a frame rate of 10 ms, that means three hundred frames of virtual content can be 'locked in' as to location and geometry. If the prediction beyond three seconds and less than five seconds, for example, is reasonably reliable, the frames to be generated in that period may be rendered from one perspective (e.g. the geometry may be fixed) but not 'locked in' from another (e.g. the location may be approximate, to be updated when it gets to the three-second prediction point in the data stream). This means you could have three hundred frames locked in and completely available for presentation along with another two hundred frames that are partially rendered in some way. Optional renderings could also be produced if the future prediction system developed more than one alternative path for the vehicle. A method allowing the future rendering of content within a gaming engine could reduce the latency of presenting the content to the see-through optic. The future location/geometric position/condition prediction systems described herein are very useful when used in fast-moving vehicles. A jet aircraft may travel at speeds of 1,300 miles per hour. That is equivalent to 1.9 feet per millisecond. If the content rendering system has a content data output rate of 10 ms, that means there could be 19 feet travelled between frames.
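This arithmetic generalizes directly; the short illustrative calculation below (a hypothetical helper, not part of the disclosure) reproduces the figures used in this description:

    MPH_TO_FT_PER_MS = 5280.0 / 3_600_000.0   # feet per millisecond per mph

    def placement_error_ft(speed_mph, frame_interval_ms):
        """Worst-case distance travelled between content frames, in feet."""
        return speed_mph * MPH_TO_FT_PER_MS * frame_interval_ms

    for mph in (1300, 130, 65):
        print(mph, "mph ->", round(placement_error_ft(mph, 10), 2), "ft per 10 ms frame")
    # 1300 mph -> 19.07 ft; 130 mph -> 1.91 ft; 65 mph -> 0.95 ft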
Nineteen feet of travel between frames could lead to significant misplacement or poor rendering of the geometry, orientation, etc. of the virtual content if a future prediction of the vehicle's location, geometric position, and condition is not used to inform the generation of the content. Even at much slower speeds the error produced without the future prediction may be significant. Cutting the speed down from 1,300 miles per hour to 130 miles per hour could still lead to a near two-foot error between frames in content rendering and placement. Even at a highway speed of 65 miles per hour, a one-foot error could be produced. The future prediction of the vehicle's location and condition may be made to provide processing time before presenting the virtual content. It may further be made such that when the content is ready for presentation the content can be positioned properly within the see-through optic. An augmented reality system and method in accordance with the principles of the present disclosure may include a geospatial location system adapted to identify a current location of a vehicle (e.g. GPS); a plurality of sensors adapted to identify the vehicle's positional geometry within an environment (e.g. inertial measurement unit (IMU), G-Force sensor, compass) at the current location; a plurality of sensors adapted to identify vectors of force being applied to the vehicle (e.g. IMU, G-Force sensor); a data association and storage module (e.g. a computer processor with memory) adapted to associate and store the geospatial location data, positional geometry data, and force vector data with a time of acquisition of each type of data; and a computer processor adapted to analyze the stored data, generate a trend of the vehicle's positions and conditions over a period of time, and extrapolate the trend into a future period of time to produce a future predicted performance, wherein the processor is further adapted (e.g. programmed to execute) to present geospatially located augmented reality content to an operator of the vehicle based on the future predicted performance. The content based on the future predicted performance is estimated to be presented at a time corresponding with the then current time and location. In other words, the future prediction is used to determine the location and condition of the vehicle in the future, and presentation of the content is done using the prediction of location and condition that is timestamped with the then current time or nearest then current time. The system and method may further include a head position tracking system adapted to identify a viewing direction of a user of an augmented reality see-through computer display, wherein the presentation of the geospatially located content is further based on the viewing direction of the user. The presentation of the geospatially located content may also involve positioning the content within a field-of-view of the see-through computer display based on the viewing direction of the user. The system and method may further comprise an eye direction detection system (e.g. a camera system or other sensor system for imaging and tracking the position and movement of the user's eyes), wherein the presentation of the geospatially located content within the field-of-view is further based on a measured eye position, direction, or motion of the user.
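As one illustrative way (an assumption, not the disclosed method) to couple the measured eye direction to the foveated presentation described earlier, a renderer could scale resolution by angular distance from the gaze vector:

    import numpy as np

    def render_scale(gaze_dir, content_dir, fovea_deg=2.0, falloff_deg=30.0):
        """Resolution multiplier for foveated rendering.

        Full resolution inside the roughly 1-2 degree high-acuity region
        around the gaze direction, falling off linearly to a floor
        outside it. Both arguments are unit vectors.
        """
        cos_a = np.clip(np.dot(gaze_dir, content_dir), -1.0, 1.0)
        angle = np.degrees(np.arccos(cos_a))
        if angle <= fovea_deg:
            return 1.0
        t = min((angle - fovea_deg) / (falloff_deg - fovea_deg), 1.0)
        return 1.0 - 0.8 * t   # never drop below 20 % of full resolution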
FIG. 1 is an illustration of an online platform 100 consistent with various embodiments of the present disclosure. By way of non-limiting example, the online platform 100 to allow real pilots in real aircraft using augmented and virtual reality to meet in a virtual piece of airspace may be hosted on a centralized server 102, such as, for example, a cloud computing service. The centralized server 102 may communicate with other network entities, such as, for example, an augmented and virtual reality display device 106, a sensor system 110 of an aircraft (such as an aircraft 200, as shown in FIG. 2), and a database 114 (such as a 3D model database), over a communication network 104, such as, but not limited to, the Internet. Accordingly, in some instances, the augmented and virtual reality display device 106 operated by a pilot (a user 112) may be in communication with the online platform 100. Further, the sensor system 110 of the aircraft 200 may be in communication with the online platform 100. All communication between the augmented and virtual reality display device 106 and the sensor system 110 with the online platform 100 may be carried via radio waves. For example, the Aircraft Communications Addressing and Reporting System (ACARS) may be used for communication between the augmented and virtual reality display device 106 or the sensor system 110, and the online platform 100. Further, the centralized server 102 may include one or more servers; for example, a master server and one or more local servers. The one or more servers may be stationed on one or more of the aircraft, the ground, and a satellite orbiting the earth (such as Satcom and Iridium satellites). Further, as shown in FIG. 2, the aircraft 200 may include a Remote Artificial Intelligence Link (RAIL) 202 for communication with the centralized server 102. Further, the AI-driven processing and the graphics generation may be performed on the centralized server 102. The augmented and virtual reality display device 106 may display content to a pilot flying the aircraft 200. The augmented and virtual reality display device 106 may be one of a head-mounted display (HMD), eyeglasses, a head-up display (HUD), smart contact lenses, a virtual retinal display, an EyeTap, and cockpit glass. In some embodiments, the augmented and virtual reality display device 106 may be integrated with a flight helmet of a pilot. As shown in FIG. 2, an Enhanced Visual Environment (EVE) 204 may be configured to provide high fidelity/wide field of view content to the augmented and virtual reality display device 106. The sensor system 110 of the aircraft 200 may include one or more internal sensors to track and localize the pilot's head within the cockpit of the aircraft 200. Further, the sensor system 110 of the aircraft 200 may include one or more external sensors to track the position and orientation of the aircraft 200. As shown in FIG. 2, an Avionics Integration System (AIS) 206 may be configured to provide accurate six degrees of freedom positioning of the aircraft 200. The six degrees of freedom include longitudinal (forward and backward thrust), vertical (aircraft moves upward and downward), lateral (aircraft moves from side to side), pitch (nose pitches up or down), roll (wings roll up or down), and yaw (nose moves from side to side). Data fusion can be an important feature of systems according to embodiments of this disclosure. A processor may need to read data from several different sources in the process of determining a vehicle's current location and condition. The processor may further send data representing the predictions to a content rendering system.
The processor may further have to receive renderings from the rendering system and then present the rendering to an HMD at the right time to match the then current position and condition of the vehicle. This may be referred to as data fusion. To make the timing of the presentation of content even more complicated, as the inventors further discovered, the data upon which a location/condition prediction might be made may have refresh rates that differ, the content rendering refresh rate may be different again, and any of these rates may be variable. In embodiments, the augmented reality system, through a processor, produces a prediction of the future location and condition of a vehicle over a future time period. The future time period may include discrete data points at discrete time intervals. The intervals may be coordinated with the incoming data refresh rates. The processor may interpolate data between the discrete points in time to provide a higher resolution prediction. This may be useful in situations where the rendering engine has a variable or different refresh rate from the data being used for the vehicle's predicted future position and condition. For example, the data used to predict the location and condition of the vehicle may have a refresh rate of 5 ms and the rendering engine may have a variable refresh rate of between 4 and 12 ms. The processor might then interpolate between the discrete future positions and conditions such that when the content does arrive for presentation it knows the vehicle's then current predicted state in a predictable resolution. In embodiments, a processor may interpolate each data type within its own refresh rate such that several data types with different refresh rates can be merged at common timestamps. The merged data may then be analyzed to generate a trend of the vehicle's locations and conditions. This trend may be analyzed and extrapolated to predict future locations and conditions of the vehicle. Further, as shown in FIG. 2, Coupled Fusion Tracking (CFT) 208 may be employed to combine the data received from the one or more internal sensors and the one or more external sensors to provide a highly usable augmented reality solution in a fast-moving environment. Further, the CFT 208 may integrate both virtual reality and augmented reality to provide robust augmented reality visuals within a dynamic environment. For example, the CFT 208 may allow for drawing an accurate picture of an enemy aircraft in the augmented and virtual reality display device 106 worn by a pilot. The user 112 may access the online platform 100 through any useful user interface (e.g. software application, browser, etc.). The software application may be embodied as, for example, but not limited to, a software user interface, network interface, website, web application, desktop application, augmented reality application, virtual reality application, mobile application compatible with a computing device 1600, etc. Systems and methods described herein may be used to provide a common virtual environment to more than one person in more than one vehicle. This may be referred to generally as a common virtual environment, coordinated virtual environment, or virtual environment. A common virtual environment may or may not be presented in a single geospatial environment. When two jet fighters are practicing maneuvers together in the same airspace (e.g. where the pilots of two planes have the ability to see one another) the common virtual environment may represent the airspace they occupy.
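A minimal sketch of merging data types with different refresh rates at common timestamps, as described above (the function and stream names are hypothetical; a fielded system would fuse vector-valued states and handle dropouts and out-of-order samples):

    import numpy as np

    def fuse_at(timestamps, streams):
        """Resample several sensor streams onto common timestamps.

        streams: dict mapping a name to (times, values) arrays, each with
        its own refresh rate (e.g. GPS every 5 ms, IMU every 2.5 ms);
        times must be increasing. Each stream is linearly interpolated at
        the requested timestamps so the trend analysis sees one merged
        record per instant.
        """
        return {
            name: np.interp(timestamps, np.asarray(t, float), np.asarray(v, float))
            for name, (t, v) in streams.items()
        }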
If, on the other hand, two jet fighters are practicing maneuvers in separate airspaces (e.g. where the two pilots never actually see one another), the common virtual environment may represent one of their airspaces, neither of their airspaces, etc. In each case, the presentation of virtual content in the common virtual environment may involve understanding the geospatial location and condition of each plane such that the common virtual environment can be presented from the correct perspective in each plane. There could be many real vehicles and people within a common virtual environment. There may be a number of planes, ground vehicles, and people participating in a training session, game, or other activity. Each would be seeing the common virtual environment from their own perspective through their own HMD. Each vehicle may have a system as described herein to track the vehicle's location and condition and to predict its future location and condition for the placement of virtual content in its associated see-through optical system. Further, each vehicle and/or HMD may have a head and/or eye tracking system upon which the content position may in part depend. Systems and methods according to the principles of the present invention may involve training a plurality of pilots, each in a separate real aircraft, where the plurality of separate aircraft share a common physical environment. This may be useful in a training situation where two or more planes are flying in close proximity and are being presented with a common enemy asset in augmented reality. This could be a dog fight, missile evasion, target bombing, etc. Such systems and methods may include providing a head mounted see-through computer display (HMD) to each of the plurality of pilots such that each of the plurality of pilots can view a common virtual environment with computer rendered training content. Each of the aircraft may track and report its own location, attitude, speed, or other information to a computer simulation system such that the simulation system can manage the training simulation. The simulation system may position the computer rendered training content at a geospatial location within a visual range of each of the plurality of pilots, or of one of the pilots, and the content may be presented to the HMD of each of the plurality of pilots, wherein the presentation in each individual HMD is dependent on an alignment of each respective HMD and the computer rendered content geospatial location. In embodiments, the computer rendered training content presented to each HMD is rendered with its own unique perspective based on the angle from which each HMD views the geospatial location. In this example, each of the plurality of pilots has the ability to see another of the plurality of aircraft through their HMD, forming an augmented reality training environment comprising a see-through view of the real environment for each pilot augmented by the computer rendered training content presented in the common virtual environment. Each of the plurality of pilots may be in communication with the other pilots such that the pilots can navigate their separate real aircraft in coordination in response to the computer rendered training content.
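For illustration only (a hypothetical function working in a flat local east-north-up frame; a fielded system would work in geodetic coordinates), each aircraft's unique perspective on the shared anchor can be reduced to a bearing, elevation, and range computed from that aircraft's own position:

    import numpy as np

    def bearing_elevation_range(own_pos, anchor_pos):
        """Where a shared content anchor sits relative to one aircraft.

        Positions are (east, north, up) metres in a shared local frame.
        Each aircraft feeds its own result to its renderer, so every
        pilot sees the same asset from their own perspective.
        """
        d = np.asarray(anchor_pos, float) - np.asarray(own_pos, float)
        rng = float(np.linalg.norm(d))
        if rng == 0.0:
            raise ValueError("anchor coincides with the aircraft")
        bearing = float(np.degrees(np.arctan2(d[0], d[1]))) % 360.0   # from north
        elevation = float(np.degrees(np.arcsin(d[2] / rng)))
        return bearing, elevation, rng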
Systems and methods according to the principles of the present invention may involve presenting a plurality of pilots of separate aircraft with a common augmented reality environment where common computer generated content is positioned and each of the plurality of pilots sees the common computer generated content from a perspective based on their respective locations and aircraft attitudes. Each of the pilots may be able to communicate with the other pilots such that they can coordinate navigation maneuvers with respect to the computer generated content. In embodiments, the computer generated content may be a representation of an enemy asset, wherein the enemy asset is programmed to engage with at least one of the separate aircraft. The computer generated content may represent a plurality of independently controlled enemy assets, wherein each of the plurality of independently controlled enemy assets is programmed to engage with at least one of the separate aircraft. This may simulate a coordinated enemy, which may require team navigation and coordination. In embodiments, the presentation of the computer generated content to each of the plurality of pilots may be based on an alignment between each of the plurality of pilots' viewing directions and the computer generated content's geospatial location such that each pilot sees the computer generated content when each pilot's aircraft position, pilot viewing direction, and the content's geospatial location align in an unobstructed line of sight. For example, if a plane is flying level and within visual range of the content's geospatial location, the pilot may see the content if it is in front of the plane and above the plane horizon such that the pilot can see the geospatial location through the cockpit window. If, on the other hand, the content is directly behind the plane and the pilot cannot turn his head to view the geospatial location of the content, then the content may not be presented in the pilot's HMD. FIG. 3 is a block diagram of a system 300 for facilitating provisioning of a virtual experience in accordance with some embodiments. The system 300 may include a communication device 302, a processing device 304, and a storage device 306. The communication device 302 may be configured for receiving at least one first sensor data corresponding to at least one first sensor 310 associated with a first vehicle 308. Further, the at least one first sensor 310 may be communicatively coupled to a first transmitter 312 configured for transmitting the at least one first sensor data over a first communication channel. In some embodiments, the first vehicle 308 may be a first aircraft. Further, a first user may be a first pilot. Further, the communication device 302 may be configured for receiving at least one second sensor data corresponding to at least one second sensor 320 associated with a second vehicle 318. Further, the at least one second sensor 320 may be communicatively coupled to a second transmitter 322 configured for transmitting the at least one second sensor data over a second communication channel. In some embodiments, the second vehicle 318 may be a second aircraft. Further, a second user may be a second pilot. In some embodiments, the at least one first sensor data may be received from a first On-Board-Diagnostics (OBD) system of the first vehicle 308, and the at least one second sensor data may be received from a second On-Board-Diagnostics (OBD) system of the second vehicle 318.
Further, the communication device 302 may be configured for transmitting at least one first presentation data to at least one first presentation device 314 associated with the first vehicle 308. Further, the at least one first presentation device 314 may include a first receiver 316 configured for receiving the at least one first presentation data over the first communication channel. Further, the at least one first presentation device may be configured for presenting the at least one first presentation data. Further, the communication device 302 may be configured for transmitting at least one second presentation data to at least one second presentation device 324 associated with the second vehicle 318. Further, the at least one second presentation device 324 may include a second receiver 326 configured for receiving the at least one second presentation data over the second communication channel. Further, the at least one second presentation device may be configured for presenting the at least one second presentation data. Further, the processing device 304 may be configured for generating the at least one first presentation data based on the at least one second sensor data. Further, the processing device 304 may be configured for generating the at least one second presentation data based on the at least one first sensor data. Further, the storage device 306 may be configured for storing each of the at least one first presentation data and the at least one second presentation data. In some embodiments, the at least one first sensor 310 may include one or more of a first orientation sensor, a first motion sensor, a first accelerometer, a first location sensor, a first speed sensor, a first vibration sensor, a first temperature sensor, a first light sensor, and a first sound sensor. Further, the at least one second sensor 320 may include one or more of a second orientation sensor, a second motion sensor, a second accelerometer, a second location sensor, a second speed sensor, a second vibration sensor, a second temperature sensor, a second light sensor, and a second sound sensor. In some embodiments, the at least one first sensor 310 may be configured for sensing at least one first physical variable associated with the first vehicle 308. Further, the at least one second sensor 320 may be configured for sensing at least one second physical variable associated with the second vehicle 318. In further embodiments, the at least one first physical variable may include one or more of a first orientation, a first motion, a first acceleration, a first location, a first speed, a first vibration, a first temperature, a first light intensity, and a first sound. Further, the at least one second physical variable may include one or more of a second orientation, a second motion, a second acceleration, a second location, a second speed, a second vibration, a second temperature, a second light intensity, and a second sound. In some embodiments, the at least one first sensor 310 may include a first environmental sensor configured for sensing a first environmental variable associated with the first vehicle 308. Further, the at least one second sensor 320 may include a second environmental sensor configured for sensing a second environmental variable associated with the second vehicle 318. In some embodiments, the at least one first sensor 310 may include a first user sensor configured for sensing a first user variable associated with a first user of the first vehicle 308.
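As a non-limiting sketch of the cross-wise generation above, in which the processing device 304 builds each vehicle's presentation data from the other vehicle's sensor data (the data shapes and field names are illustrative assumptions, not the claimed format):

    from dataclasses import dataclass

    @dataclass
    class SensorData:
        # Subset of the physical variables named above.
        location: tuple      # (lat, lon, alt)
        orientation: tuple   # (pitch, roll, yaw) in degrees
        speed: float         # metres per second

    def make_presentation(own: SensorData, other: SensorData, model_id: str):
        """Presentation data for one vehicle, generated from the *other*
        vehicle's sensor data, mirroring the cross-wise generation above."""
        return {
            "model": model_id,            # 3-D model of the other vehicle
            "anchor": other.location,     # draw it where the other vehicle is
            "attitude": other.orientation,
            "viewer": own.location,       # perspective is the viewer's own
        }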
Further, the at least one second sensor 320 may include a second user sensor configured for sensing a second user variable associated with a second user of the second vehicle 318. In further embodiments, the first user variable may include a first user location and a first user orientation. Further, the second user variable may include a second user location and a second user orientation. Further, the first presentation device may include a first head mount display. Further, the second presentation device may include a second head mount display. In further embodiments, the first head mount display may include a first user location sensor of the at least one first sensor 310 configured for sensing the first user location and a first user orientation sensor of the at least one first sensor 310 configured for sensing the first user orientation. The first head mount display is explained in further detail in conjunction with FIG. 4 below. Further, the second head mount display may include a second user location sensor of the at least one second sensor 320 configured for sensing the second user location, and a second user orientation sensor of the at least one second sensor 320 configured for sensing the second user orientation. In further embodiments, the first vehicle 308 may include a first user location sensor of the at least one first sensor 310 configured for sensing the first user location and a first user orientation sensor of the at least one first sensor 310 configured for sensing the first user orientation. Further, the second vehicle 318 may include a second user location sensor of the at least one second sensor 320 configured for sensing the second user location, and a second user orientation sensor of the at least one second sensor 320 configured for sensing the second user orientation. In further embodiments, the first user orientation sensor may include a first gaze sensor configured for sensing a first eye gaze of the first user. Further, the second user orientation sensor may include a second gaze sensor configured for sensing a second eye gaze of the second user. In further embodiments, the first user location sensor may include a first proximity sensor configured for sensing the first user location in relation to the at least one first presentation device 314. Further, the second user location sensor may include a second proximity sensor configured for sensing the second user location in relation to the at least one second presentation device 324. In some embodiments, the first head mount display may include a first see-through display device. Further, the second head mount display may include a second see-through display device. In some embodiments, the first head mount display may include a first optical marker configured to facilitate determination of one or more of the first user location and the first user orientation. Further, the at least one first sensor 310 may include a first camera configured for capturing a first image of the first optical marker. Further, the at least one first sensor 310 may be communicatively coupled to a first processor associated with the first vehicle 308. Further, the first processor may be configured for determining one or more of the first user location and the first user orientation based on analysis of the first image. Further, the second head mount display may include a second optical marker configured to facilitate determination of one or more of the second user location and the second user orientation.
Further, the at least one second sensor 320 may include a second camera configured for capturing a second image of the second optical marker. Further, the at least one second sensor 320 may be communicatively coupled to a second processor associated with the second vehicle. Further, the second processor may be configured for determining one or more of the second user location and the second user orientation based on analysis of the second image. In some embodiments, the first presentation device may include a first see-through display device disposed in a first windshield of the first vehicle 308. Further, the second presentation device may include a second see-through display device disposed in a second windshield of the second vehicle 318. In some embodiments, the first vehicle 308 may include one or more of a first watercraft, a first land vehicle, a first aircraft and a first amphibious vehicle. Further, the second vehicle 318 may include one or more of a second watercraft, a second land vehicle, a second aircraft and a second amphibious vehicle. In some embodiments, the at least one first presentation data may include one or more of a first visual data, a first audio data and a first haptic data. Further, the at least one second presentation data may include one or more of a second visual data, a second audio data and a second haptic data. In some embodiments, the at least one first presentation device 314 may include at least one environmental variable actuator configured for controlling at least one first environmental variable associated with the first vehicle 308 based on the first presentation data. Further, the at least one second presentation device 324 may include at least one environmental variable actuator configured for controlling at least one second environmental variable associated with the second vehicle 318 based on the second presentation data. In further embodiments, the at least one first environmental variable may include one or more of a first temperature level, a first humidity level, a first pressure level, a first oxygen level, a first ambient light, a first ambient sound, a first vibration level, a first turbulence, a first motion, a first speed, a first orientation and a first acceleration. Further, the at least one second environmental variable may include one or more of a second temperature level, a second humidity level, a second pressure level, a second oxygen level, a second ambient light, a second ambient sound, a second vibration level, a second turbulence, a second motion, a second speed, a second orientation and a second acceleration. In some embodiments, the first vehicle 308 may include each of the at least one first sensor 310 and the at least one first presentation device 314. Further, the second vehicle 318 may include each of the at least one second sensor 320 and the at least one second presentation device 324. In some embodiments, the storage device 306 may be further configured for storing a first three-dimensional model corresponding to the first vehicle 308 and a second three-dimensional model corresponding to the second vehicle 318. Further, the generating of the first presentation data may be based on the second three-dimensional model. Further, the generating of the second presentation data may be based on the first three-dimensional model. In some embodiments, the communication device 302 may be further configured for receiving an administrator command from an administrator device.
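For illustration, the optical-marker approach described above, in which a processor determines a user's location and orientation from a camera image of a marker on the head mount display, might be sketched as follows. This is a minimal sketch using OpenCV's perspective-n-point solver; the marker size, and the assumption that the marker's four corner pixels have already been detected, are illustrative rather than part of the disclosure.

    import numpy as np
    import cv2  # OpenCV, assumed available for this illustration

    MARKER_SIZE_M = 0.05  # assumed 5 cm square marker on the head mount display

    # Marker corner coordinates in the marker's own frame (z = 0 plane).
    OBJECT_POINTS = np.array([
        [-MARKER_SIZE_M / 2,  MARKER_SIZE_M / 2, 0.0],
        [ MARKER_SIZE_M / 2,  MARKER_SIZE_M / 2, 0.0],
        [ MARKER_SIZE_M / 2, -MARKER_SIZE_M / 2, 0.0],
        [-MARKER_SIZE_M / 2, -MARKER_SIZE_M / 2, 0.0],
    ], dtype=np.float64)

    def user_pose_from_marker(corner_pixels, camera_matrix, dist_coeffs):
        """Estimate user location and orientation relative to the camera from
        the pixel coordinates of the marker's four corners (detection of the
        corners themselves is not shown)."""
        ok, rvec, tvec = cv2.solvePnP(
            OBJECT_POINTS, corner_pixels, camera_matrix, dist_coeffs)
        if not ok:
            return None  # marker pose could not be recovered from this image
        rotation, _ = cv2.Rodrigues(rvec)  # 3x3 rotation matrix (orientation)
        return tvec.reshape(3), rotation   # location in metres, orientation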
Further, the generating of one or more of the first presentation data and the second presentation data may be based on the administrator command. In further embodiments, the at least one first presentation data may include at least one first virtual object model corresponding to at least one first virtual object. Further, the at least one second presentation data may include at least one second virtual object model corresponding to at least one second virtual object. Further, the generating of the at least one first virtual object model may be independent of the at least one second sensor data. Further, the generating of the at least one second virtual object model may be independent of the at least one first sensor data. Further, the generating of one or more of the at least one first virtual object model and the at least one second virtual object model may be based on the administrator command. Further, the storage device 306 may be configured for storing the at least one first virtual object model and the at least one second virtual object model. In further embodiments, the administrator command may include a virtual distance parameter. Further, the generating of each of the at least one first presentation data and the at least one second presentation data may be based on the virtual distance parameter. In further embodiments, the at least one first sensor data may include at least one first proximity data corresponding to at least one first external real object in a vicinity of the first vehicle 308. Further, the at least one second sensor data may include at least one second proximity data corresponding to at least one second external real object in a vicinity of the second vehicle 318. Further, the generating of the at least one first presentation data may be based on the at least one second proximity data. Further, the generating of the at least one second presentation data may be based on the at least one first proximity data. In further embodiments, the at least one first external real object may include one or more of a first cloud, a first landscape feature, a first man-made structure and a first natural object. Further, the at least one second external real object may include one or more of a second cloud, a second landscape feature, a second man-made structure and a second natural object. In some embodiments, the at least one first sensor data may include at least one first image data corresponding to at least one first external real object in a vicinity of the first vehicle 308. Further, the at least one second sensor data may include at least one second image data corresponding to at least one second external real object in a vicinity of the second vehicle 318. Further, the generating of the at least one first presentation data may be based on the at least one second image data. Further, the generating of the at least one second presentation data may be based on the at least one first image data. In some embodiments, the communication device 302 may be further configured for transmitting a server authentication data to the first receiver 316. Further, the first receiver 316 may be communicatively coupled to a first processor associated with the first presentation device. Further, the first processor may be communicatively coupled to a first memory device configured to store a first authentication data. Further, the first processor may be configured for performing a first server authentication based on the first authentication data and the server authentication data.
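For illustration, the authentication data exchange described here, and mirrored below for the second presentation device and for the client authentications performed at the server, might be realized with a keyed message authentication code. This is a minimal sketch using Python's standard library; the pre-shared key and the HMAC construction are assumptions, not details disclosed above.

    import hmac
    import hashlib
    import os

    SHARED_KEY = b"pre-provisioned-secret"  # assumed shared during pairing

    def make_auth_data(key: bytes):
        """Produce authentication data: a random challenge plus its HMAC tag."""
        challenge = os.urandom(16)
        tag = hmac.new(key, challenge, hashlib.sha256).digest()
        return challenge, tag

    def verify_auth_data(key: bytes, challenge: bytes, tag: bytes) -> bool:
        """Verify received authentication data against the stored key; the
        same check serves server authentication at a presentation device and
        client authentication at the server."""
        expected = hmac.new(key, challenge, hashlib.sha256).digest()
        return hmac.compare_digest(expected, tag)

In such a sketch, presentation would be enabled only when verify_auth_data returns True, which corresponds to the gating of presentation on the server authentication described next.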
Further, the first processor may be configured for controlling presentation of the at least one first presentation data on the at least one first presentation device 314 based on the first server authentication. Further, the communication device 302 may be configured for transmitting a server authentication data to the second receiver 326. Further, the second receiver 326 may be communicatively coupled to a second processor associated with the second presentation device. Further, the second processor may be communicatively coupled to a second memory device configured to store a second authentication data. Further, the second processor may be configured for performing a second server authentication based on the second authentication data and the server authentication data. Further, the second processor may be configured for controlling presentation of the at least one second presentation data on the at least one second presentation device 324 based on the second server authentication. Further, the communication device 302 may be configured for receiving a first client authentication data from the first transmitter 312. Further, the storage device 306 may be configured for storing the first authentication data. Further, the communication device 302 may be configured for receiving a second client authentication data from the second transmitter 322. Further, the storage device 306 may be configured for storing the second authentication data. Further, the processing device 304 may be further configured for performing a first client authentication based on the first client authentication data and the first authentication data. Further, the generating of the at least one second presentation data may be further based on the first client authentication. Further, the processing device 304 may be configured for performing a second client authentication based on the second client authentication data and the second authentication data. Further, the generating of the at least one first presentation data may be further based on the second client authentication. FIG. 4 is a block diagram of a first head mount display 400 for facilitating provisioning of a virtual experience in accordance with some embodiments. The first head mount display 400 includes a first user location sensor 402 of the at least one first sensor configured for sensing the first user location and a first user orientation sensor 404 of the at least one first sensor configured for sensing the first user orientation. Further, the first head mount display 400 may include a display device 406 to present visuals. The display device may be a first see-through display device. Further, the first head mount display 400 may include a processing device 408 configured to obtain sensor data from the first user location sensor 402 and the first user orientation sensor 404. Further, the processing device 408 may be configured to send visuals to the display device 406. FIG. 5 is a block diagram of an apparatus 500 for facilitating provisioning of a virtual experience in accordance with some embodiments. The apparatus 500 may include at least one first sensor 502 (such as the at least one first sensor 310) configured for sensing at least one first sensor data associated with a first vehicle (such as the first vehicle 308). Further, the apparatus 500 may include a first transmitter 504 (such as the first transmitter 312) configured to be communicatively coupled to the at least one first sensor 502.
Further, the first transmitter 504 may be further configured for transmitting the at least one first sensor data to a communication device (such as the communication device 302) of a system over a first communication channel. Further, the apparatus 500 may include a first receiver 506 (such as the first receiver 316) configured for receiving the at least one first presentation data from the communication device over the first communication channel. Further, the apparatus 500 may include at least one first presentation device 508 (such as the at least one first presentation device 314) configured to be communicatively coupled to the first receiver 506. The at least one first presentation device 508 may be configured for presenting the at least one first presentation data. Further, the communication device may be further configured for receiving at least one second sensor data corresponding to at least one second sensor (such as the at least one second sensor 320) associated with a second vehicle (such as the second vehicle 318). Further, the at least one second sensor may be communicatively coupled to a second transmitter (such as the second transmitter 322) configured for transmitting the at least one second sensor data over a second communication channel. Further, the system may further include a processing device (such as the processing device 304) communicatively coupled to the communication device. Further, the processing device may be configured for generating the at least one first presentation data based on the at least one second sensor data. FIG. 6 is a flowchart of a method 600 of facilitating provisioning of a virtual experience in accordance with some embodiments. At 602, the method 600 may include receiving, using a communication device (such as the communication device 302), at least one first sensor data corresponding to at least one first sensor (such as the at least one first sensor 310) associated with a first vehicle (such as the first vehicle 308). Further, the at least one first sensor may be communicatively coupled to a first transmitter (such as the first transmitter 312) configured for transmitting the at least one first sensor data over a first communication channel. At 604, the method 600 may include receiving, using the communication device, at least one second sensor data corresponding to at least one second sensor (such as the at least one second sensor 320) associated with a second vehicle (such as the second vehicle 318). Further, the at least one second sensor may be communicatively coupled to a second transmitter (such as the second transmitter 322) configured for transmitting the at least one second sensor data over a second communication channel. At 606, the method 600 may include transmitting, using the communication device, at least one first presentation data to at least one first presentation device associated with the first vehicle. Further, the at least one first presentation device may include a first receiver (such as the first receiver 316) configured for receiving the at least one first presentation data over the first communication channel. Further, the at least one first presentation device may be configured for presenting the at least one first presentation data. At 608, the method 600 may include transmitting, using the communication device, at least one second presentation data to at least one second presentation device (such as the at least one second presentation device 324) associated with the second vehicle.
Further, the at least one second presentation device may include a second receiver (such as the second receiver 326) configured for receiving the at least one second presentation data over the second communication channel. Further, the at least one second presentation device may be configured for presenting the at least one second presentation data. At 610, the method 600 may include generating, using a processing device (such as the processing device 304), the at least one first presentation data based on the at least one second sensor data. At 612, the method 600 may include generating, using the processing device, the at least one second presentation data based on the at least one first sensor data. At 614, the method 600 may include storing, using a storage device (such as the storage device 306), each of the at least one first presentation data and the at least one second presentation data. FIG. 7 shows a system 700 for facilitating provisioning of a virtual experience, in accordance with some embodiments. The system 700 may include a communication device 702 configured for receiving at least one first sensor data corresponding to at least one first sensor 710 associated with a first vehicle 708. Further, the at least one first sensor 710 may be communicatively coupled to a first transmitter 712 configured for transmitting the at least one first sensor data over a first communication channel. Further, the communication device 702 may be configured for receiving at least one second sensor data corresponding to at least one second sensor 716 associated with a second vehicle 714. Further, the at least one second sensor 716 may include a second location sensor configured to detect a second location associated with the second vehicle 714. Further, the at least one second sensor 716 may be communicatively coupled to a second transmitter 718 configured for transmitting the at least one second sensor data over a second communication channel. Further, in some embodiments, the at least one second sensor 716 may include a second user sensor configured for sensing a second user variable associated with a second user of the second vehicle 714. Further, the second user variable may include a second user location and a second user orientation. Further, the communication device 702 may be configured for transmitting at least one second presentation data to at least one second presentation device 720 associated with the second vehicle 714. Further, the at least one second presentation data may include at least one second virtual object model corresponding to at least one second virtual object. Further, in some embodiments, the at least one second virtual object may include one or more of a navigational marker (such as a navigational marker 1308, and/or a signboard 1504 as shown in FIG. 15) and an air-corridor (such as a skyway 1306 as shown in FIG. 13). Further, the at least one second presentation device 720 may include a second receiver 722 configured for receiving the at least one second presentation data over the second communication channel. Further, the at least one second presentation device 720 may be configured for presenting the at least one second presentation data. Further, in some embodiments, the at least one second presentation device 720 may include a second head mount display.
Further, the second head mount display may include a second user location sensor of the at least one second sensor 716 configured for sensing the second user location and a second user orientation sensor of the at least one second sensor 716 configured for sensing the second user orientation. Further, the second head mount display may include a second see-through display device. Further, the system 700 may include a processing device 704 configured for generating the at least one second presentation data based on the at least one first sensor data and the at least one second sensor data. Further, the generating of the at least one second virtual object model may be independent of the at least one first sensor data. Further, in some embodiments, the processing device 704 may be configured for determining a second airspace class (with reference to FIG. 14) associated with the second vehicle 714 based on the second location including a second altitude associated with the second vehicle 714. Further, the generating of the at least one second virtual object model may be based on the second airspace class. Further, the system 700 may include a storage device 706 configured for storing the at least one second presentation data. Further, in some embodiments, the storage device 706 may be configured for retrieving the at least one second virtual object model based on the second location associated with the second vehicle 714. Further, in some embodiments, the storage device 706 may be configured for storing a first three-dimensional model corresponding to the first vehicle 708. Further, the generating of the second presentation data may be based on the first three-dimensional model. Further, in some embodiments, the communication device 702 may be configured for receiving an administrator command from an administrator device. Further, the generating of the at least one second virtual object model may be based on the administrator command. Further, in some embodiments, the communication device 702 may be configured for transmitting at least one first presentation data to at least one first presentation device (not shown) associated with the first vehicle 708. Further, the at least one first presentation device may include a first receiver configured for receiving the at least one first presentation data over the first communication channel. Further, the at least one first presentation device may be configured for presenting the at least one first presentation data. Further, in some embodiments, the processing device 704 may be configured for generating the at least one first presentation data based on the at least one second sensor data. Further, in some embodiments, the storage device 706 may be configured for storing the at least one first presentation data. Further, in some embodiments, the storage device 706 may be configured for storing a second three-dimensional model corresponding to the second vehicle 714. Further, the generating of the first presentation data may be based on the second three-dimensional model. Further, in some embodiments, the at least one first presentation data may include at least one first virtual object model corresponding to at least one first virtual object. Further, the generating of the at least one first virtual object model may be independent of the at least one second sensor data. Further, the storage device 706 may be configured for storing the at least one first virtual object model.
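For illustration, the location-based retrieval of virtual object models described above might be sketched as follows. This is a minimal sketch; the in-memory model store, the coordinate fields and the search radius are hypothetical stand-ins for whatever the storage device actually holds.

    from dataclasses import dataclass
    from typing import List, Tuple

    @dataclass
    class VirtualObjectModel:
        kind: str  # e.g. "navigational_marker" or "air_corridor"
        location: Tuple[float, float, float]  # (latitude, longitude, altitude)

    # Hypothetical stand-in for the storage device's model store.
    MODEL_STORE: List[VirtualObjectModel] = [
        VirtualObjectModel("navigational_marker", (36.08, -115.15, 1500.0)),
        VirtualObjectModel("air_corridor", (36.10, -115.20, 2000.0)),
    ]

    def retrieve_models(vehicle_location, radius_deg=0.5):
        """Retrieve the virtual object models near the vehicle's reported
        location, mirroring the retrieval based on the second location."""
        lat, lon, _altitude = vehicle_location
        return [m for m in MODEL_STORE
                if abs(m.location[0] - lat) <= radius_deg
                and abs(m.location[1] - lon) <= radius_deg]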
Further, in some exemplary embodiments, the communication device 702 may be configured for receiving at least one second sensor data corresponding to at least one second sensor 716 associated with a second vehicle 714. Further, the at least one second sensor 716 may be communicatively coupled to a second transmitter 718 configured for transmitting the at least one second sensor data over a second communication channel. Further, the communication device 702 may be configured for receiving at least one first sensor data corresponding to at least one first sensor 710 associated with a first vehicle 708. Further, the at least one first sensor 710 may include a first location sensor configured to detect a first location associated with the first vehicle 708. Further, the at least one first sensor 710 may be communicatively coupled to a first transmitter 712 configured for transmitting the at least one first sensor data over a first communication channel. Further, in some embodiments, the at least one first sensor 710 may include a first user sensor configured for sensing a first user variable associated with a first user of the first vehicle 708. Further, the first user variable may include a first user location and a first user orientation. Further, the communication device 702 may be configured for transmitting at least one first presentation data to at least one first presentation device (not shown) associated with the first vehicle 708. Further, the at least one first presentation data may include at least one first virtual object model corresponding to at least one first virtual object. Further, in some embodiments, the at least one first virtual object may include one or more of a navigational marker (such as a navigational marker 1308, and/or a signboard 1504 as shown in FIG. 15) and an air-corridor (such as a skyway 1306 as shown in FIG. 13). Further, the at least one first presentation device may include a first receiver configured for receiving the at least one first presentation data over the first communication channel. Further, the at least one first presentation device may be configured for presenting the at least one first presentation data. Further, in some embodiments, the at least one first presentation device may include a first head mount display. Further, the first head mount display may include a first user location sensor of the at least one first sensor 710 configured for sensing the first user location and a first user orientation sensor of the at least one first sensor 710 configured for sensing the first user orientation. Further, the first head mount display may include a first see-through display device. Further, the processing device 704 may be configured for generating the at least one first presentation data based on the at least one second sensor data and the at least one first sensor data. Further, the generating of the at least one first virtual object model may be independent of the at least one second sensor data. Further, in some embodiments, the processing device 704 may be configured for determining a first airspace class (with reference to FIG. 14) associated with the first vehicle 708 based on the first location including a first altitude associated with the first vehicle 708. Further, the generating of the at least one first virtual object model may be based on the first airspace class. Further, in some embodiments, the storage device 706 may be configured for storing the at least one first presentation data.
Further, in some embodiments, the storage device 706 may be configured for retrieving the at least one first virtual object model based on the first location associated with the first vehicle 708. Further, in some embodiments, the storage device 706 may be configured for storing a second three-dimensional model corresponding to the second vehicle 714. Further, the generating of the first presentation data may be based on the second three-dimensional model. Further, in some embodiments, the communication device 702 may be configured for receiving an administrator command from an administrator device. Further, the generating of the at least one first virtual object model may be based on the administrator command. Further, in some embodiments, the communication device 702 may be configured for transmitting at least one second presentation data to at least one second presentation device (such as the second presentation device 720) associated with the second vehicle 714. Further, the at least one second presentation device may include a second receiver (such as the second receiver 722) configured for receiving the at least one second presentation data over the second communication channel. Further, the at least one second presentation device may be configured for presenting the at least one second presentation data. Further, in some embodiments, the processing device 704 may be configured for generating the at least one second presentation data based on the at least one first sensor data. Further, in some embodiments, the storage device 706 may be configured for storing the at least one second presentation data. Further, in some embodiments, the storage device 706 may be configured for storing a first three-dimensional model corresponding to the first vehicle 708. Further, the generating of the second presentation data may be based on the first three-dimensional model. Further, in some embodiments, the at least one second presentation data may include at least one second virtual object model corresponding to at least one second virtual object. Further, the generating of the at least one second virtual object model may be independent of the at least one first sensor data. Further, the storage device 706 may be configured for storing the at least one second virtual object model. FIG. 8 is a flowchart of a method 800 of facilitating provisioning of a virtual experience, in accordance with some embodiments. Accordingly, at 802, the method 800 may include receiving, using a communication device (such as the communication device 702), at least one first sensor data corresponding to at least one first sensor (such as the at least one first sensor 710) associated with a first vehicle (such as the first vehicle 708). Further, the at least one first sensor may be communicatively coupled to a first transmitter (such as the first transmitter 712) configured for transmitting the at least one first sensor data over a first communication channel. Further, at 804, the method 800 may include receiving, using the communication device, at least one second sensor data corresponding to at least one second sensor (such as the at least one second sensor 716) associated with a second vehicle (such as the second vehicle 714). Further, the at least one second sensor may include a second location sensor configured to detect a second location associated with the second vehicle.
Further, the at least one second sensor may be communicatively coupled to a second transmitter (such as the second transmitter 718) configured for transmitting the at least one second sensor data over a second communication channel. Further, in some embodiments, the at least one second sensor may include a second user sensor configured for sensing a second user variable associated with a second user of the second vehicle. Further, the second user variable may include a second user location and a second user orientation. Further, at 806, the method 800 may include transmitting, using the communication device, at least one second presentation data to at least one second presentation device (such as the at least one second presentation device 720) associated with the second vehicle. Further, the at least one second presentation data may include at least one second virtual object model corresponding to at least one second virtual object. Further, in some embodiments, the at least one second virtual object may include one or more of a navigational marker (such as a navigational marker 1308, and/or a signboard 1504 as shown in FIG. 15) and an air-corridor (such as a skyway 1306 as shown in FIG. 13). Further, the at least one second presentation device may include a second receiver (such as the second receiver 722) configured for receiving the at least one second presentation data over the second communication channel. Further, the at least one second presentation device may be configured for presenting the at least one second presentation data. Further, in some embodiments, the at least one second presentation device may include a second head mount display. Further, the second head mount display may include a second user location sensor of the at least one second sensor configured for sensing the second user location and a second user orientation sensor of the at least one second sensor configured for sensing the second user orientation. Further, the second head mount display may include a second see-through display device. Further, at 808, the method 800 may include generating, using a processing device (such as the processing device 704), the at least one second presentation data based on the at least one first sensor data and the at least one second sensor data. Further, the generating of the at least one second virtual object model may be independent of the at least one first sensor data. Further, at 810, the method 800 may include storing, using a storage device (such as the storage device 706), the at least one second presentation data. Further, in some embodiments, the method 800 may include retrieving, using the storage device, the at least one second virtual object model based on the second location associated with the second vehicle. Further, in some embodiments, the method 800 may include determining, using the processing device, a second airspace class (with reference to FIG. 14) associated with the second vehicle based on the second location including a second altitude associated with the second vehicle. Further, the generating of the at least one second virtual object model may be based on the second airspace class. Further, in some embodiments, the method 800 may include storing, using the storage device, a first three-dimensional model corresponding to the first vehicle. Further, the generating of the second presentation data may be based on the first three-dimensional model.
Further, in some embodiments, the method 800 may include receiving, using the communication device, an administrator command from an administrator device. Further, the generating of the at least one second virtual object model may be based on the administrator command. Further, in some exemplary embodiments, the method 800 may include receiving, using a communication device (such as the communication device 702), at least one second sensor data corresponding to at least one second sensor (such as the at least one second sensor 716) associated with a second vehicle (such as the second vehicle 714). Further, the at least one second sensor may be communicatively coupled to a second transmitter (such as the second transmitter 718) configured for transmitting the at least one second sensor data over a second communication channel. Further, the method 800 may include receiving, using the communication device, at least one first sensor data corresponding to at least one first sensor (such as the at least one first sensor 710) associated with a first vehicle (such as the first vehicle 708). Further, the at least one first sensor may include a first location sensor configured to detect a first location associated with the first vehicle. Further, the at least one first sensor may be communicatively coupled to a first transmitter (such as the first transmitter 712) configured for transmitting the at least one first sensor data over a first communication channel. Further, in some embodiments, the at least one first sensor may include a first user sensor configured for sensing a first user variable associated with a first user of the first vehicle. Further, the first user variable may include a first user location and a first user orientation. Further, the method 800 may include transmitting, using the communication device, at least one first presentation data to at least one first presentation device associated with the first vehicle 708. Further, the at least one first presentation data may include at least one first virtual object model corresponding to at least one first virtual object. Further, in some embodiments, the at least one first virtual object may include one or more of a navigational marker (such as a navigational marker 1308, and/or a signboard 1504 as shown in FIG. 15) and an air-corridor (such as a skyway 1306 as shown in FIG. 13). Further, the at least one first presentation device may include a first receiver configured for receiving the at least one first presentation data over the first communication channel. Further, the at least one first presentation device may be configured for presenting the at least one first presentation data. Further, in some embodiments, the at least one first presentation device may include a first head mount display. Further, the first head mount display may include a first user location sensor of the at least one first sensor configured for sensing the first user location and a first user orientation sensor of the at least one first sensor configured for sensing the first user orientation. Further, the first head mount display may include a first see-through display device. Further, the method 800 may include generating, using a processing device (such as the processing device 704), the at least one first presentation data based on the at least one second sensor data and the at least one first sensor data. Further, the generating of the at least one first virtual object model may be independent of the at least one second sensor data.
Further, the method 800 may include storing, using a storage device (such as the storage device 706), the at least one first presentation data. Further, in some embodiments, the method 800 may include retrieving, using the storage device, the at least one first virtual object model based on the first location associated with the first vehicle 708. Further, in some embodiments, the method 800 may include determining, using the processing device, a first airspace class (with reference to FIG. 14) associated with the first vehicle 708 based on the first location including a first altitude associated with the first vehicle. Further, the generating of the at least one first virtual object model may be based on the first airspace class. Further, in some embodiments, the method 800 may include storing, using the storage device, a second three-dimensional model corresponding to the second vehicle 714. Further, the generating of the first presentation data may be based on the second three-dimensional model. Further, in some embodiments, the method 800 may include receiving, using the communication device, an administrator command from an administrator device. Further, the generating of the at least one first virtual object model may be based on the administrator command. FIG. 9 is a flowchart of a method 900 to facilitate providing at least one first presentation data. Accordingly, at 902, the method 900 may include transmitting, using the communication device, at least one first presentation data to at least one first presentation device associated with the first vehicle. Further, the at least one first presentation device may include a first receiver configured for receiving the at least one first presentation data over the first communication channel. Further, the at least one first presentation device may be configured for presenting the at least one first presentation data. Further, at 904, the method 900 may include generating, using the processing device, the at least one first presentation data based on the at least one second sensor data. Further, at 906, the method 900 may include storing, using the storage device, the at least one first presentation data. Further, in some embodiments, the method 900 may include storing, using the storage device, a second three-dimensional model corresponding to the second vehicle. Further, the generating of the first presentation data may be based on the second three-dimensional model. Further, in some embodiments, the at least one first presentation data may include at least one first virtual object model corresponding to at least one first virtual object. Further, the generating of the at least one first virtual object model may be independent of the at least one second sensor data. Further, the method 900 may include storing, using the storage device, the at least one first virtual object model. Further, in some exemplary embodiments, the method 900 may facilitate providing at least one second presentation data. Accordingly, the method 900 may include transmitting, using the communication device, at least one second presentation data to at least one second presentation device associated with the second vehicle. Further, the at least one second presentation device may include a second receiver configured for receiving the at least one second presentation data over the second communication channel. Further, the at least one second presentation device may be configured for presenting the at least one second presentation data.
Further, the method 900 may include generating, using the processing device, the at least one second presentation data based on the at least one first sensor data. Further, the method 900 may include storing, using the storage device, the at least one second presentation data. Further, in some embodiments, the method 900 may include storing, using the storage device, a first three-dimensional model corresponding to the first vehicle. Further, the generating of the second presentation data may be based on the first three-dimensional model. Further, in some embodiments, the at least one second presentation data may include at least one second virtual object model corresponding to at least one second virtual object. Further, the generating of the at least one second virtual object model may be independent of the at least one first sensor data. Further, the method 900 may include storing, using the storage device, the at least one second virtual object model. FIG. 10 shows a method 1000 to allow real pilots in real aircraft using augmented and virtual reality to meet in a virtual airspace, in accordance with some embodiments. Accordingly, at 1002, the method 1000 may include creating the virtual airspace in an augmented and virtual reality environment. The virtual airspace may be a three-dimensional space in which one or more aircraft may meet. Further, at 1004, the method 1000 may include a real pilot in a real aircraft joining the virtual airspace via their augmented and virtual reality equipment. The real aircraft may be flying in the real world. Accordingly, an image of the real aircraft may be included in the virtual airspace. Therefore, this provides a live simulation involving real people operating real systems. In some embodiments, the virtual airspace may include virtual aircraft, which may be flown by real people in simulated systems, on the ground. In some embodiments, the virtual airspace may further include constructed aircraft (and/or targets). The constructed aircraft may be generated and controlled using computer graphics and processing systems. Further, at 1006, the method 1000 may include providing augmented and virtual reality content to the real pilot via their augmented and virtual reality equipment. In some embodiments, the method may include providing augmented and virtual reality content to the real people (on the ground) flying virtual aircraft in the virtual airspace. Further, at 1008, the method 1000 may include tracking the real pilot and the real aircraft. This may include tracking the position and orientation of the pilot's head within the cockpit of the aircraft using the one or more internal sensors. Further, this may include tracking the operational state (e.g. location, speed, direction of travel, etc.) of the aircraft in the virtual airspace using the one or more external sensors. Moreover, at 1010, the method 1000 may include continuously updating the augmented and virtual reality content shown to the real pilot flying the real aircraft based on the tracking of the real pilot and the real aircraft. In some embodiments, the augmented and virtual reality content shown to the real pilot flying the real aircraft may be updated based on the operational state (e.g. location, speed, direction of travel, etc.) of the virtual aircraft flown by the real people (on the ground) and the operational state (e.g. location, speed, direction of travel, etc.) of the constructed aircraft.
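For illustration, the tracking and updating of steps 1008 and 1010 might compose the head pose measured by the internal sensors (in the cockpit frame) with the aircraft pose measured by the external sensors (in the world frame). The sketch below is a simplification that considers yaw only; the frame conventions and numeric values are assumptions.

    import numpy as np

    def yaw_matrix(yaw_rad):
        """Rotation about the vertical axis; pitch and roll are omitted here
        for brevity, though a full implementation would include them."""
        c, s = np.cos(yaw_rad), np.sin(yaw_rad)
        return np.array([[c, -s, 0.0],
                         [s,  c, 0.0],
                         [0.0, 0.0, 1.0]])

    def head_pose_in_world(aircraft_pos, aircraft_yaw,
                           head_pos_cockpit, head_yaw):
        """Compose the pilot's head pose (cockpit frame, internal sensors)
        with the aircraft's pose (world frame, external sensors) to obtain
        the world-frame viewpoint from which content is re-rendered."""
        world_pos = (np.asarray(aircraft_pos)
                     + yaw_matrix(aircraft_yaw) @ np.asarray(head_pos_cockpit))
        return world_pos, aircraft_yaw + head_yaw

    # Each tracking update (step 1008) re-runs this composition, and the
    # displayed content (step 1010) is re-projected from the new viewpoint.
    pos, yaw = head_pose_in_world((0.0, 0.0, 3000.0), np.radians(90.0),
                                  (1.2, 0.0, 1.0), np.radians(-10.0))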
In some embodiments, the method 1000 may include continuously updating the augmented and virtual reality content shown to the real pilot (on the ground) flying the virtual aircraft based on the tracking of the real pilot and the real aircraft, the operational state (e.g. location, speed, direction of travel, etc.) of the virtual aircraft flown by the real people (on the ground) and the operational state (e.g. location, speed, direction of travel, etc.) of the constructed aircraft. FIG. 11 shows the augmented and virtual reality content shown to a real pilot (such as pilot 1102) flying a real aircraft (such as aircraft 1104), in accordance with an exemplary embodiment. The augmented and virtual reality content may include one or more live aircraft 1106 (representing real pilots flying real aircraft), one or more virtual aircraft 1108 (representing real people on the ground, flying virtual aircraft) and one or more constructed aircraft 1110 (representing aircraft generated and controlled using computer graphics and processing systems). Accordingly, the pilot 1102 wearing an augmented and virtual reality display device may look out the cockpit window to see enemy aircraft (such as live aircraft 1106, virtual aircraft 1108, and/or constructed aircraft 1110) in extremely high fidelity. Further, the pilot 1102 may then practice offensive/defensive air-to-air maneuvers against the digital enemy while continuing to fly his own aircraft 1104. Systems and methods according to the principles of the present inventions relate to an augmented reality system adapted to provide a virtual common training environment to operators of vehicles that are separated by a distance at which they cannot see one another; however, the common training environment allows the separated operators to see computer generated representations of one another so they can maneuver as if they were within visual range of one another. This may be useful when teaching separated operators to maneuver in a coordinated fashion (e.g. within a close formation of planes) when the operators cannot otherwise see each other. In embodiments, two separate head-mounted see-through optics, a first and a second, each adapted to present digital content viewable by a user while having a transparency that allows the user to see through to the surrounding environment, are provided to the visually separated operators. A training simulation system may be adapted to present digital content to each of the first and second optics, wherein the digital content represents a vehicle operated by the other user. With this arrangement, each operator can ‘see’ the other vehicle as digital content in the see-through display such that they can coordinate maneuvers. The digital representation of the other's vehicle may be geospatially positioned based on the other vehicle's actual geospatial position as represented in the common training environment. For example, the position of the digital representation of the other vehicle may represent the other vehicle's actual geospatial location and condition in its real airspace, such that movements of the second vehicle are duplicated by the representation of the second vehicle, but the geospatial location of the representation in the virtual common training environment may be based on the training exercise, such that the apparent distance from an operator to the representation is set by the exercise rather than by the vehicles' true separation.
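For illustration, mapping each vehicle's real movements into the common training environment while preserving the exercise-defined starting offset might be sketched as follows. The local Cartesian coordinates and the scenario anchor points are assumptions made for the sketch, not details disclosed above.

    import numpy as np

    def displayed_position(remote_actual, remote_anchor, local_anchor):
        """Place the representation of the remote vehicle: its displacement
        from its own scenario anchor is re-applied at an anchor near the
        local vehicle, so real maneuvers move the representation while the
        scripted starting offset (e.g. 50 feet) is preserved."""
        displacement = np.asarray(remote_actual) - np.asarray(remote_anchor)
        return np.asarray(local_anchor) + displacement

    # Example: both aircraft start at their anchors, scripted 50 ft apart.
    remote_anchor = np.array([0.0, 0.0, 0.0])   # remote aircraft's start point
    local_anchor = np.array([50.0, 0.0, 0.0])   # 50 ft from the local aircraft
    # The remote aircraft accelerates away; its representation pulls ahead.
    print(displayed_position([400.0, 0.0, 0.0], remote_anchor, local_anchor))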
Systems and methods according to the principles of the present inventions relate to presenting a coordinated training scenario to two or more vehicles in separate airspaces where the two or more vehicles are not within visual range of one another. In embodiments, a common virtual airspace is provided to the two or more vehicles, wherein the common virtual airspace includes a computer generated training asset that is viewable by an operator of each vehicle as content overlaying a real airspace surrounding each of the respective vehicles. It is presented as augmented reality content. A system may identify a geospatial location for each of the two or more vehicles within the virtual airspace, which may be based on the vehicles' actual geospatial locations within their respective airspaces and represented within the common virtual airspace. The system may position the computer generated training asset at a geospatial location within the virtual airspace within a visual range of the two or more vehicles such that the perspective of the computer generated training asset is separately based on the geospatial location for each of the two or more vehicles. In embodiments, systems and methods may involve presenting a first pilot of a first vehicle, of the two or more vehicles, with computer generated content representing a second vehicle, of the two or more vehicles, within a common virtual environment when the first pilot looks in the direction of the second vehicle's geospatial location as mapped into the common virtual environment. This facilitates visual coordination between the otherwise visually separated vehicles. For example, a real pilot in a real aircraft flying over Nevada may be able to ‘see’ a second plane that is actually flying over Virginia as an augmented reality representation in close proximity to the real aircraft. The relative positioning of the representation of the second aircraft to the real aircraft may be programmed based on the training scenario. The scenario, for example, may begin by geospatially locating the two visually separated aircraft within 50 feet of one another in a common virtual airspace. Each pilot would be able to look and see the other's aircraft represented as augmented reality content at 50 feet away. Then the simulation may track the actual geospatial positions and conditions of both aircraft to move the representations presented to the pilots based on actual movements. If either plane makes a real maneuver that affects the relative position of the two aircraft in the virtual environment, the result will be shown by changing the position of the representation of the other aircraft. If the second aircraft puts on its afterburners and the first does not, the pilot of the first aircraft will see the second aircraft pull away in the virtual airspace as the second aircraft flies faster in its real airspace. In embodiments, the presenting of the computer generated content representing the second vehicle is further based on an unobstructed line of sight between the first pilot and the location of the computer generated content representing the second vehicle in the virtual environment. The apparent relative position between the first vehicle and the computer generated content representing the second vehicle may be based on the actual movements of the first vehicle and the second vehicle.
The computer generated training asset and the computer generated content representing the second vehicle may move separately within the virtual environment; the second vehicle representation's movements may be based on the actual movement of the second vehicle, and the training asset's movements may be based on a computer generated path (e.g. a predetermined path) intended to interact with at least one of the two or more vehicles. In embodiments, the geospatial boundaries of the virtual airspace are set such that each of the two or more vehicles operates within respectively clear airspace. In embodiments, the common virtual airspace represents one of the two or more vehicles' actual airspace. In embodiments, the common virtual airspace represents an airspace in an enemy environment. Systems and methods according to the principles of the present inventions relate to providing a common virtual environment to both air and ground assets. A combat training augmented reality simulation system may be provided to a real aircraft and a real ground vehicle. The simulation may include virtual air and ground assets. In embodiments, at least two separate head-mounted see-through optics, a first and a second, are provided and adapted to present digital content viewable by a user and having a transparency that allows the user to see through to the surrounding environment. An aircraft-mounted tracking system in the real aircraft may be used to track the geo-spatial position and condition of the aircraft and the direction in which a pilot is apparently looking. A ground vehicle tracking system may also be mounted in the real ground vehicle, such that the ground tracking system tracks the geo-spatial position of the real ground vehicle and the direction in which a driver is apparently looking. A training simulation system may be adapted to generate 3D virtual content for presentation on the respective separate optics, wherein the virtual content provides the pilot and the driver with a different perspective view of the same 3D virtual object. FIG. 12 shows two real aircraft (such as aircraft 1202, and aircraft 1204) in a virtual airspace 1206, in accordance with an exemplary embodiment. The two real aircraft (such as aircraft 1202, and aircraft 1204) may be flown by two real pilots (a pilot A and a pilot B). Further, both the pilots may be capable of using the disclosed system (ATARI) to view the augmented and virtual reality content. Further, the pilot A may be able to see the pilot B via their augmented and virtual reality equipment. Further, the pilot A may be able to see one or more virtual aircraft (not shown in FIG. 12) which may be enemy aircraft or friendly aircraft. In some embodiments, the pilot A and the pilot B may be enemies and may engage in combat against each other. In some embodiments, the pilot A and the pilot B may be friendly and may cooperate in combat against enemy aircraft. High-speed communication between the two aircraft may be employed to allow for effective cooperation. In some embodiments, the two aircraft 1202-1204 may not fly together in the real world. As shown in FIG. 12, one aircraft (such as aircraft 1202) may take off in the USA and the other aircraft (such as aircraft 1204) may take off in the UK. Therefore, the two aircraft 1202-1204 fly physically in the air in different geographical locations, but they may share the same virtual airspace (6D airspace) provided by the disclosed system (ATARI). Accordingly, the pilot A may fight against the pilot B in the common virtual airspace 1206.
Therefore, each pilot may see the other pilot's virtual image in their augmented and virtual reality equipment. Further, the pilot A and the pilot B may fight together against enemies. Again, both pilots may see each other's virtual images. However, in this case, they may collaborate, and not fight against each other. FIG. 13 shows an augmented reality view 1300 shown to a real pilot (such as pilot 1302), in accordance with an exemplary embodiment. Further, the augmented reality view 1300 may be generated and displayed over a virtual reality display. For example, the virtual reality display may include a head-mounted display (HMD), eyeglasses, a Head-Up Display (HUD), smart contact lenses, a virtual retinal display, an eye tap, a Primary Flight Display (PFD), a cockpit glass, etc. Further, the augmented reality view 1300 may assist a pilot 1302 in flying a civilian aircraft 1304. As shown in FIG. 13, the augmented reality view 1300 includes a road drawn in the sky (such as a skyway 1306) indicating a path that the civilian aircraft 1304 may take in order to land at an airport. Further, the augmented reality view 1300 may include a navigation marker 1308 indicating to the pilot 1302 that the civilian aircraft 1304 should take a left turn. The navigation marker 1308 may assist the pilot 1302 in navigating towards a landing strip to land the civilian aircraft 1304. Therefore, the augmented reality view 1300 may provide pilots with a view similar to that seen by public transport drivers (e.g. taxi or bus) on the ground. The pilots (such as the pilot 1302) may see roads (such as the skyway 1306) that the pilot 1302 needs to drive on. Further, the pilot 1302, in an instance, may see signs just like a taxi driver who may just look out of a window and see road signs. Further, the augmented reality view 1300 may include (but is not limited to) one or more of skyways (such as the skyway 1306), navigation markers (such as the navigation marker 1308), virtual tunnels, weather information, an air corridor, speed, signboards for precautions, airspace class, one or more parameters shown on a conventional horizontal situation indicator (HSI), etc. The skyways may indicate a path that an aircraft (such as the civilian aircraft 1304) should take. The skyways may appear similar to roads on the ground. The navigation markers may be similar to regulatory road signs used on the roads on the ground. Further, the navigation markers may instruct pilots (such as the pilot 1302) on what they must or should do (or not do) under a given set of circumstances. Further, the navigation markers may be used to reinforce air-traffic laws, regulations or requirements which apply either at all times or at specified times or places upon a flight path. For example, the navigation markers may include one or more of a left curve ahead sign, a right curve ahead sign, a keep left sign, and a keep to right sign. Further, the virtual tunnels may appear similar to tunnels on roads on the ground. The pilot 1302 may be required to fly the aircraft through the virtual tunnel. Further, the weather information may include real-time weather data that affects flying conditions. For example, the weather information may include information related to one or more of wind speed, gust, and direction; variable wind direction; visibility, and variable visibility; temperature; precipitation; and cloud cover. Further, the air corridor may indicate an air route along which the aircraft is allowed to fly, especially when the aircraft is over a foreign country.
Further, the augmented reality view 1300 may include speed information. The speed information may include one or more of a current speed, a ground speed, and a recommended speed. The signboards for precautions may be related to warnings shown to the pilot 1302. The one or more parameters shown on a conventional horizontal situation indicator (HSI) include a NAV warning flag, a lubber line, a compass warning flag, a course select pointer, a TO/FROM indicator, a glideslope deviation scale, a heading select knob, a compass card, a course deviation scale, a course select knob, a course deviation bar (CDI), a symbolic aircraft, dual glideslope pointers, and a heading select bug. Further, in some embodiments, information such as altitude, attitude, airspeed, the rate of climb, heading, autopilot and auto-throttle engagement status, flight director modes and approach status, etc. that may be displayed on a conventional primary flight display may also be displayed in the augmented reality view 1300. Further, in some embodiments, the augmented reality view 1300 may include one or more other vehicles (such as another airplane 1310). Further, the one or more other vehicles, in an instance, may include one or more live vehicles (such as representing real pilots flying real aircraft), one or more virtual vehicles (such as representing real people on the ground, flying virtual aircraft), and one or more constructed vehicles (such as representing aircraft generated and controlled using computer graphics and processing systems). Further, the augmented reality view 1300 may include an airspace class. FIG. 14 is a chart related to the United States airspace system's classification scheme. Specifically, FIG. 14 illustrates various parameters related to one or more classes defined in the United States airspace system's classification scheme. The classification scheme is intended to maximize pilot flexibility within acceptable levels of risk appropriate to the type of operation and traffic density within that class of airspace, in particular to provide separation and active control in areas of dense or high-speed flight operations. The Albert Roper (1919-10-13, The Paris Convention) implementation of International Civil Aviation Organization (ICAO) airspace classes defines classes A through G (with the exception of class F, which is not used in the United States). For instance, a computing device (such as the computing device 1600) may analyze one or more parameters such as altitude, Visual Flight Rules (VFR), Instrument Flight Rules (IFR), VFR cloud clearance, and VFR minimum visibility, etc. to determine an applicable airspace class. Further, the determined airspace class may be displayed on the virtual reality display. Further, the applicable airspace class may be determined using a location tracker such as a GPS and may be displayed as a notification on the virtual reality display. Further, a special use airspace class may be determined. The special use airspace class may include alert areas, warning areas, restricted areas, prohibited airspace, military operation areas, national security areas, controlled firing areas, etc. For instance, if an aircraft (such as the civilian aircraft 1304) enters a prohibited area by mistake, then a notification may be displayed in the augmented reality view 1300. Accordingly, the pilot 1302 may reroute the aircraft towards a permitted airspace.
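For illustration, a grossly simplified version of the airspace class determination described above might look as follows. Real classification depends on the charted lateral and vertical boundaries summarized in FIG. 14, not on altitude alone, so the thresholds and position flags here are assumptions made for the sketch.

    def airspace_class(altitude_ft_msl, within_class_b_surface_area=False,
                       within_class_c_surface_area=False,
                       near_towered_airport=False):
        """Toy classifier: returns a United States airspace class letter from
        altitude plus coarse position flags (class F is not used in the US)."""
        if altitude_ft_msl >= 18000:
            return "A"  # Class A generally begins at 18,000 ft MSL
        if within_class_b_surface_area:
            return "B"
        if within_class_c_surface_area:
            return "C"
        if near_towered_airport:
            return "D"
        # Elsewhere, controlled airspace (E) generally overlies uncontrolled (G).
        return "E" if altitude_ft_msl >= 1200 else "G"

    # Example: the determined class could then be rendered as a notification.
    print(airspace_class(4500, near_towered_airport=True))  # prints "D"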
Further, the augmented reality view 1300 may include one or more live aircraft (representing real pilots flying real aircraft), one or more virtual aircraft (representing real people on the ground, flying virtual aircraft), and one or more constructed aircraft (representing aircraft generated and controlled using computer graphics and processing systems). Further, the augmented reality view 1300 shown to a pilot (such as the pilot 1302) in a first aircraft (such as the civilian aircraft 1304) may be modified based on sensor data received from another aircraft (such as another airplane 1310). The sensor data may include data received from one or more internal sensors to track and localize the pilot's head within the cockpit of the aircraft. Further, the sensor data may include data received from one or more external sensors to track the position and orientation of the aircraft. Further, the data received from the one or more internal sensors and the one or more external sensors may be combined to provide a highly usable augmented reality solution in a fast-moving environment. FIG. 15 shows an augmented reality view 1500 shown to a real pilot while a civilian aircraft 1502 is taxiing at an airport, in accordance with an exemplary embodiment. The augmented reality view 1500 may include one or more navigational markers (such as the navigation marker 1308) and signboards (such as a signboard 1504) that assist a pilot in taxiing the civilian aircraft 1502 at the airport. The navigational markers may indicate the direction of movement. The signboards may indicate the speed limits. The augmented reality view 1500 may help the pilot to taxi the civilian aircraft 1502 towards a parking location after landing. Further, the augmented reality view 1500 may help the pilot to taxi the civilian aircraft 1502 towards a runway for takeoff. Therefore, a ground crew may no longer be required to instruct the pilot while taxiing the civilian aircraft 1502 at the airport. Further, the augmented reality view 1500 may include one or more live aircraft (such as a live aircraft 1506) at the airport (representing real pilots in real aircraft), one or more virtual aircraft at the airport (representing real people on the ground, controlling a virtual aircraft), and one or more constructed aircraft at the airport (representing aircraft generated and controlled using computer graphics and processing systems). Further, the augmented reality view 1500 shown to a pilot in a first aircraft may be modified based on sensor data received from another aircraft. The sensor data may include data received from one or more internal sensors to track and localize the pilot's head within the cockpit of the aircraft. Further, the sensor data may include data received from one or more external sensors to track the position and orientation of the aircraft. Further, the data received from the one or more internal sensors and the one or more external sensors may be combined to provide a highly usable augmented reality solution in a fast-moving environment. With reference to FIG. 16, a system consistent with an embodiment of the disclosure may include a computing device or cloud service, such as computing device 1600. In a basic configuration, computing device 1600 may include at least one processing unit 1602 and a system memory 1604. Depending on the configuration and type of computing device, system memory 1604 may comprise, but is not limited to, volatile memory (e.g. random-access memory (RAM)), non-volatile memory (e.g. read-only memory (ROM)), flash memory, or any combination thereof.
System memory 1604 may include operating system 1605 and one or more programming modules 1606, and may include program data 1607. Operating system 1605, for example, may be suitable for controlling computing device 1600's operation. In one embodiment, programming modules 1606 may include a virtualization module, an image-processing module, a machine learning module, and/or a tracking module. Furthermore, embodiments of the disclosure may be practiced in conjunction with a graphics library, other operating systems, or any other application program, and are not limited to any particular application or system. This basic configuration is illustrated in FIG. 16 by those components within a dashed line 1608. Computing device 1600 may have additional features or functionality. For example, computing device 1600 may also include additional data storage devices (removable and/or non-removable) such as, for example, magnetic disks, optical disks, or tape. Such additional storage is illustrated in FIG. 16 by a removable storage 1609 and a non-removable storage 1610. Computer storage media may include volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information, such as computer-readable instructions, data structures, program modules, or other data. System memory 1604, removable storage 1609, and non-removable storage 1610 are all examples of computer storage media (i.e., memory storage). Computer storage media may include, but is not limited to, RAM, ROM, electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store information and which can be accessed by computing device 1600. Any such computer storage media may be part of device 1600. Computing device 1600 may also have input device(s) 1612 such as a keyboard, a mouse, a pen, a sound input device, a touch input device, a location sensor, a camera, a biometric sensor, etc. Output device(s) 1614 such as a display, speakers, a printer, etc. may also be included. The aforementioned devices are examples, and others may be used. Computing device 1600 may also contain a communication connection 1616 that may allow device 1600 to communicate with other computing devices 1618, such as over a network in a distributed computing environment, for example, an intranet or the Internet. Communication connection 1616 is one example of communication media. Communication media may typically be embodied by computer readable instructions, data structures, program modules, or other data in a modulated data signal, such as a carrier wave or other transport mechanism, and includes any information delivery media. The term "modulated data signal" may describe a signal that has one or more characteristics set or changed in such a manner as to encode information in the signal. By way of example, and not limitation, communication media may include wired media such as a wired network or direct-wired connection, and wireless media such as acoustic, radio frequency (RF), infrared, and other wireless media. The term computer readable media as used herein may include both storage media and communication media. As stated above, a number of program modules and data files may be stored in system memory 1604, including operating system 1605.
While executing on processing unit 1602, programming modules 1606 (e.g., application 1620 such as a media player) may perform processes including, for example, one or more stages of the methods, algorithms, systems, applications, servers, and databases described above. The aforementioned process is an example, and processing unit 1602 may perform other processes. Other programming modules that may be used in accordance with embodiments of the present disclosure may include sound encoding/decoding applications, machine learning applications, acoustic classifiers, etc. There are a number of ways to track aircraft and other vehicles. A vehicle may be watched moving within one's own local environment or, with the aid of magnification, in a longer-range environment. Objects may be tracked with radar when they are within visual sight or beyond visual sight. Radar systems may be local (e.g. within one's own vehicle) or remote (e.g. within another vehicle, separately on the ground, in the air, in space, etc.). Satellite imagery and other satellite sensor systems can track things. And other systems can be adapted to track vehicles, people, and objects that are in a person's near environment or far from the person. Military training exercises typically use a number of tracking technologies. In a situation where there are a number of real vehicles acting as enemies, each vehicle may be equipped with tracking technologies to track the opponent. As two vehicles (e.g. fighter jets) come within visual range of one another, the pilots will also likely track the opposition visually. This may be a practice dog fighting scenario. Each pilot would be able to track the opponent through all available tracking systems, including her own vision. The inventors discovered that this places a significant burden on training exercises. There is a real risk of loss of life and loss of assets. There is a massive expense in operating assets in these training scenarios. The scenarios are limited by the skills of the individual pilots and other vehicle operators and are restricted by the exercise rules. The rules may limit the participants from performing extreme maneuvers, for example, to minimize the risk to life and assets. To incorporate more extreme conditions, preserve life and assets, save money, and train many more vehicle operators, ground-based simulators are used for visual contact situations. Some ground-based simulators are very complex and can provide a near-real feeling of being in a vehicle in certain situations, but they are limited because they are not in real vehicles, and a hostile cockpit environment, for example, is very difficult to simulate accurately. To simulate a turn at 5 Gs, a ground-based simulator may move the operator and place pressure on the operator, but it is not like pulling 5 Gs in a real airplane. In a real jet fighter, the situation could go well above 5 Gs, risking the pilot passing out and/or suffering reduced cognitive ability, which greatly impacts the pilot's ability to think and react. Systems and methods according to the present disclosure use vehicle tracking and operator vision direction tracking as described herein to properly manage the presentation of virtual content within a real vehicle to simulate and augment the environment outside of the vehicle. Such systems and methods may further integrate non-visual tracking technologies. For example, a pilot in a fighter jet may have a radar system, whether local radar or other radar, to track assets out of visual range (e.g.
approximately 10 miles away in a clear sky, much less in a cloudy sky). In a training simulation, the radar system may be presenting information not about another real vehicle but, rather, about a virtual asset being controlled by a simulation system. The pilot may use the simulated radar information and maneuver the plane in a training exercise. This provides the pilot with the real environment of the cockpit while reacting in a simulation, which can provide a very good simulation with respect to non-visual tracking of assets. In embodiments, the in-vehicle augmented reality system (e.g. HMD) simulation content is coordinated with non-visual tracking technologies. For example, a plane's radar system may indicate that an enemy plane is approaching at a fast rate and from a certain direction. When the enemy is more than 10 miles away, for example, the only indication the pilot may have of the enemy is the radar information. As the enemy approaches and comes within the pilot's visual range, the augmented reality system and simulation system may coordinate to 'hand off' the non-visual representation of the enemy to an augmented reality view of the enemy. The non-visual portion of the simulation may continue to provide information while the pilot is engaged in the augmented reality view. The transition from non-visual data to visual presentation of content is intended to be smooth and lifelike. For example, the size of the visual representation would be very small at the initial transition, and it would follow the path indicated by the non-visual system. (A simplified sketch of this hand-off logic is presented following the system description below.) The tracking in the non-visual and virtual visual domains may be coordinated throughout the pilot's visual range such that the pilot does not receive any inconsistent input. The non-visual and visual coordination may be generated, monitored, corrected, etc. in real time by a computer system. The computer system may be a central system that controls or influences the virtual environment and content presented to a vehicle operator or passenger. The computer system may be arranged to coordinate the simulated non-visual content that is presented to the operator of a vehicle with visual content that is presented to the operator such that the operator perceives the two information feeds as coordinated. The visual content would be displayed at a geospatial position and condition (e.g. perspective geometry) that is consistent with the simulated non-visual content's apparent location and condition (e.g. perspective geometry, location, speed, etc.). In embodiments, non-visual information concerning another vehicle or object in a vehicle's environment may be representative of a real vehicle or object (e.g. radar tracking and presentation of another airplane or missile more than 10 miles away). Once the real object enters the vehicle operator's visual range, virtual visual content may be presented to the operator in his see-through optical system as augmented reality content. The virtual content may provide the operator with information pertaining to the other vehicle or object and/or cues to guide the operator with respect to the other vehicle or object. A system and method according to the principles of the present invention may be an augmented reality system for the training of vehicle operators involving visual and non-visual information. It may involve a head-mounted see-through optic (e.g. HMD) adapted to present digital content viewable by a user and having a transparency that allows the user to see through to the surrounding environment.
It may also involve a non-visual tracking system adapted to identify and track objects in a surrounding environment that cannot be seen visually. A training simulation system may be adapted to present a virtual training object on a display of the non-visual tracking system and virtual visual content in the see-through optic. Both displays, the HMD and the non-visual display, may represent the location and movement of the same training object, and the presentation may be coordinated such that both displays indicate that the training object is in the same position. A system and method according to the principles of the present invention may involve tracking and coordinating visual information and non-visual information relating to a virtual object in a training simulation. This may involve providing a non-visual object tracking system in a vehicle and providing an augmented reality see-through computer display adapted to present virtual content representing an object to an operator of the vehicle. A training simulation system may generate a geospatial location and path of movement of the virtual object at a geospatial location outside of a visual range of the operator. The geospatial location and path of movement of the virtual object may be displayed on the non-visual object tracking system while the object maintains a distance from the vehicle that is outside of the operator's visual range. The system may present a representation of the virtual object in the operator's see-through computer display when the location of the object enters the operator's visual range. The representation may be presented at a position within a field of view of the see-through computer display that is consistent with the position of the object as presented on the display of the non-visual object tracking system. In embodiments, the non-visual tracking system may be a radar tracking system. In embodiments, the virtual object may be an enemy asset. In embodiments, the step of presenting the virtual object on the display of the non-visual object tracking system may be part of a simulated training exercise where the computer simulation system generates the virtual object and determines the virtual object's path of movement. The system may coordinate a substantially simultaneous presentation of the visual representation of the virtual object and the non-visual representation of the virtual object. In embodiments, the step of coordination involves alignment of the geospatial location and direction of movement so that they are consistent in the see-through computer display and the non-visual display. Systems and methods according to the principles of the present inventions involve replaying live simulated training sessions. A live simulated training session may involve a real vehicle operating in a real environment (as described herein) and an operator's and/or vehicle's reactions to a virtual object presented as if within visual range. The content may be presented as augmented reality content to the operator. In embodiments, such a system and method may involve saving in-flight data from an aircraft during a simulated training exercise, wherein the in-flight data includes geospatial locations of the aircraft, positional attitudes of the aircraft, and head positions of a pilot operating the aircraft.
It may further involve saving simulation data relating to a simulated virtual object presented to the pilot as augmented reality content in-flight, wherein the virtual object was programmed to interact with the aircraft during the simulated training exercise. With the relevant data from the augmented reality simulated training saved, the system may re-present the in-flight data from the aircraft and the simulation data relating to the simulated virtual object as a replay of the simulated training exercise. The replay may be reviewed in real time, slow motion, fast forward, etc. to understand or teach lessons based on the training session. Systems and methods described herein can be used to create a virtual environment presented to an operator or a passenger in a real operating vehicle. As disclosed herein, the virtual environment may be used for a variety of uses (e.g. combat training, operational training, navigation guidance, visual cues for situations occurring during operation). The virtual environment may be generated and/or controlled in part or entirely by an artificial intelligence system, machine learning system, deep learning system, etc. (AI). For example, simulations of situations may have been completed in ground-based simulation systems or live vehicle systems, and the operator's performance in those simulations may affect the control of virtual objects in the in-vehicle virtual environment. For example, if a pilot usually turns the plane, his eyes, or his head in a certain direction in a certain situation, the AI may control or influence the content to try to cause the pilot to turn his head or to change something to cause some response. As has been described herein, systems and methods disclosed herein may be used for training, gaming, cueing an operator or passenger during training or in a non-simulated, live situation, coordinating information amongst various participants (e.g. keeping vehicles in a formation), etc. While many of the embodiments herein describe training simulations, it should be understood that the principles of the present inventions may relate to cueing an operator in a live situation. With the introduction of systems and methods of visual cueing and training, the inventors discovered that the systems and methods may be used to generate and analyze an entirely new data type relating to feedback from in-vehicle situations. For example, a pilot may be in a real jet fighter flying at the speed of sound using an augmented reality system as described herein. The system may present visual content to the pilot in augmented reality causing a response by the pilot, and the response, biomarkers from the pilot or passenger, results of the response, etc. may be recorded, stored, and analyzed by a computer system. The data may be used to train an AI system, form trend analyses for a group of people in similar situations, form personal trend analyses for individual pilots or passengers, identify reaction times for the pilot and the vehicle, form a trend analysis of the vehicle's conditions through real maneuvers, etc. Models, trends, and situational reactions from the analysis can be used to train groups of vehicle operators, train individual vehicle operators, provide trainers feedback, modify virtual environments and content in simulations and/or the control of the content in simulations, modify cues presented to operators in non-simulated, live situations, etc.
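As a concrete but non-authoritative illustration of the radar-to-visual hand off described earlier (the sketch promised above), the following code coordinates one update of a simulation-controlled virtual object across a simulated radar display and a see-through HMD. The ten-mile visual range comes from the example above; the display interfaces (radar_display.plot, hmd_display.render, hmd_display.clear_object) and the linear apparent-size ramp are assumptions made purely for the sketch.

```python
# Sketch of the non-visual-to-visual hand off: the radar presentation is
# continuous, while the HMD begins rendering the object, very small at
# first, once it enters visual range. The display objects are assumed
# interfaces, and the linear size ramp is a simplification of true
# perspective scaling.
from dataclasses import dataclass
import math

VISUAL_RANGE_NM = 10.0  # per the example above; much less in a cloudy sky

def _dist_nm(lat1: float, lon1: float, lat2: float, lon2: float) -> float:
    # Crude equirectangular distance in nautical miles; adequate here.
    dlat = (lat2 - lat1) * 60.0
    dlon = (lon2 - lon1) * 60.0 * math.cos(math.radians((lat1 + lat2) / 2))
    return math.hypot(dlat, dlon)

@dataclass
class VirtualObject:
    lat: float
    lon: float
    alt_ft: float
    heading_deg: float
    speed_kt: float

    def advance(self, dt_s: float) -> None:
        """The simulation system moves the object along its programmed path."""
        d_nm = self.speed_kt * dt_s / 3600.0
        self.lat += (d_nm / 60.0) * math.cos(math.radians(self.heading_deg))
        self.lon += (d_nm / 60.0) * math.sin(math.radians(self.heading_deg))

def update_displays(own_lat: float, own_lon: float, obj: VirtualObject,
                    radar_display, hmd_display) -> None:
    """Coordinate one frame so both displays agree on the object's position."""
    rng_nm = _dist_nm(own_lat, own_lon, obj.lat, obj.lon)
    # The non-visual (radar) presentation continues regardless of range.
    radar_display.plot(obj.lat, obj.lon, obj.alt_ft)
    if rng_nm <= VISUAL_RANGE_NM:
        # Hand off: render in the HMD at the same geospatial position the
        # radar reports; very small at the initial transition, growing as
        # the object closes.
        scale = max(0.01, (VISUAL_RANGE_NM - rng_nm) / VISUAL_RANGE_NM)
        hmd_display.render(obj.lat, obj.lon, obj.alt_ft, scale=scale)
    else:
        hmd_display.clear_object()
```

Because both displays are driven from the same object state on every frame, the pilot receives no inconsistent input at the transition, which is the coordination property described above.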
Generally, consistent with embodiments of the disclosure, program modules may include routines, programs, components, data structures, and other types of structures that may perform particular tasks or that may implement particular abstract data types. Moreover, embodiments of the disclosure may be practiced with other computer system configurations, including hand-held devices, general purpose graphics processor-based systems, multiprocessor systems, microprocessor-based or programmable consumer electronics, application specific integrated circuit-based electronics, minicomputers, mainframe computers, and the like. Embodiments of the disclosure may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote memory storage devices. Furthermore, embodiments of the disclosure may be practiced in an electrical circuit comprising discrete electronic elements, packaged or integrated electronic chips containing logic gates, a circuit utilizing a microprocessor, or on a single chip containing electronic elements or microprocessors. Embodiments of the disclosure may also be practiced using other technologies capable of performing logical operations such as, for example, AND, OR, and NOT, including but not limited to mechanical, optical, fluidic, and quantum technologies. In addition, embodiments of the disclosure may be practiced within a general-purpose computer or in any other circuits or systems. Embodiments of the disclosure, for example, may be implemented as a computer process (method), a computing system, or as an article of manufacture, such as a computer program product or computer readable media. The computer program product may be a computer storage media readable by a computer system and encoding a computer program of instructions for executing a computer process. The computer program product may also be a propagated signal on a carrier readable by a computing system and encoding a computer program of instructions for executing a computer process. Accordingly, the present disclosure may be embodied in hardware and/or in software (including firmware, resident software, micro-code, etc.). In other words, embodiments of the present disclosure may take the form of a computer program product on a computer-usable or computer-readable storage medium having computer-usable or computer-readable program code embodied in the medium for use by or in connection with an instruction execution system. A computer-usable or computer-readable medium may be any medium that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device. The computer-usable or computer-readable medium may be, for example but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, device, or propagation medium. As more specific examples (a non-exhaustive list), the computer-readable medium may include the following: an electrical connection having one or more wires, a portable computer diskette, a random-access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), an optical fiber, and a portable compact disc read-only memory (CD-ROM).
Note that the computer-usable or computer-readable medium could even be paper or another suitable medium upon which the program is printed, as the program can be electronically captured via, for instance, optical scanning of the paper or other medium, then compiled, interpreted, or otherwise processed in a suitable manner, if necessary, and then stored in a computer memory. Embodiments of the present disclosure, for example, are described above with reference to block diagrams and/or operational illustrations of methods, systems, and computer program products according to embodiments of the disclosure. The functions/acts noted in the blocks may occur out of the order shown in any flowchart. For example, two blocks shown in succession may in fact be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality/acts involved. While certain embodiments of the disclosure have been described, other embodiments may exist. Furthermore, although embodiments of the present disclosure have been described as being associated with data stored in memory and other storage mediums, data can also be stored on or read from other types of computer-readable media, such as secondary storage devices, like hard disks, solid state storage (e.g., a USB drive), or a CD-ROM, a carrier wave from the Internet, or other forms of RAM or ROM. Further, the stages of the disclosed methods may be modified in any manner, including by reordering stages and/or inserting or deleting stages, without departing from the disclosure. Although the invention has been explained in relation to its preferred embodiment, it is to be understood that many other possible modifications and variations can be made without departing from the spirit and scope of the invention.
11862043

DETAILED DESCRIPTION OF THE EMBODIMENTS

The present invention will be described below with reference to the drawings and embodiments in detail. The respective examples are provided by way of explanation of the present invention without limiting the present invention. Indeed, it will be apparent to a person skilled in the art that modifications and variations can be made in the present invention without departing from the scope or spirit of the invention. For example, features illustrated or described as being part of one embodiment may be used in another embodiment to yield yet another embodiment. Therefore, it is expected that the present invention includes such modifications and variations within the scope of the accompanying claims and their equivalents. In the description of the present invention, orientation or position relationships indicated by terms such as "longitudinal", "lateral", "up", "down", "front", "rear", "left", "right", "vertical", "horizontal", "top", and "bottom" are based on the orientation or position relationships shown in the accompanying drawings, which are only used to facilitate description of the present invention rather than requiring that the present invention must be constructed and operated in a specific orientation, and therefore cannot be construed as a limitation on the present invention. The terms "connecting", "connected", and "provided" used in the present invention should be understood broadly; for example, connections may be fixed or detachable; may be direct or indirect via intervening components; may be wired or radio connections; and may also be wireless communication signal connections. A person of ordinary skill in the art would be able to understand the specific meaning of the described terms according to specific situations. One or more examples of the present invention are illustrated in the accompanying drawings. The detailed description uses numerals and letter markings to refer to features in the drawings. Similar or analogous markings in the drawings and descriptions have been used to refer to similar or analogous parts of the present invention. As used herein, the terms "first", "second", and "third", etc., are used interchangeably to distinguish one member from another, and are not intended to represent the location or importance of the individual members. In order to solve the problems of high assembly difficulty, slow assembly speed, and uneven assembly quality existing in conventional letter- and number-shaped decorative panels, this embodiment provides a letter- and number-shaped decorative panel. Referring to FIGS. 1 and 7, as in the prior art, the decorative panel is composed of a base plate 1 and side plates 2. However, in this embodiment, the base plate 1 and the side plates 2 are specially designed, thereby greatly reducing the assembly difficulty between the base plate 1 and the side plates 2 and increasing the assembly speed. The base plate 1 can be divided into two types according to the shape of the letter or number: one is a hollowed-out base plate and the other is a non-hollowed-out base plate. For the 26 English letters, the corresponding base plates for A, B, D, O, P, Q, and R are hollowed-out base plates, and the corresponding base plates for C, E, F, G, H, I, J, K, L, M, N, S, T, U, V, W, X, Y, and Z are non-hollowed-out base plates.
For the 10 Arabic numerals, the corresponding base plates for 0, 4, 6, 8, and 9 are hollowed-out base plates, and the corresponding base plates for 1, 2, 3, 5, and 7 are non-hollowed-out base plates. The decorative panel will be described below in the two different forms of hollowed-out and non-hollowed-out base plates respectively. Referring to FIG. 1, FIG. 1 is a number 1-shaped decorative panel, and the decorative panel is mainly composed of a base plate 1 and a side plate 2. Referring to FIG. 2, the shape of the base plate 1 is the same as that of the number 1, and it belongs to the non-hollowed-out base plates; a plurality of first slots 11 are arranged at intervals along a periphery of an edge of the shape of the base plate on the base plate 1; and the side plate 2 is perpendicularly provided along the periphery of the edge of the shape of the base plate 1. Referring to FIG. 3, first insertion plates 21 in one-to-one correspondence with the first slots 11 in the base plate 1 are provided at a side of the side plate 2 close to the base plate 1; and the first insertion plates 21 are inserted into the corresponding first slots 11 to connect the side plate 2 to the base plate 1, so as to form a number 1-shaped decorative cover having an opening on one side as shown in FIG. 1. Referring to FIG. 7, FIG. 7 is a number 6-shaped decorative panel, and the decorative panel is mainly composed of a base plate 1 and two side plates 2. Referring to FIG. 8, the shape of the base plate 1 is the same as that of the number 6, and it belongs to the hollowed-out base plates; a plurality of first slots 11 are arranged at intervals along a periphery of an edge of the shape of the base plate on the base plate 1; the edges include an outer edge and an inner edge (an edge at the hollowed-out position); and one of the side plates 2 is perpendicularly provided along the outer edge of the shape of the base plate 1, and the other is perpendicularly provided along the inner edge of the shape of the base plate 1. Referring to FIG. 9, first insertion plates 21 in one-to-one correspondence with the first slots 11 in the base plate 1 are provided at a side of each of the side plates 2 close to the base plate 1; and the first insertion plates 21 are inserted into the corresponding first slots 11 to connect the two side plates 2 to the base plate 1 respectively, so as to form a number 6-shaped decorative cover having an opening on one side as shown in FIG. 7. The side plates 2 can be of an integrated structure. However, in order to facilitate processing and transportation, preferably, as shown in FIGS. 3 and 9, the side plates 2 are designed to be of a split structure and are composed of a plurality of side plate units 22 and a plurality of connection plates 23, wherein the plurality of side plate units 22 are connected head to tail in sequence, and the first insertion plates 21 correspondingly fitting with the first slots 11 in the base plate 1 are provided at one side of each of the side plate units 22 close to the base plate 1. Referring to FIG. 4, second slots 221 are provided at two ends of each of the side plate units 22. Referring to FIGS. 3 and 9, the connection plates 23 are provided at a head-to-tail connection of two adjacent side plate units 22. Referring to FIG. 5, second insertion plates 231 fitting with the second slots 221 are provided at two ends of a side wall of each of the connection plates 23, and the second insertion plates 231 at the two ends of each of the connection plates 23 are respectively inserted in the second slots 221 of the two adjacent side plate units 22 to connect the two adjacent side plate units 22.
In order to improve the stability of the connection between two adjacent side plate units 22 provided by the connection plates 23, as an improvement of the technical solution, referring to FIGS. 3 and 9, two second slots 221 are provided at intervals along a height direction at an end of each of the side plate units 22, and the two adjacent side plate units 22 are connected by means of two connection plates 23. Due to the fact that the above decorative panel mainly achieves the effect of letter or number decoration by mounting a light bulb therein, as shown in FIGS. 2 and 8, decorative light bulb mounting holes 12 are provided at intervals on the base plate 1 to facilitate the mounting and limiting of a decorative light bulb. The decorative panel in the above embodiment may not only be attached to a wall, but may also be hung as decoration, and can also be directly placed on a surface. In order to make the decorative panel more stable when placed, as an improvement of the technical solution, referring to FIGS. 1 and 7, a support plate 3 may be further provided in the decorative panel to assist in the stable placement of the decorative panel. In order to fit the mounting of the support plate 3, as an improvement of the technical solution, referring to FIGS. 2 and 8, third slots 13 are provided at a lower portion of the base plate 1. Referring to FIG. 6, third insertion plates 31 fitting with the third slots 13 are provided at an end of the support plate 3, and the third insertion plates 31 are inserted into the correspondingly fitted third slots 13 to connect the support plate 3 to the base plate 1. The end of the support plate 3 close to the base plate 1 is taller, and the other end is shorter, thereby facilitating placement. In order to further improve stability, as an improvement of the technical solution, two support plates 3 are provided in each decorative panel, two third slots 13 are arranged at intervals in the transverse direction at the lower portion of the base plate 1, the support plates 3 are in one-to-one correspondence with the third slots 13, and the third insertion plates 31 on each of the support plates 3 are inserted into the corresponding third slots 13. As shown in FIG. 6, two fitting third insertion plates 31 are provided on each of the support plates 3. Referring to FIG. 2, two fitting third slots 13 are correspondingly provided on the base plate 1. In actual use, one or more sets of fitting insertion plates and slots can be provided as needed. Preferably, the first slots 11, the second slots 221, and the third slots 13 are long holes, and a groove is provided on an end of each of the first insertion plates 21, the second insertion plates 231, and the third insertion plates 31. After the first insertion plates 21 pass through the first slots 11, the second insertion plates 231 pass through the second slots 221, and the third insertion plates 31 pass through the third slots 13, rubber sleeves are sleeved outside the corresponding insertion plates to limit the position of the insertion plates, thereby improving the stability of the connection. Other embodiments of the present invention will be readily apparent to a person skilled in the art from consideration of the description and practice of the invention disclosed herein. The present application is intended to cover any variations, uses, or adaptive changes of the present invention, which follow the general principles of the present invention and include common general knowledge or customary technical means in the art that are not disclosed in the present invention.
The description and examples are to be considered as exemplary only, and the true scope and spirit of the present invention are indicated by the claims. The above description covers only the preferred embodiments of the present invention and is not intended to limit the present invention. For a person skilled in the art, the present invention may have various modifications and variations. Any modifications, equivalent replacements, improvements, and the like made within the spirit and principle of the present invention shall belong to the protection scope of the present invention.
11862044

DETAILED DESCRIPTION OF PREFERRED EMBODIMENTS

The present invention may be embodied in different forms and should not be construed as limited to the embodiments set forth herein. For example, features disclosed as a part of one embodiment can be used in the context of another embodiment to yield a further embodiment. FIGS. 1-3 illustrate one embodiment of the present invention. FIG. 1 shows a frame 10 comprising a first support leg 12 and a second support leg 13 joined together by a horizontal support member 11. With regard to each embodiment described herein, the length of the horizontal support member is less than the horizontal length (or width) of the flag, as opposed to the height of the flag. The length of the horizontal support member 11 can be less than 90, 80, 70, 60, 50, 40, or 30 percent of the length of the flag. In one or more embodiments, as shown in FIGS. 9 and 10, the length of the horizontal member can be adjustable so that it can be adjusted between 90 percent and 10 percent, or between 80 percent and 20 percent, or between 70 percent and 30 percent, or between 60 percent and 40 percent of the length of the flag or banner 14. FIG. 2 shows the flag or banner display 20, which has been placed in the ground, comprising frame 10 of FIG. 1 supporting flag 14. Flag or banner 14 can include a channel or a sleeve 17 along one or more edges 15 of the flag or banner 14 to allow the flag or banner to be attached to or supported by the frame 10. One or more sleeves 17 may be formed, for example, by folding one or more edges 15 of flag 14 and attaching edge 15 to flag 14 by, for example, stitching 18. Alternatively, a separate piece of material can be folded and sewn along one or more edges 15 or portions of edges of flag 14 to form a sleeve or channel 17. For example, as shown in FIG. 2, a piece of fabric 21 is folded over and sewn along the vertical edge of the flag 14 to provide stitching 18 to form sleeve or channel 17. A sleeve or channel can also be formed along the upper horizontal edge of the flag or banner in the same manner. Additionally, the flag or banner 14 can be folded completely or partially around the second support leg 13 and attached, such as by stitching, partially or completely along the length or perimeter of the flag or banner. The flag or banner can include openings 21, such as eyelets, that can be used to attach the flag or banner 14 to frame 10 as further described herein. Flag 14 extends beyond the end of horizontal member 11 so that a first portion 19 of the flag is unfurled while a second portion 16 of flag 14 hangs downwardly, is furled or partially furled, and is capable of waving freely. Thus, flag display 20 presents the flag or banner 14 in a partially unfurled configuration in the absence of wind and allows the second portion 16 of the flag or banner 14 to unfurl and wave in the presence of wind. The flag or banner 14, the first portion 19 of which remains at least partially unfurled, can be recognized even in the absence of wind. The flag or banner 14 is not completely held by the frame 10, so the second portion 16 of the flag or banner can unfurl and/or wave in the presence of wind, thereby drawing attention to the flag and/or banner 14. As shown in FIG. 3, in the presence of sufficient wind, the portion 16 of the flag 14 that hangs downwardly off of frame 10 can completely unfurl and wave in the wind.
Thus, the present invention can provide a flag or banner display that can restrain or limit the movement of one portion of the flag or banner and allow another portion of the flag or banner to move (e.g., wave, swing, unfurl, soar, or flutter) based on differing wind conditions (e.g. light winds, strong winds, or gusty winds). FIGS. 4-6 illustrate a second embodiment of the present invention. FIG. 4 shows a frame 100 comprising a support leg 112 and a horizontal support member 111 connected to the support leg 112. Optional reinforcement members (not shown) may extend between and connect support leg 112 to horizontal support member 111. A leg anchor 121 can be connected to the support leg 112 to provide additional stability to the frame 100 when the support leg 112 is inserted into the ground. FIG. 5 shows a flag or banner display 120, including frame 100 of FIG. 4, supporting a flag 114. Flag 114 typically includes a channel or sleeve 117 along one or more edges 115 of the flag to allow the flag 114 to be attached to and supported by the frame 100. One or more sleeves 117 may be formed, for example, by folding one or more edges 115 of the flag and stitching the edge to the flag using thread 118. Because flag 114 extends beyond the end of horizontal support member 111, a portion 119 of flag 114 is unfurled even in the absence of wind, while another portion 116 of the flag 114 hangs downwardly in a furled or partially furled manner depending upon the presence or absence of wind. FIG. 6 shows the flag or banner display apparatus described in FIG. 5. As shown in FIG. 6, in the presence of sufficient wind, the portion 116 of flag 114 that hangs downwardly off of frame 100 can completely unfurl and wave in the wind. When the flags or banners are to be displayed, the support legs 12, 13, 112 of frames 10, 100 of flag display apparatuses 20, 120 can be inserted into the ground. Additionally, with regard to the first embodiment shown in FIGS. 1-3, the first and second support legs 12, 13 can be held generally parallel to each other when being inserted into the ground so that the portion of the flag between the legs remains generally taut. Alternatively, where frame 10 is flexible, the first and second support legs 12, 13 can be forced toward each other so that the portion 19 of the flag 14 between the support legs 12, 13 is not taut and is capable of waving in response to movement of wind around the flag. In yet another embodiment, one portion of frames 10, 100 may be flexible and the remaining portion of the frame may be rigid. For example, one or more of the support legs 12, 13, 112 may be rigid and one or more of the horizontal members 11, 111 may be flexible, or vice versa. In a further embodiment of the flag or banner display described with respect to FIGS. 1-3, the horizontal member can be omitted so that the frame 10 comprises the first and second legs 12, 13. When placing the flag or banner display, the first and second legs 12, 13 can be placed into the ground so that the first portion 19 of the flag or banner 14 is taut (i.e., not slack), or the first and second legs 12, 13 can be placed into the ground so that the first portion 19 is slack or partially extended. When the first portion 19 of the flag or banner 14 is slack, the first portion can move in the presence of the wind and can draw attention to the flag or banner 14, together with the second portion 16 of the flag or banner that can move (e.g., wave) in the presence of the wind. FIG. 9 shows frame 400 comprising a horizontal support member comprising two horizontal segments 411, 414 attached to support legs 412, 413, respectively.
The horizontal segments 411, 414 are in an overlapping and slidable relationship with one another so that the distance between support legs 412, 413 can be lengthened or shortened, which controls the degree to which a first portion of a flag or banner (for example, first portion 19 of flag or banner 14 in FIG. 3) is extended or partially extended between the support legs 412, 413. As shown in FIG. 10, the horizontal support member can comprise a telescoping tube comprising a larger-diameter horizontal support tube 511 and a smaller-diameter horizontal support tube 513 slidably disposed at least partially inside the larger horizontal support tube 511 to allow the combined length of support members 511, 513 to be extended or shortened to provide for different visual presentations of the flag display apparatus. Thus, decreasing the combined length of horizontal members 411, 414 and 511, 513 can decrease the width of the unfurled portions of the flags (not shown) attached to and supported by frames 400 and 500, while increasing the combined length of horizontal members 411, 414 and 511, 513 increases the width of the unfurled portions of the flags (not shown) attached to and supported by frames 400 and 500. The support legs and the horizontal support members of the present invention shown in FIGS. 1-11 typically have a circular cross-section but may also have any cross-section, including oval, square, rectangular, and hexagonal. The support legs and horizontal member may be solid (i.e., a rod) or hollow (i.e., a tube), as is required for horizontal support member 511 shown in FIG. 10 to provide for a telescoping horizontal member. The support legs and horizontal members of the various embodiments of the present invention can be made of metal, plastic, or wood. For example, the support legs and horizontal members may be made of galvanized steel or aluminum, or of fiberglass, polyvinyl chloride, polypropylene, polyethylene, polyester, or vinyl. Preferably, the support legs and horizontal members comprising the frames of the present invention are made of the same type of material, but the support legs and horizontal members may be made of different materials and of different thicknesses or configurations, such as noted above. One part of the frame, such as the leg or legs, can be flexible while the remaining part of the frame, such as the horizontal member, can be rigid. For example, referring to FIGS. 1-10, the support leg or legs can be made of galvanized steel with a sufficient diameter to provide a rigid support for the display so that the support leg or legs do not bend or flex when being inserted into soil, while the horizontal member is made of plastic with a sufficient diameter to provide support for the flag or banner but also to provide flexibility when the flag or banner display is being packaged for distribution or is being inserted into the soil. In one embodiment, the support legs and/or horizontal members can be made of fiberglass or other material of sufficiently small diameter so that the support legs and/or horizontal members are capable of being folded, rolled up, or twisted to allow the frames to occupy a smaller area or volume to facilitate shipping and/or storage, yet of sufficient diameter so that the flags are supported when displayed. Thus, in one embodiment, for example, the fiberglass support legs can have a larger diameter or cross-section than the horizontal support member, while in another embodiment, the fiberglass support legs can have a smaller diameter or cross-section than the horizontal support member.
When the flag or banner display shown in FIGS. 1-3 does not include the horizontal support member 11, the flag or banner display can be easily rolled up or folded to facilitate shipping and/or storage. A flag or banner may be attached to a frame for displaying the flag or banner using any suitable means, such as, or in addition to, the sleeves and/or stitching shown in FIGS. 1-6. FIG. 7 shows a flag or banner display 220 with frame 200 supporting flag 224. Flag or banner 224 is attached to frame 200 by three openings, for example, rings or eyelets 230, located along a vertical and/or horizontal edge of flag 224. The openings 230 on the flag or banner can slide over leg 212 and horizontal support member 211 to position the flag 224 on frame 200. FIG. 8 shows yet another embodiment where flag or banner 334 is attached to frame 300 using strips of hook and loop fasteners, such as Velcro® hook and loop fasteners (Velcro is a registered trademark of Velcro Industries B.V.), or loops of material 340, such as loops of fabric, wire, or cable ties. Any combination of rings, hook and loop fastener strips, or loops can be used. The flag or banner 334 can also be attached to the frame with binder clips, which are available from ACCO Brands Corporation. The binder clips may include a rubber or plastic material lining the inside of the binder clip to improve the grip strength of the binder clips. The rubber or plastic material can comprise, for example, pieces of rubber cut from the inner tube of a bicycle tire. For example, a rectangular piece that approximates the inner surface of the binder clip can be cut from the inner tube. The piece of rubber can be folded over the flag and the frame at a predetermined location, and the binder clip can then be placed over the piece of rubber. Alternatively, the piece of rubber can be attached to the inner surface of the binder clip by, for example, an adhesive. The rubber or plastic material can increase the amount of force required to pull the flag out of the binder clip. Preferably, at least three rings, hook and loop fastener strips, loops, binder clips, or combinations thereof can be used to attach flags 224, 334 to frames 200, 300, with at least one opening or eyelet 230, hook and loop fastener strip, or loop 340 being attached to support leg 212, 312 and at least two rings 230 or hook and loop fastener strips or loops 340 being attached to the horizontal support member. FIG. 11 is a front elevation view of an embodiment of the flag or banner display apparatus 620 that includes a frame 610, such as frame 10 shown in FIG. 1, and a flag or banner 614 having first and second portions 619. Frame 610 includes first and second support legs 612, 613 and a horizontal support member 611 connecting the upper ends of the support legs. Leg 612 extends through sleeve 617, which can be formed by folding a side of the flag or banner 614 over leg 612 and providing stitching 618 along the edge of the side of the flag or banner, or by folding a strip of fabric 620 over the side of the flag or banner 614. The flag or banner 614 is also connected by, for example, stitching 621 to the horizontal support member 611 and/or to the second support leg 613 at or adjacent the intersection of the horizontal support member 611 and the second support leg 613. Hook and loop fastener strips, loops, binder clips, or other suitable means can be used instead of stitching 621. The flag or banner displays described herein allow flags or banners to be placed, moved, and/or removed quickly with minimum effort. In the specification and/or figures, examples of embodiments have been disclosed. The present invention is not limited to such exemplary embodiments.
Unless otherwise noted, specific terms have been used in a generic and descriptive sense and not for purposes of limitation. The use of the term "and/or" includes any and all combinations of one or more of the associated listed items.
11862045

DETAILED DESCRIPTION

The disclosure relates generally to a displaceable liner usable in a plethora of applications requiring the attachment of one substrate to another, such as labels, tape, et cetera. These applications and the use of the novel displaceable liner therewith are discussed in turn. The artisan will understand from the disclosure herein that the displaceable liner is usable in other applications that can benefit from selectively adhering one or more surfaces together (e.g., envelopes, building materials, signage, et cetera). As is known, a shipping label is adhered to a package and identifies the sender and recipient of the package. Conventional shipping labels have a front face for the printing of indicia and a back face that is adhered to the package. Traditionally, the back face is covered with adhesive, and a liner is removably secured to the back face via this adhesive. Prior to use, the liner is removed, either by hand or otherwise, to expose the adhesive, and the label (specifically, the face ply thereof) is adhered to the package being delivered using the exposed adhesive. As is known, during transportation or otherwise before the label is adhered to a substrate (e.g., a package or other similar surface), the liner covers the adhesive to ensure that the label does not undesirably stick to objects (e.g., other labels, print heads, and/or other components of apparatus used to make and/or print the label) other than the substrate to which the label is to be adhered. The label liner is traditionally a single-use, disposable object. Considering that there are many millions of shipping labels in use each day, disposal of these liners represents significant waste. It may be desirable to reduce this waste to lower the cost and the carbon footprint of labels in the world, particularly when this waste can be reduced without adversely affecting the quality or capabilities of the labels, or their ease of use. FIG. 1 shows a conventional label 10, as is known in the art. The label 10 has a face ply (or face stock) 12 and a liner 14. The face ply 12 is typically made of paper. The face ply 12 has an upper side 12A and a lower side 12B. At least the top side 12A of the face ply 12 may contain a topcoat 16. The topcoat 16 is a coating configured for the reception of printed indicia and/or which otherwise improves the appearance or functionality of the face ply 12. A layer of adhesive 18 is disposed on the lower side 12B of the face ply 12 to allow the liner 14 to be coupled to the face ply 12. The liner 14 is most commonly made of paper or polyester (PET). The prior art liner 14 may also be referred to herein as a liner ply because the prior art liner 14 comprises a ply (or multiple plies) of paper, polyester (e.g., film), et cetera. The liner ply 14 has a top side 14A and a bottom side 14B. The top side 14A of the liner ply 14 contains a release agent 20 (e.g., silicone), and the bottom side 14B may comprise paper or PET. The liner ply 14 is adhered to the face ply 12 such that the release agent 20 on the top side 14A of the liner ply 14 is adjacent and in contact with the adhesive 18 disposed on the lower side 12B of the face ply 12. The release agent 20 may ensure that the adhesion between the top ply 12 and the bottom ply 14 is releasable; that is, the liner ply 14 may be selectively disassociated from the face ply 12 to expose the adhesive 18 on the lower side 12B of the face ply. In use, the liner ply 14 is releasably adhered to the face ply 12. The label 10 is then passed through the printer to print indicia on the topcoat 16.
During the printing process, the liner ply 14 covers the adhesive 18 and ensures that the adhesive 18 does not interact with the printer. Once the printing is complete, the liner ply 14 is disassociated from the face ply 12 to expose the adhesive 18. The face ply 12 is then adhered to a substrate (e.g., a package, a box, an envelope, or other object or surface to which the label is adhered) via the adhesive 18, and the liner ply 14 is disposed of in a trash can or elsewhere. As noted, disposable liner plies represent significant waste and cost. The prior art indicates that efforts have been made to configure a label without a disposable liner. U.S. Pat. No. 8,109,537 illustrates one example of a label devoid of a disposable liner. The '537 Patent label includes a single ply that comprises adhesive on one side and a release material on the other. This "linerless" configuration allows multiple labels to be removably overlaid on one another, e.g., on a roll. Specifically, the release material of the underlying label ensures that this label does not permanently adhere to the overlaid label because of the adhesive thereof. As the '537 Patent's linerless labels are devoid of a conventional liner, they address at least some of the deficiencies associated with conventional liners. However, the '537 Patent's (and other such) linerless labels present other issues that must be addressed. Because a liner is absent from the label, the adhesive on the labels is exposed to the printer during the printing process. This exposed adhesive may cause the label to undesirably stick to the printer roller and necessitate expensive repairs. To alleviate this concern, linerless labels are typically printed with specialty printing equipment having coated rollers (e.g., direct thermal printers having rollers comprising silicone-embedded rubber) specifically adapted to ensure that the labels do not adhere thereto. Much, if not all, of the cost savings associated with eliminating the liner is lost in purchasing and configuring the specialty printing equipment, which is undesirable. Furthermore, conventional linerless labels typically preclude easy customization of the label shape. For instance, applying a die-cut to a conventional linerless label may be difficult because the face ply lacks the structural support of a liner ply. Another issue with some conventional linerless labels lies in the adhesive used therein. Water-based adhesives (e.g., remoistenable adhesives) typically used in conventional linerless labels require a relatively long time to dry, for instance. Conversely, thermal adhesives (e.g., hot melt adhesives) have a relatively quick drying time, though they are generally incompatible with certain printing methods. Importantly, an issue with conventional linerless labels is that the exposed nature of the adhesive prevents printing on the linerless label using most typical printing methods (e.g., laser printers, thermal transfer printers, or any printer other than direct thermal printers). It is to be understood that, when taking these many considerations into account, conventional linerless labels may have limited applicability. Further concerns may stem from the adhesives used in conventional labels, such as remoistenable adhesives (i.e., adhesives that are "activated" and gain adhesive properties upon sufficient contact with a fluid such as water). Remoistenable adhesives may not only represent a significant portion of the label cost, but also a significant portion of the label size.
Accordingly, the remoistenable adhesive layer in conventional labels contributes significantly to the size of the rolls of shipping labels, the amount of space required to ship and/or store the shipping label rolls, et cetera. The relatively large size of this adhesive layer adds cost to the shipping, storing, and production of the conventional shipping labels. Moreover, remoistenable adhesives may have other undesirable traits. For instance, conventional remoistenable adhesives may be difficult to handle due to their tackiness upon activation, which may foul up equipment, among other things. Further, manually handling a label with an activated remoistenable adhesive layer may cause the adhesive to get on the hands of the handler, which may be undesirable. As another example, conventional remoistenable adhesives typically require a large amount of water to activate and adhere to objects, which may add up over the application of many thousands of labels. Remoistenable adhesives also tend to curl and thus are typically unsuitable for securing objects having larger surface areas. These and other such drawbacks have caused the label industry to move away from remoistenable adhesives.

It may be advantageous to have a label that does not suffer from the disadvantages associated with conventional liners and/or conventional adhesives. It may further be desirable to have a label that does not suffer from the drawbacks of linerless labels, and which, like traditional labels having liners, can be printed using one of a variety of printing methods. The present disclosure may provide for such a label.

The displaceable liner of the present disclosure is first illustrated herein with a simplex (i.e., single ply) label100(FIGS.2,3A-3B, and4A-4B). A method of making and using this simplex label100is then discussed (FIG.5). Use of the displaceable liner is subsequently discussed in connection with another simplex label300printable on both sides (FIGS.6A-6B). Workings of the displaceable liner are then detailed in connection with a duplex label400(FIGS.7A-7C) and a duplex label600(FIGS.8A-8D). Next, use of the displaceable liner with a tape product100′ is illustrated (FIGS.9-10). Thereafter, systems (e.g., system1000) and methods of fully or partially automating the process of using labels having the displaceable liner of the present disclosure are discussed (FIGS.11-17,17A,18-20). The artisan will understand that the label products, tape products, labeling systems and methods, et cetera, disclosed herein are exemplary and are not intended to be independently limiting.

Focus is directed now toFIG.2, which shows an example embodiment100of a simplex label (i.e., a label where indicia is printed on one side of the label, e.g., a single ply label) having a displaceable liner110, according to the teachings of the present disclosure. The illustrated label100has a top side100T and a bottom side100B. As discussed herein, indicia may be printed on the top side100T and the label100may be adhered to a substrate50(e.g., a cardboard box, a piece of paper, a plastic jug, an envelope, a porous surface, a non-porous surface, and/or any other suitable surface) at the bottom side100B. The indicia may be printed media (e.g., text, icons, pictures, graphics, colors, etc.) and may be configured to convey information to a user (e.g., personalized information, generalized information, et cetera). In more detail, the label100may have a face stock102, which may have an upper side102U and a lower side102L.
The face stock102may comprise a solitary ply102, made, for example, of paper. This face stock102may also be referred to herein as a face ply to indicate that the face stock comprises a solitary ply. Alternately, in other embodiments, the face stock102may contain more than one ply. In other embodiments still, the face stock102may comprise a film (e.g., a clear plastic film) or other printable substrate. The face ply102, at its upper side102U, may be provided with a topcoat104. The topcoat104, akin to the topcoat16of the prior art label10, may be configured for the reception of printed (e.g., black and/or colored) indicia (e.g., content configured to be consumed by consumers). The topcoat104may be, e.g., a direct thermal or other printable coating. In embodiments, the face ply102may be inherently printable and not require a separate printable coating to be disposed thereon.

The label100may have a hydrophilic layer106located on a face ply lower side102L. The hydrophilic layer106may have hydrophilic or semi-hydrophilic properties (e.g., a substantial affinity for liquid absorption). The hydrophilic layer106may additionally provide structural support to the label100, such as by preventing deformation and/or disintegration of the label100when the face ply102or the displaceable liner110absorbs moisture (e.g., when they become saturated with a liquid). The hydrophilic layer106may be, for example, an inkjet coating. In another embodiment, a soft feel coating or other such coating may be employed. In some embodiments, the hydrophilic layer106may be a combination of two or more hydrophilic coatings; alternately, the hydrophilic coating106may be a combination of substances that, when mixed together, have a tendency to absorb water.

While the hydrophilic coating106may cover the entire face ply lower side102L, in embodiments, the hydrophilic coating106may instead be arranged in a pattern. The pattern may be any pattern (e.g., a checkerboard pattern, a dot pattern, lines, stripes, random, et cetera), and may but need not be symmetrical. The pattern may include openings (i.e., areas that are devoid of the hydrophilic coating106). In certain applications, the face ply102may inherently include the desirable properties of a hydrophilic layer106(e.g., the face ply102may have the ability to draw in water, may have sufficient structural integrity, et cetera) such that use of a separate hydrophilic coating106may be unnecessary. For instance, where the face ply102is relatively thick, it may by itself emulate a relatively thin face ply102that is layered with a hydrophilic coating106.

An adhesive layer108may be located on a hydrophilic layer lower side106L, and may be covered (e.g., wholly, partially) by the displaceable liner110(i.e., the displaceable liner110may initially be located on an adhesive layer lower side108L). The adhesive layer108may be any suitable adhesive now known or subsequently developed, such as a pressure sensitive adhesive. In an embodiment, the adhesive108may be a hot-melt adhesive. In use, when the displaceable liner110is displaced from the adhesive108to expose the adhesive108as discussed herein, the adhesive layer108may be used to secure the face ply102to the substrate50.
Upon displacement of the displaceable liner110, the exposed adhesive108may contact and bond with the substrate50to cause the face ply102to become secured to the substrate50; the displaceable liner110, conversely, may not contact the substrate50and therefore may not interfere with the bond between the substrate50and the label100.

In embodiments, it may be important to arrange the adhesive layer108in a pattern108P (seeFIG.3A) having areas comprising adhesive and areas devoid of adhesive (or at least having areas having a substantially lower concentration of adhesive as compared to other areas of the adhesive pattern108P). In these embodiments, the adhesive layer108may cover only portions of the hydrophilic layer106(i.e., the hydrophilic layer106may be uncovered by adhesive108in portions of the adhesive pattern108P devoid of the adhesive). The adhesive pattern108P may be any pattern (e.g., a checkerboard pattern, a dot pattern, lines, stripes, random, etc.), and may but need not be symmetrical. As discussed herein, the adhesive layer pattern108P may facilitate the workings of the displaceable liner110and may, in some embodiments, be a requirement therefor to ensure a secure bond between the label100and the substrate50.

The displaceable liner110may initially cover the adhesive108and ensure the adhesive108does not undesirably contact a surface (e.g., the printer, the conveyer belt, et cetera) or debris to cause the label100to inadvertently bond to such surfaces or debris; upon activation, the displaceable liner110may get displaced as discussed herein and consequently expose the adhesive108to allow for securement of the label100to a substrate50. Thus, the displaceable liner110may selectively shield the adhesive108, in effect functioning like a traditional liner, until such time that exposing the adhesive108to bond the label100with the substrate50is desired.

The adhesive pattern108P may include openings (i.e., areas that are devoid of the adhesive layer108). For example, in embodiments, the adhesive layer pattern108P may have one or more recesses108R and crests108C, as shown inFIGS.3through4, which are used to illustrate example operation of the displaceable liner110in view of the adhesive108. The adhesive pattern crests108C may be areas of the adhesive pattern108P on the label100(e.g., at the lower side106L of the hydrophilic layer) where the adhesive108is present, and the adhesive pattern recesses108R may be areas of the adhesive pattern108P devoid of the adhesive108. Each adhesive crest108C may have a height108H (seeFIG.4A), which may be (though need not be) substantially the same as the height of adjacent adhesive crests108C. This height108H may correspond to the thickness of the adhesive layer108. Further, each adhesive crest108C may be spaced apart (e.g., laterally spaced apart) from an adjoining adhesive crest108C by a distance109W, which distance may be equal to a width of an adhesive pattern recess108R.

Each adhesive recess108R, encapsulated on one or more sides by adjoining crests108C, may form a “pocket” (or a “liner receiving region”) for receiving the displaceable liner110once the displaceable liner110is activated by a fluid. Reception of the displaceable liner110by the pockets108R may be facilitated by the hydrophilic coating106underneath the adhesive layer recesses108R, which coating106may facilitate the displacement of the displaceable liner110by drawing in the displaceable liner110into the pockets108R upon activation.
The adhesive pattern108P may be arranged in any suitable manner such that the pockets108R thereof are configured to receive the displaceable liner110upon activation. In some embodiments, the recesses108R may contain some adhesive108but a height of the adhesive therein may be less than the height108H of the adhesive crests108C, thereby allowing for the displaceable liner110to be received within the recesses108R. In more detail, the adhesive crests108C may define the boundaries of the adhesive recesses108R. For example, the adhesive crests108C may be arranged along the label100in a plurality of lines or crisscrossed lines (i.e., a grid pattern), and a plurality of adhesive recesses108R may be located in the spaces between these lines of adhesive. The adhesive crests108C may each have any suitable width, height, and spacing, so long as the displaceable liner110situated thereon can cleanly transition from the adhesive crests108C to the pockets108R upon activation.

FIG.3Ashows a bottom view of an example label100. Prior to activation, the displaceable liner110may be disposed on the crests108C of the adhesive pattern108P. The crests108C inFIG.3Aare thus labeled with a dashed line to indicate that this portion of the adhesive pattern108P lies beneath the displaceable liner110. Once the displaceable liner110is activated (e.g., by water), the displaceable liner110may be displaced from above the crests108C to within the recesses108R or pockets, as shown inFIG.3B. The crests108C inFIG.3Bare demarcated with a solid line to indicate the displaceable liner110has moved from the crests108C into the pockets108R, thereby exposing the adhesive crests108C.

To illustrate further,FIG.4Ashows that the displaceable liner110may overlie the adhesive crests108C before the liner110is activated. Prior to activation, the adhesive recesses108R, which are devoid of adhesive108, may also be devoid of the displaceable liner110. Once activated with water or another fluid, the displaceable liner110may transition to within the recesses108R and expose the crests108C of the adhesive layer108, as shown inFIG.4B. The exposed adhesive crests108C may now be usable to secure the label100to the substrate50. In embodiments, the adhesive crests108C may form a bond with the substrate50whereas the displaceable liner110within the pockets108R may not contact the substrate50because of the appreciably greater height108H of the crests108C relative to a height110H of the displaceable liner110within the pockets108R. To this end, a thickness of the adhesive108layer may be substantially greater than a thickness of the displaceable liner110. If an undesirably thick layer of the displaceable liner110is disposed on the crests108C, upon activation the displaceable liner110may not fit within the pockets108R and may thus lead to insufficient exposure of the adhesive108.

WhileFIGS.3A and3Bshow a lined grid pattern of adhesive crests108C and recesses108R, other suitable adhesive patterns108P are contemplated and are within the scope of the disclosure (e.g., concentric shapes, checkered, random, et cetera). Further, whileFIGS.4A and4Bshow adhesive crests108C that are generally rounded, other suitable adhesive layer108shapes are contemplated and are within the scope of this disclosure (e.g., rectangular, triangular, random, et cetera). In embodiments, an important consideration may include ensuring that the pattern108P has suitably sized pockets108R or regions to receive the specific type and amount of displaceable liner110being used upon activation.
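To make this sizing consideration concrete, the following is a minimal sketch, not a claimed part of the disclosure, of how one might check, per unit length of a striped pattern108P, whether the recesses108R can plausibly receive the liner110that initially sits atop the crests108C. The function name, the margin factor, and the crest/recess widths are hypothetical assumptions; the example thicknesses (adhesive about 0.0008″, displaceable liner about 0.0002″) are those cited for an embodiment later in this disclosure.

```python
# Hypothetical sizing check for a striped adhesive pattern 108P.
# Per unit length of the pattern, the liner initially atop a crest 108C
# occupies roughly (crest_width * liner_thickness) of cross-section; the
# adjoining pocket 108R offers roughly (recess_width * open_depth).

def pocket_capacity_ok(crest_width, recess_width, crest_height,
                       liner_thickness, residual_adhesive=0.0,
                       margin=1.25):
    """Return True if a recess 108R can plausibly receive the
    displaceable liner 110 displaced from an adjoining crest 108C.
    The 1.25 margin factor is an assumption, not a disclosed value."""
    liner_cross_section = crest_width * liner_thickness
    open_depth = crest_height - residual_adhesive
    pocket_cross_section = recess_width * open_depth
    return pocket_cross_section >= margin * liner_cross_section

# Example with the cited thicknesses and assumed 0.02" widths:
print(pocket_capacity_ok(crest_width=0.02, recess_width=0.02,
                         crest_height=0.0008, liner_thickness=0.0002))
# True: pocket cross-section 0.02 * 0.0008 = 1.6e-5 sq. in. vs.
#       margin * liner cross-section 1.25 * (0.02 * 0.0002) = 5.0e-6
```

A check of this sort also reflects the note above that a recess containing residual adhesive108offers correspondingly less capacity for the displaced liner.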
As noted, the displaceable liner110, before it is activated, may shield the adhesive108and preclude the adhesive layer108from coming into contact with—and thus adhering to—undesirable surfaces or debris. The displaceable liner110may be displaced, i.e., may be made to travel from its original location vertically adjacent the crests108C into the recesses108R to expose the adhesive crests108C (i.e., transition from being vertically adjacent the crests108C to being laterally adjacent the crests108C) by applying a fluid to the displaceable liner110. In some embodiments, at least a part of the activated displaceable liner110may be dissolved into the label100(e.g., into the hydrophilic layer106thereof). While not required, depending on the configuration of the displaceable liner110and the substrate, in some embodiments a part of the displaceable liner110may contact the substrate50and be dissolved into the substrate50. Such contact between the displaceable liner110and the substrate50, however, is not needed when bonding the label100to the substrate50.

In effect, the displaceable liner110may be a liner that is selectively changeable between a first state and a second state. The first state may be a generally inert state where the displaceable liner110acts in a similar manner to the conventional liner, and precludes the adhesion of the label100to surfaces (e.g., undesirable surfaces) until the label100is ready to be adhered to the substrate50. The second state may be an “actuated” state. The displaceable liner110may be actuated by bringing the displaceable liner110in contact with a fluid (e.g., water), which fluid may, for example, be provided on the substrate50. When the displaceable liner110is brought into contact with the fluid on the substrate50, the displaceable liner110may actuate and dispel, or otherwise be displaced from its original location.

Broadly, the phrase “displaceable liner”, as used herein, refers to a cover or coating for covering a first composition, which cover is specifically adapted to begin to displace or otherwise dispel when the cover is brought into contact with a second composition. Upon such contact, the cover may be displaced such that the first composition is usable for contacting a third composition. In embodiments, the first composition may be the adhesive layer108, the second composition may be water (e.g., water vapor, liquid water, et cetera), and the third composition may be the substrate50. That is, in embodiments, the displaceable liner110may be a composition that: (a) covers the adhesive layer108so as to preclude the adhesive layer lower side108L from undesirably sticking to another object or surface (the inert state); and (b) is configured to displace and/or dispel when the displaceable liner is brought into contact with a fluid (the actuated or activated state). The term “displaceable liner”, as used herein, specifically excludes a traditional liner ply or plies, such as paper coated at least in part with silicone or other release material, a film, et cetera. The term “displace”, as used herein, connotes that the displaceable liner coating, once wetted, is dispelled, dissolved, or otherwise moves from its original location to another location.

In embodiments, the inactivated displaceable liner110may not have any (or any appreciable) adhesion. For example, while the displaceable liner110is covering the adhesive layer lower side108L prior to displacement, the displaceable liner110may not undesirably stick to objects that it touches.
The displaceable liner110, even upon activation, may not form a bond with a nonporous substrate in contact therewith. The activated displaceable liner110may in embodiments be capable of forming a bond with certain porous substrates upon contact; however, this bond may be weak relative to the bond formed by the adhesive108(e.g., the hot melt). Further, if the objective were to cause the dissolvable liner110to contact the substrate50, the amount of dissolvable liner110on the label100may need to be increased, which may then detract from the transition thereof into the pockets108R and unduly interfere with the bond to be formed by the adhesive layer108. In view of these considerations, in embodiments, only the adhesive108may be used to bond the label100to the substrate50and the dissolvable liner110may be used not for any bonding capabilities but to move out of the way of the adhesive108when desired to allow the adhesive108to create the bond.

In embodiments, the constituents of the displaceable liner110may include an enabler222, a facilitator224, and a stabilizer226. In some embodiments, the displaceable liner110may also include a slip agent228. The enabler222may be the base or main ingredient of the displaceable liner110. In embodiments, the enabler222may comprise a remoistenable adhesive or other similar material. The artisan will understand from the discussion herein that the displaceable liner110, once composed, behaves disparately from the enabler222and from any of its other ingredients separately.

The facilitator224may be an ingredient that facilitates displacement of the displaceable liner110into the pockets108R upon contact with a fluid (e.g., water). The facilitator224may do so by desirably impacting the properties (e.g., the viscosity) of the enabler222. In an embodiment, the facilitator224may be activated coconut carbon water224A, which, as is known, may be devoid of many of the impurities typically found in tap water. Applicant's experimentation has shown that use of activated coconut carbon water as the facilitator224as opposed to tap water allows the displaceable liner110to be activated by a larger group of fluids.

The stabilizer226may serve, among other things, to increase the stability and the temperature resistance of the enabler222. The stabilizer226may also serve as a blocking agent, such as by precluding the enabler222from being undesirably activated in humid ambient conditions. In some embodiments, the stabilizer226may influence other properties of the displaceable liner110, such as the surface tension of the displaceable liner110.

The slip agent228may be, for example, a release material (e.g., safflower oil228A, silicone228B, etc.) that increases the temperature resistance properties and/or the non-adhesion properties of the displaceable liner110. The slip agent228, which may make up about 0.25% by weight of the displaceable liner110mixture, may facilitate the use of certain printing methods with the label100, such as laser printing or direct thermal printing. For example, the slip agent228may ensure that the adhesive crests108C do not ooze out into the recesses108R because of the high temperatures to which the label100is subjected in laser printers. In embodiments, the displaceable liner110may include different (e.g., alternate, additional) ingredients that may influence the properties and/or the applicability of the displaceable liner110.
For example, embodiments of the displaceable liner110may incorporate various ingredients whose properties are more compatible with certain types of substrates50(as demonstrated below in tables1and2). As another example, where it is desired to give the displaceable liner110a hue (e.g., an off-white—or any other—hue such that the displaceable liner110resembles a traditional paper liner), a colored pigment may be included to impart such a hue to the displaceable liner110.

Table 1 below shows the constituents202of a displaceable liner110in an embodiment110A. This embodiment110A may include a mixture of non-toxic remoistenable adhesive222A, activated carbon coconut water224A (“ACC water”), precipitated calcium carbonate (PCC)226A, and safflower oil228A. The label100(specifically the bottom side100B thereof) may then be coated with this mixture to preclude the face ply102from undesirably adhering to objects and to allow the label to be adhered to the substrate50when desired.

TABLE 1 - DISPLACEABLE LINER 110A

No.  Ingredient 202                                        Quantity range 204        Preferred quantity 206
1    Enabler 222: Non-toxic remoistenable adhesive 222A    1 lbs. to 5 lbs.          3 lbs.
2    Facilitator 224: ACC water 224A                       0.5 lbs. to 1.5 lbs.      1 lbs.
3    Stabilizer 226: Precipitated calcium carbonate 226A   0.09 lbs. to 0.27 lbs.    0.18 lbs.
4    Slip Agent 228: Safflower oil 228A                    0.004 lbs. to 0.017 lbs.  0.0105 lbs.

Applicant's experiments have shown that this combination of ingredients202may enable the displaceable liner110A to readily be displaced from the adhesive crests108C to the adhesive recesses108R once activated by a fluid (e.g., water) to expose the adhesive crests108C. By retreating within the label100(e.g., the recesses108R therein), the activated displaceable liner110A may be precluded from interfering with the bond between the adhesive108and the substrate50. If a substantial part of the displaceable liner110did not get displaced from the adhesive crests108C to the pockets108R, this liner110would continue to block the adhesive crests108C at least in part and thus preclude the crests108C from serving their intended purpose—to securely adhere the label100to the substrate50. By being displaced, the displaceable liner110may allow the label100to be adhered to any object that bonds with the adhesive108(e.g., with a hot-melt adhesive).

In embodiments, the transition of the displaceable liner110from the adhesive crests108C to the pockets108R may generally be in toto such that the entire adhesive layer108(as opposed to only portions thereof) may be exposed. This may allow the label100to be secured to substrates50that require substantial amounts of adhesive for bonding the label thereto (e.g., plastics, high-density polyethylene, et cetera). Of course, the label100may also be secured to conventional substrates50(e.g., cardboard, paper, et cetera).

As noted, the displaceable liner110, once composed, behaves disparately from the enabler222and from any of its other ingredients separately. For instance, the enabler222by itself could not be used in place of the displaceable liner110because the enabler222would cause the label100to undesirably curl, and would cause the label100to unduly adhere to surfaces (e.g., hands, printing equipment, et cetera). Further, Applicant's experiments have shown the enabler222by itself does not adequately traverse to the pockets108R upon the application of a fluid (e.g., water) to expose the adhesive108.
And further yet, the amount of water required to cause the enabler222(e.g., remoistenable adhesive) to be used to adhere the label100to the substrate50is many times (specifically, 10-20 times) the amount of water it takes for the displaceable liner110to be displaced to give way to the underlying adhesive108. In the same vein, the displaceable liner110does not behave as one would expect the facilitator224, the stabilizer226, or the slip agent228to behave, either individually or combined together (with or without the enabler222). In this regard, the properties of the displaceable liner110are unexpected and surprisingly beneficial.

The quantity ranges204and the preferred quantities206of the various ingredients202listed above are merely exemplary and are not intended to be independently limiting. For example, in embodiments, more activated coconut carbon filtered water224A may be added to reduce the viscosity of the displaceable liner coating110, more PCC226A may be added to further enhance the stability of the enabler222, et cetera. Further, in embodiments, the preferred quantities206of the various ingredients202listed above may be proportionally reduced or increased for smaller or larger applications, respectively. The preferred quantities206listed above will yield about 4.1905 lbs. of the displaceable liner coating110A, which may be used to coat many thousands of labels100to cover the face ply lower sides102L thereof.

Table 2 below shows the constituents212of another displaceable liner110in an embodiment110B. This embodiment110B may include a mixture of non-toxic remoistenable adhesive222B, activated carbon coconut water224B (“ACC water”), gypsum226B, and silicone228B. The label100(specifically the bottom side100B thereof) may then be coated with this mixture to preclude the face ply102from undesirably adhering to objects and to allow the label to be adhered to the substrate50when desired.

TABLE 2 - DISPLACEABLE LINER 110B

No.  Ingredient 212                                        Quantity range 214                                     Preferred quantity 216
1    Enabler 222: Non-toxic remoistenable adhesive 222A    2 lbs. to 6 lbs.                                       4 lbs.
2    Facilitator 224: ACC water 224A                       0.125 lbs. to 0.375 lbs.                               0.25 lbs.
3    Stabilizer 226: Gypsum 226B                           1-50 heaping teaspoons (about 0.05 lbs. to 2.8 lbs.)   21 heaping teaspoons (about 1.2 lbs.)
4    Slip Agent 228: Silicone 228B                         0.0055 lbs. to 0.023 lbs.                              0.014 lbs.

The displaceable liner110B may operate similarly to the displaceable liner110A (e.g., by precluding undue contact between the adhesive108and substrates until activated). A key difference between the displaceable liner110B and the displaceable liner110A may be that the displaceable liner110B may require a porous substrate50, such as a cardboard box or other conventional substrate, which serves to absorb at least a portion of the displaceable liner110. Thus, with the displaceable liner110B, the transition of the displaceable liner110B into the pockets108R together with the dissolving of the displaceable liner110B by the porous substrate50may collectively allow for the adhesive108to be exposed and work to securely adhere the label100to the substrate50. Unlike the displaceable liner110A, the displaceable liner110B may not function effectively with nonporous substrates (such as plastic sheets, milk jugs, pill bottles, et cetera). In embodiments, the label100may have its components modified to compensate for such a displaceable liner110B, such as by including a thicker hydrophilic layer106that more readily absorbs the activated displaceable liner110B.
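Since both formulations above are described as proportionally scalable, a short sketch may help illustrate the arithmetic. The following is a non-authoritative illustration (the function and dictionary names are hypothetical) that totals the Table 1 preferred quantities, checks them against the figures stated above (a batch of about 4.1905 lbs.; a slip agent fraction of about 0.25% by weight), and scales the recipe proportionally:

```python
# Hypothetical helper for scaling the Table 1 (liner 110A) recipe.
# Quantities are in lbs., per the preferred quantities 206 above.

TABLE_1_PREFERRED = {
    "enabler 222A (non-toxic remoistenable adhesive)": 3.0,
    "facilitator 224A (ACC water)": 1.0,
    "stabilizer 226A (precipitated calcium carbonate)": 0.18,
    "slip agent 228A (safflower oil)": 0.0105,
}

def scale_recipe(recipe, target_batch_lbs):
    """Proportionally scale every ingredient so the batch totals
    target_batch_lbs, per the proportional-scaling note above."""
    factor = target_batch_lbs / sum(recipe.values())
    return {name: qty * factor for name, qty in recipe.items()}

batch = sum(TABLE_1_PREFERRED.values())
print(f"batch weight: {batch:.4f} lbs")  # 4.1905 lbs, as stated above
slip = TABLE_1_PREFERRED["slip agent 228A (safflower oil)"]
print(f"slip agent fraction: {slip / batch:.2%}")  # about 0.25% by weight

# A tenfold batch for a larger coating run:
for name, qty in scale_recipe(TABLE_1_PREFERRED, 10 * batch).items():
    print(f"{name}: {qty:.4f} lbs")
```

The same pattern would apply, under the same proportionality assumption, to the Table 2 quantities for the displaceable liner110B.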
The displaceable liner110B, like the displaceable liner110A, may comprise an enabler222, a facilitator224, a stabilizer226, and a slip agent228. In embodiments, the enabler222may be the non-toxic remoistenable adhesive222A, i.e., the same enabler222that is used in the displaceable liner110A. In an embodiment, the facilitator224of the displaceable liner110B may be the same as the facilitator224A, e.g., ACC water. In other embodiments, a different enabler222and/or facilitator224may be used in the different displaceable liners.

The stabilizer226used in the displaceable liners110A and110B may be different. For example, in an embodiment, instead of precipitated calcium carbonate, the displaceable liner110B may employ gypsum226B as the stabilizer226. Where a slip agent228is used, the displaceable liner110B may use the same slip agent or a different slip agent relative to the displaceable liner110A (e.g., silicone).

Like the displaceable liner110A, the quantity ranges214and the preferred quantities216of the various ingredients212listed above are merely exemplary and are not intended to be independently limiting. For example, in embodiments, more activated coconut carbon filtered water224A may be added to reduce the viscosity of the displaceable liner coating110, more gypsum226B may be added to further enhance the stability of the enabler222, et cetera. Further, in embodiments, the preferred quantities216of the various ingredients212listed above may be proportionally reduced or increased for smaller or larger applications, respectively.

The artisan would understand from the examples above that there may be a variety of enablers222, facilitators224, stabilizers226, and slip agents228that may be used in embodiments of the displaceable liner110, and that the composition of the displaceable liner110may be varied in line with a particular application. For instance, precipitated calcium carbonate may be used as the stabilizer226for applications involving any type of substrate (including nonporous substrates) as the dissolvable liner110A comprising precipitated calcium carbonate226A may not need to be dissolved into a substrate50to allow the label100to adhere to the substrate50via the exposed adhesive108. Alternately, gypsum226B may be used as the stabilizer226in applications where the substrate50is porous and capable of absorbing the dissolvable liner110. As noted, precipitated calcium carbonate226A may also be used as the stabilizer226when the substrate50is porous; however, dissolving of this dissolvable liner110A by the substrate50may not be a prerequisite, and indeed, may detract from the secure adhesion of the label100to the porous substrate50.

The dissolvable liners110A and110B may have other differences that may make them uniquely suitable for particular applications. For example, the hot melt108, once exposed by the activated dissolvable liner110A, may be usable to secure the label100to the substrate50after an extended wait period (e.g., a day). The hot melt108exposed by the activated dissolvable liner110B, conversely, may be repositionable but may need to be applied to a substrate within minutes upon wetting. In some embodiments, one or more of the ingredients may be omitted. For example, the slip agent228may be omitted in certain low temperature applications. Thus, in embodiments, one or more of a suitable enabler222, facilitator224, stabilizer226, and/or slip agent228may be used in the displaceable liner110to impart a desired property.
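The substrate-compatibility guidance above lends itself to a simple selection rule. The sketch below is purely illustrative (the helper name and return strings are hypothetical, and the disclosure of course contemplates other stabilizers226):

```python
# Hypothetical stabilizer-selection helper reflecting the guidance above:
# gypsum 226B (liner 110B) relies on a porous substrate 50 absorbing part
# of the activated liner, whereas precipitated calcium carbonate 226A
# (liner 110A) does not require absorption by the substrate.

def choose_stabilizer(substrate_is_porous):
    if substrate_is_porous:
        # Both options may work on porous substrates such as cardboard,
        # though absorption of liner 110A is not a prerequisite and may
        # even detract from secure adhesion.
        return "gypsum 226B (liner 110B) or PCC 226A (liner 110A)"
    # Nonporous substrates (e.g., plastic sheets, milk jugs, pill
    # bottles) call for the formulation that does not depend on
    # absorption by the substrate:
    return "precipitated calcium carbonate 226A (liner 110A)"

print(choose_stabilizer(substrate_is_porous=True))
print(choose_stabilizer(substrate_is_porous=False))
```

A fuller version might also surface the open-time difference noted above (the exposed hot melt of the liner110A remaining usable after an extended wait, versus minutes for the liner110B).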
In an embodiment, the enabler222A may have a vapor pressure at 20° C. of about 23.4 hPa, a density at 20° C. of about 1.08 g/cm3, a pH value at 20° C. of 4.0-6.0, a flash point of over 232° C., and a VOC content of 1.6 g/l (0.01 lb/gal). For example, in an embodiment, the remoistenable adhesive222A may be the PriscoBond 121-H remoistenable adhesive commercially available from Prisco®. Alternately or additionally, in other embodiments, the remoistenable adhesive may be one or more of the remoistenable adhesives disclosed in U.S. Pat. No. 3,574,153 to Sirota, U.S. Pat. No. 4,575,525 to Wancome et al., U.S. Pat. No. 4,623,688 to Flanagan, and U.S. Pat. No. 5,296,535 to Fazioli et al., each of which is incorporated by reference herein. Other remoistenable adhesives known to the artisan and/or subsequently developed may likewise be employed. Applicant's experimentation confirms that off-the-shelf remoistenable adhesives222disclosed herein, such as the PriscoBond 121-H product, cannot suitably be used as adhesive covers for labels until other ingredients are combined therewith.

The displaceable liner110may temporarily cover the adhesive layer108while the topcoat104is exposed for printing. As such, the label100may be printed using any suitable technology now known or subsequently developed (such as a direct thermal printer, a thermal transfer printer, a laser printer, an inkjet printer, et cetera). The displaceable liner110in its inert state may preclude adhesion between the label100and objects with which the label100comes into contact (e.g., a printer roller, another label, small debris, a table or other surface) before it is time to adhere the label100to the substrate50. The displaceable liner110may be heat-resistant and may be able to readily withstand the relatively high temperatures encountered by labels in printers (e.g., laser printers). Further, the displaceable liner110—which may comprise a non-toxic remoistenable adhesive as a constituent thereof—may as a whole be a non-sticky substance when dry (i.e., when in the inert state). Thus, the displaceable liner110itself may not undesirably stick to a surface before the label100is ready to be applied to the substrate50.

In embodiments, the displaceable liner110and the adhesive layer108on the label, prior to activation, may be in registry. For example, where the adhesive layer108is disposed in the pattern108P, the displaceable liner110may be disposed on the pattern108P such that the two patterns are in registry. Such may be effectuated, e.g., by using a roller having cells corresponding to the adhesive pattern108P to dispose the dissolvable liner110on the label100. In some embodiments, the adhesive pattern and the displaceable liner pattern may not be in registry. Alternately, in embodiments, one or both of the adhesive and the displaceable liner may not be disposed in a true pattern.

One advantage of the displaceable liner110over conventional liners may be that unlike labels having traditional liner plies, the user may ready the label100for adhesion to the substrate50without the need to discard any liner in a waste basket or elsewhere. Another advantage of the label100(and the other displaceable liner label embodiments disclosed herein) may be the low cost of the label100. As discussed herein, the label100may be made inexpensively at least in part because the label100, including the dissolvable liner layer(s) disposed thereon, may be relatively thin as compared to other labels.
The artisan will understand that the thin layers may require fewer raw materials, which may translate into cost savings. In an embodiment, for example, the face ply102(together with the topcoat104such as the direct thermal coating) may be about 0.003″ thick, the hydrophilic layer106may be about 0.00001″ thick (±0.000005″), the hot melt adhesive grid108may be about 0.0008″ thick (±0.00004″), and the corresponding displaceable liner110grid may be about 0.0002″ thick (±0.0001″). In some embodiments, two (or a different number of individual) layers of the dissolvable liner110may be applied, and each layer may be about 0.0001″ thick. The thickness of the hydrophilic layer106, the adhesive layer108, and the dissolvable liner110, even collectively, may be insignificant compared to the thickness of the face ply102, whereas conventional linerless compositions may double the size of the face ply (i.e., by adding 0.003″ of thickness to the face ply). In addition to cost benefits, the thinness of the labels100may allow for storage and transportation benefits to be reaped.

In embodiments, the thickness of the hot melt adhesive layer108and the other constituents (e.g., the dissolvable liner110) may be increased or decreased in line with a particular application. Care may be taken though to ensure that the recesses108R have sufficient volume to retain the dissolvable liner110upon activation. For instance, where the thickness of the adhesive layer108is reduced, care may be taken prior to increasing the thickness of the dissolvable liner110to ensure that the increased amount of dissolvable liner110would be properly received within the pockets108R of reduced size. The artisan will thus understand that the dimensions and arrangement of the adhesive pattern108P may influence the quantity and arrangement of a pattern110P of the dissolvable liner. For example, where the adhesive pattern108P comprises relatively thick and wide lines of adhesive (e.g., relatively high and wide crests108C), a greater amount and relatively wide lines of displaceable liner coating110may be required to adequately cover the crests108C, and this amount of displaceable liner110may in turn require larger recesses108R so that the displaceable liner110can be accommodated therein. In embodiments, and as discussed herein, the displacement of the displaceable liner110into the pockets108R may be facilitated by physically moving the label100on the substrate50(in addition to use of the fluid).

FIG.5is a flow chart illustrating a method500of making and using the displaceable liner110, in an embodiment. At step502, an enabler222may be placed in a container together with a facilitator224. For example, 4 lbs. of PB121-H-Prisco®222A may be weighed and placed in a container together with 0.25 lbs. of activated coconut carbon filtered water224A. Thereafter, at step504, the stabilizer226and the slip agent228may be added to the mixture. For instance, about 1.2 lbs. (i.e., about 21 heaping teaspoons) of gypsum226B or 0.18 lbs. of precipitated calcium carbonate226A may be placed in the container along with about 0.014 lbs. of safflower oil228A. The quantities of the various ingredients may be proportionally changed or otherwise different.

At step506, the ingredients202may be mixed together. For example, in an embodiment, a cutting blade spinning at about 2,000 rpm may be used to mix all the ingredients202until the resulting mixture becomes relatively smooth and homogeneous.
At step508, a label face stock102with the lower side102L thereof covered with a hydrophilic coating106(e.g., an inkjet or other suitable coating) may be provided. The hydrophilic coating106may be dried (e.g., by any suitable dryer now known or subsequently developed) after being applied to the face stock lower side102L. At step510, the dried hydrophilic coating106may be coated with a layer of adhesive108(i.e., the adhesive layer108may be applied to the lower side102L such that the hydrophilic coating106is between the lower side102L and the adhesive layer108). The adhesive108may be applied in a pattern108P, as described above. Once the adhesive pattern108P is disposed, then, at step512, a layer of the displaceable liner110may be applied and dried. The displaceable liner110may be disposed in a pattern110P. The adhesive pattern108P (e.g., the quantity of adhesive, the configuration of the pattern, et cetera) may be specifically configured to allow the pockets108R thereof to receive the displaceable liner therein. As detailed above, the displaceable liner pattern110P may correspond to the adhesive pattern108P (e.g., the two patterns may be in registry with each other). Additional layers of the displaceable liner110may also be disposed in the same pattern110P, as it has been found that disposing the displaceable liner110in a plurality of layers (e.g., two layers of 0.0001″ each instead of one layer that is 0.0002″ thick) may facilitate cleaner transition of the displaceable liner110into the pockets108R. At step514, indicia may be printed on the upper side102U of the face stock102(e.g., on the topcoat104thereof). The label100may be printed using any printer (including any conventional printer, such as a direct thermal printer, a thermal transfer printer, a laser printer, et cetera). Specifically, as the label100is passed through the printer, the topcoat104thereof may receive printed indicia whereas the displaceable liner110may cover the face ply lower side102L and preclude the label100from adhering to printer parts. In some embodiments, a cooling module may be used to cool the label100at the printer or downstream from the printer, which may keep adhesive from building up on the printer cutter. When the face ply102is ready to be adhered to a substrate, the displaceable liner coating110may be brought into contact with water or another fluid at step516to cause the displaceable liner coating110to dispel and reveal the adhesive layer108below. Moisture may be introduced to the face ply102directly and/or indirectly. The terms “water”, “moisture”, “liquid”, and “fluid” may be used interchangeably herein. In an embodiment, the substrate50(e.g., the box, package, envelope, plastic jug, etc.) and/or a section thereof may be moistened with water and the label bottom side100B may be placed on the moistened section of the substrate50so as to allow the displaceable liner coating110to interact with the moisture on the substrate50(indirect moistening) and displace (e.g., disperse from the adhesive layer108). In another embodiment, instead of moistening the substrate50and then placing the face ply102on the moistened substrate50, the face ply102(i.e., the displaceable liner coating110thereof) itself may be moistened to cause the displaceable liner coating110to dispel (direct moistening) and then the face ply102may be situated on the substrate50. 
If the moisture is applied directly to the displaceable liner coating110on the face stock102, and if the displaceable liner110B (as opposed to110A) is used, the face stock102may then be adhered to the substrate50any time within the next 90 seconds or so. Alternately, if the substrate50is moistened instead of directly moistening the displaceable liner coating110, then the face ply102may have to be placed on the moistened section of the substrate50within 3-20 seconds or so (as the moisture may thereafter be absorbed by the substrate50or otherwise removed and may not be able to serve to activate the displaceable liner coating110B). Where the displaceable liner110A is used, the label100may be moistened and may be placed on the substrate50for secure adhesion thereto even after an extended period (e.g., several hours later). In some embodiments, moisture may be introduced to the displaceable liner coating110both directly and indirectly (i.e., the substrate50may be moistened and the displaceable liner coating110may also be moistened before the face ply102contacts the moistened substrate50).

In embodiments, water (or other fluid) may be added to the substrate50and/or the displaceable liner110via a sprayer. Use of a sprayer may allow for a small volume of water to be disposed on the substrate50and/or the displaceable liner110and may reduce the risk that too much water may be disposed on the substrate50and/or the label100, causing damage to the label100. For example, oversaturating the label100with fluid may cause the label100to undesirably curl and/or disintegrate. In other embodiments, water may be added to the substrate50and/or the face ply102via other means (e.g., via a different water dispensing mechanism, via a moistened cloth or wipe, et cetera). An alternate spraying mechanism is discussed in greater detail below. In embodiments, a solution may instead be sprayed onto the substrate50and/or the displaceable liner110. For example, the solution to be sprayed may be a mixture of water and the displaceable liner110(e.g., about 1 tsp of displaceable liner110mixture for every 16 fluid ounces of water). Experimentation has shown that such a solution may more readily activate the displaceable liner110relative to, for example, water alone.

At step518, the moisture introduced to the displaceable liner coating110(e.g., directly and/or indirectly) may cause the displaceable liner coating110to begin to transition. At step520, the activated liner110may move from the adhesive crests108C to the recesses108R. For example, the liner110may pool into the recesses108R with the assistance of the underlying hydrophilic layer106whose affinity for the dissolvable liner110may serve to retain the dissolvable liner110in the pockets108R. In other words, the hydrophilic properties of the face ply102and/or the hydrophilic layer106may draw the activated displaceable liner110into the pockets that are the recesses108R. At step522, the adhesive layer108(e.g., the adhesive crests108C thereof) may be exposed by the now receding displaceable liner110and may be ready to secure the label100to the substrate50. At step524, if the moisture was introduced to the displaceable liner110directly, the label100may now be situated on the substrate50, where, in some embodiments, the substrate50may absorb (e.g., partially) the displaceable liner coating110(e.g., where the displaceable liner110B is used).
Conversely, if the moisture was introduced to the displaceable liner110indirectly (e.g., a section of the substrate50was moistened and the displaceable liner110was placed in contact with the moistened section of the substrate50), the moisture on the substrate50may cause the displaceable liner coating110to dispel and be drawn into the recesses108R, exposing the adhesive108.

While the hydrophilic properties of the label100may be sufficient to draw the activated liner110into the recesses108R, other methods may be employed in embodiments to facilitate the transition of the displaceable liner110from the adhesive crests108C to the recesses108R. For example, at step526, after the label is brought into contact with the substrate50, the directly or indirectly moistened label100may be physically moved (e.g., a relatively small amount, about 0.5 to 1 mm, etc.) while the label100is in contact with the substrate50. This shifting of the label100may facilitate the movement of the displaceable liner110relative to the adhesive layer108(e.g., via the friction created by contact between the liner110and the substrate50). Shifting the label100against the substrate50may be accomplished in any suitable manner. For instance, the label100may be “actively” shifted, such as by a user or machine (e.g., a vacuum driven tamp head that moves the label against the substrate50). As another example, the label100may be “passively” shifted, such as by the motion of the substrate50itself (e.g., the label100may be held in place on the substrate50while the substrate50is moving down a conveyor of an assembly line). Experimentation has shown that, depending on the configuration of the displaceable liner110, moving the label100relative to the substrate50after it is placed thereupon may assist in causing the displaceable liner110to transition to the pockets108R.

At step528, the label100may bond to the substrate50by virtue of the now-exposed adhesive layer108. In this way, by requiring an activating fluid (e.g., water) to activate the displaceable liner110, the displaceable liner110may remain in the inert state until the label100is to be applied to the substrate50. Furthermore, the requirement for a traditional liner ply may be negated. The amount of activating fluid used to dissolve the liner coating110may be negligible (e.g., relative to traditional remoistenable adhesives) and may not cause any appreciable damage to the substrate50. Once the displaceable liner110is wetted (directly and/or indirectly) and the face ply102is situated on the substrate50, the displaceable liner110may dispel relatively quickly such that the label100can generally simultaneously be adhered to the substrate50.

The label100and the dissolvable liner110thereof may operate as intended at room temperature. However, in embodiments, increasing the operating temperature may improve the efficacy (e.g., bond strength, time-to-bond, etc.) of the label100. As such, in embodiments, the method500may include the step of heating the label100and/or the substrate50prior to the application of the label100to the substrate50. The label100or the substrate50may be heated to a higher temperature (e.g., up to about 130° F.) in any suitable manner now known or subsequently developed, such as with a fan or an oven through which the substrate50passes. Care may be taken so as not to cause damage (e.g., burning, curling, etc.) to the substrate50and/or the label100.
As such, the temperatures to which the substrate50and/or the label100may be subjected may be adjusted based on the heat tolerance of the substrate50or the label100(i.e., objects with a higher heat tolerance may be able to withstand greater temperatures). It is to be understood that the steps of the method500may be modified, added to, and/or omitted as desired, and that such considerations have been contemplated and are within the scope of the present disclosure. For example, the step of heating the substrate50and/or the label100may be added to the method500. As another example, the artisan may understand that the method500may be readily modified to construct, print, and apply the label300(and other embodiments of the displaceable liner label) as described below.

Thus, as has been described, the displaceable liner110may, in effect, replace the traditional liner plies of prior art labels, and the label100may be used in any application where prior art labels were heretofore employed. The illustrated simplex label100, as discussed herein, may be configured for single-sided printing. Such, however, is merely exemplary, and the displaceable liner concept disclosed herein may likewise be used with labels that are printable on both sides. For example,FIGS.6A through6Billustrate a simplex label300—employing a displaceable liner314—that includes a single ply and is printable on both sides. It is to be understood that the components of the embodiment300may be substantially similar or the same as the components of the embodiment100, except as specifically noted and/or shown, or as would be inherent. Further, those skilled in the art will appreciate that the embodiment100(and thus the embodiment300) may be modified in various ways, such as through incorporating all or part of any of the various described embodiments, for example.

The label300may have a face ply302with a top side300T (FIG.6A) and a back side300B. The top side300T may include a topcoat304having a printable coating. The topcoat304may allow the top side300T to receive monochrome and/or color printing via any printing means now known or subsequently developed.

FIG.6Bshows the back side300B of the label300. The label300may, in an embodiment, include a perforation (or a line of weakness)306. The perforation306may demarcate a central portion307circumscribed by a border portion308. In embodiments, the central portion307may be separated from the border portion308along the perforation306. On the top side300T, in embodiments, each of the central portion307and the border portion308may include the printable coating304. The central portion307may include a printable coating304B opposing the topcoat304, which may enable the central portion307to be printed by any printer. Starting from the central portion307and moving away from the topcoat304, the backside border portion308may include a hydrophilic layer312, an adhesive layer309, and one or more layers of displaceable liner314. The adhesive layer309and the displaceable liner314may be generally the same as the adhesive layer108and the displaceable liner110described above, and thus may exhibit the same or similar properties. The displaceable liner coating314may temporarily cover the adhesive layer309and preclude the face ply302from unintentionally adhering to an object (e.g., a printer roller) until the label300is ready to be applied to the substrate50. The label300may thus be printed on both sides (e.g., in a double-sided printer or otherwise), using any printing technology.
The adhesive layer309may be arranged in a pattern of crests and recesses, much like the adhesive layer108and its pattern108P of crests108C and recesses108R. In embodiments, the adhesive layer309(and the hydrophilic layer312) may be confined to the border region308. Similarly, the displaceable liner314may be arranged in a pattern that generally corresponds to the pattern of the adhesive layer309, much like the displaceable liner pattern110P and the adhesive layer pattern108P. In operation, the adhesive layer309and the displaceable liner314may act similarly to the adhesive layer108and the displaceable liner110. In other words, when activated, the displaceable liner314may transition to the recesses of the adhesive layer309, thus exposing the crests of the adhesive layer309for adherence to a substrate50.

When it is time to adhere the label300to the substrate50(e.g., a package), the substrate50may be moistened (e.g., a small quantity of water may be sprayed on the portion of the substrate to which the label300is to be applied). Alternately or in addition, the label300may be moistened. In embodiments, only the border region308may be moistened so as to not wet the printed indicia on the back side. The label300may then be brought into contact with the substrate50such that the back side300B, and specifically the displaceable liner314coating disposed thereon, contacts the substrate50. The moisture may cause the displaceable liner314to transition to the adhesive layer recesses. With the adhesive layer309now exposed, the label300may be secured to the substrate50. When the recipient receives the package50, he may disassociate the central portion307from the border portion308via the perforations306, and access the indicia printed on the back side300B of the label300. In this way, the displaceable liner concept disclosed herein may be used to do away with conventional adhesives and wasteful conventional liners of both single-sided and double-sided labels.

It is to be understood that the label300may be constructed, printed, and applied by modifying the steps of the method500accordingly. For example, the method500may be modified such that the adhesive layer309and the displaceable liner314are only placed in the border portion308of the label300.

The illustrated labels100and300, as discussed herein, may each include only a single face ply for printing thereon. Such, however, is merely exemplary, and the displaceable liner concept disclosed herein may likewise be used with labels that include two or more face plies (e.g., a duplex label). For example,FIGS.7A-7Cillustrate an embodiment400of a duplex label with the displaceable liner. Embodiment400is substantially similar to the embodiment100, except as specifically noted and/or shown, or as would be inherent. Further, those skilled in the art will appreciate that the embodiment100(and thus the embodiment400) may be modified in various ways, such as through incorporating all or part of any of the various described embodiments, for example.

The label400may have a top side400T and a back side400B. Starting from the top side400T of the label400inFIG.7A, a first face ply402is shown. A first face ply upper face402U may include a first topcoat404having a printable coating. The first topcoat404may allow the upper face402U to receive monochrome and/or color printing via any printing means now known or subsequently developed.
The dimensions of the first topcoat404may be substantially equal to that of the first face ply402, such that the entirety of the first face ply upper face402U may be configured for printing. Alternatively, only a portion of the first face ply upper face402U may be configured for printing.

Continuing from the first face ply402downward, a first displaceable liner portion413may be located between the first face ply402and a second face ply403(i.e., the first displaceable liner413may be in contact with a first ply lower face402L and a second face ply upper face403U). The first displaceable liner layer413may be substantially similar to the displaceable liner110(i.e., both displaceable liners413and110may be constructed using the steps from the method500). In operation, the first displaceable liner layer413may serve to secure the first face ply402and the second face ply403together. That is, the first displaceable liner layer413may be activated (e.g., by water) and may then be absorbed by the first face ply402and the second face ply403to create a bond therebetween. Alternately or additionally, the first face ply402and the second face ply403may be adhered using a hot-melt or other adhesive (e.g., adhesive108, adhesive309, et cetera). In embodiments, the length and/or the width of the area of the face ply402on which the first displaceable liner413is disposed may be disparate from the length and/or the width of the first face ply402. For example, as shown inFIG.7A, the first displaceable liner413may fit within the perimeter of the first face ply402such that there may be a non-zero distance between each edge of the first displaceable liner413and the edges of the first face ply402.

The second ply lower face403L may be entirely covered with a second topcoat405(FIG.7B) having a printable coating. The second topcoat405may allow for the second ply lower face403L to receive monochrome and/or color printing via any printing means now known or subsequently developed. In embodiments, the length and/or the width of the second face ply403may be disparate from the length and/or the width of the first face ply402. For example, the second face ply403may fit within the perimeter of the first face ply402such that there may be a non-zero distance between one or more edges of the second face ply403and the edges of the first face ply402(seeFIG.7C). As another example, the dimensions of the second face ply403may generally match the dimensions of the first displaceable liner413.

A displaceable liner414may be disposed such that at least a portion of the displaceable liner414is adjacent and in contact with an adhesive layer409that is located on the first ply lower face402L (seeFIG.7C). While some embodiments of the label400may have the second face ply403, the topcoat405, and the displaceable liner414arranged in an overlapping manner, other embodiments of the label400may have second topcoat405boundaries that are defined by a border portion408where the displaceable liner414may be arranged (i.e., there may be little to no overlap between the second topcoat405and the displaceable liner414layers in the border portion408). The border portion408may be provided on a part of the first ply lower side402L adjacent the outer boundaries of the first ply402, and may, in embodiments, also overlap part of the second ply lower side403L adjacent the outer boundaries thereof. Thus, the second ply lower face403L may have a central region407that is devoid of the displaceable liner414and is printable by virtue of the second topcoat405.
That is, the label400layers may be formed such that at least a part of the central region407and the topcoat405arranged thereon remains exposed (e.g., for printing) once the label400construction is complete. In this manner, the label400may be configured for double-sided printing (e.g., successive and/or simultaneous printing). The artisan would understand that the border portion408may include the space that is bound by both the perimeter of the central portion407and the perimeter of the largest label400layer (e.g., the first face ply402inFIG.7A). Alternatively, the border portion408may include any amount of space along the label400as long as at least a portion of the central region407remains exposed. In embodiments, the border portion408does not encompass the entirety of the perimeter of the central region407(e.g., the border portion408may consist only of one or more strips located at opposing sides of the central region407). In operation, non-uniform label400layer dimensions may allow some layers to contact other layers to increase label400structural integrity. Some embodiments may include perforation and/or die cut lines (i.e., lines of weakness)406to facilitate access to the second ply lower side403L after the label400has been adhered to the substrate50. For example, the first face ply402and/or the second face ply403may contain perforations/die cuts406that generally demarcate the central region407. These lines of weakness406may be exploited to separate a portion of the label400from the remainder, thus exposing the central portion407for viewing. Because any indicia printed onto the second ply lower side403L may be hidden from view until a user tears along the lines of weakness406of the label400, private or personalized indicia may be arranged there. For instance, a private message, a packing slip detailing package contents, and/or advertisement materials may be located on the second face ply lower side403L. Conversely, the first face ply upper face402U may have public information indicia, such as a shipping/mailing address. At the bottom side400B of the label400, there may be the second displaceable liner414. The second displaceable liner414may be generally the same as or similar to the first displaceable liner413, though in embodiments the second displaceable liner414may differ (e.g., by containing different amounts of ingredients202). The second displaceable liner414may be arranged along the border portion408such that the displaceable liner414may cover the hydrophilic coating412. While not indicated in this figure, the displaceable liner414(and/or413) may be disposed on the label400in a plurality of layers. The adhesive layer409may be arranged in a pattern of crests and recesses, much like the adhesive layer108and its pattern108P of crests108C and recesses108R. Similarly, the displaceable liner414may be arranged in a pattern that generally corresponds to the pattern of the adhesive layer409. In operation, the adhesive layer409and displaceable liner414may act similarly to the adhesive layer108and the displaceable liner110. In other words, when activated, the displaceable liner414may transition to the recesses of the adhesive layer409, thus exposing the crests of the adhesive layer409for adherence to a substrate50. The substrate50may be a porous substrate or a non-porous substrate.
The displaceable liner coating414may temporarily cover the adhesive layer409and preclude the label400from unintentionally adhering to an object (e.g., a printer roller) until the label400is ready to be applied to the substrate50. The label400may thus be printed on both sides (e.g., in a double-sided printer or otherwise). When it is time to adhere the label400to the substrate50(e.g., a package), the substrate50may be moistened (e.g., a small quantity of water may be sprayed on the portion of the substrate to which the label400is to be applied). The label400may then be brought into contact with the substrate50such that the back side400B, and specifically the second displaceable liner414coating disposed thereon, contacts the moistened substrate50. The moisture may cause the second displaceable liner414to dispel and move into the recesses of the adhesive layer409. The label400may also be shifted after it is placed on the substrate50to facilitate the transition of the dissolvable liner414into the pockets, as discussed above. Now exposed, the adhesive409may contact and secure to a substrate50. In embodiments having perforation, die-cuts, or other lines of weakness, the recipient of the package may disassociate the central portion407(or other portion defined by the lines of weakness) from the border portion408, and access the indicia printed on the second ply lower face403L. In this way, the displaceable liner concept disclosed herein may be used to do away with wasteful conventional liners of both single-sided and double-sided labels. It is to be understood that the label400may be constructed, printed, and adhered by modifying the steps of the method500accordingly. For example, the step of adhering a first face ply402and second face ply403together with an adhesive or displaceable liner mixture may be added to the method500. The label400as discussed herein may be one example of a duplex label with the dissolvable liner. Yet another duplex label with the displaceable liner concept is shown inFIGS.8A-8D, which illustrate an embodiment600. Embodiment600is substantially similar to the embodiment400except as specifically noted and/or shown, or as would be inherent. Further, those skilled in the art will appreciate that the embodiment400(and thus the embodiment600) may be modified in various ways, such as through incorporating all or part of any of the various described embodiments, for example. The label600may have a top side600T and a back or bottom side600B. Starting from the top side600T of the label600inFIG.8A, a first ply602is shown. A first ply upper face602U may include a topcoat604having a printable coating (e.g., a direct thermal coating, an inkjet coating, a thermal transfer coating, et cetera). The topcoat604, which may, in embodiments, cover the entirety of the first ply upper face602U, may allow the upper face602U to receive monochrome and/or color printing via any printing means now known or subsequently developed. The first ply602may have a central portion607T, which may be demarcated by one or more lines of weakness605(e.g., perforations, die cuts, etc.). In embodiments, the entire first ply602may have a printable coating. Alternatively, only a portion of the first ply (e.g., the central portion607T thereof) may be configured for printing. In such an embodiment, one or more first ply border portions608T may be devoid of the topcoat604. In embodiments, one or more boundaries or edges618of the label600may have non-linear geometry.
For example, as illustrated inFIG.8D, one boundary618may include a wavy or non-linear edge620. While a wavy edge620is depicted inFIG.8D, such a non-linear edge620shape is merely illustrative, and other non-linear shapes are envisioned for the edge620(e.g., a zig-zag shape, an undulating shape, a corrugated shape, et cetera). The non-linear edge620may reduce or eliminate the chance that the label600may undesirably bond to a surface, such as a printer part, when the label600is undergoing printing. The artisan would understand that this utility may be desirable in other labels; thus, it is contemplated herein that other label embodiments (e.g., labels100,200,300,400, etc.) may be modified to include a non-linear edge. Continuing from the first face ply602downward, a displaceable or dissolvable liner portion610may be located between the first ply602and a second ply603(i.e., the first displaceable liner portion610may be in contact with a first ply lower face602L and a second ply upper face603U). In embodiments, the first displaceable liner portion610may cover an entirety of the first ply lower face602L, while in other embodiments the first displaceable liner portion610may cover only a portion of the first ply lower face602L (e.g., in a pattern). The first displaceable liner portion610may include a first hydrophilic coating612and a first dissolvable liner layer614. The first dissolvable liner layer614may be substantially similar to the displaceable liner110(i.e., both displaceable liner layers614and110may be constructed using the steps from the method500). In operation, the first displaceable liner portion610may serve to secure the first face ply602and the second face ply603together. That is, the first displaceable liner layer614may be activated (e.g., by water) and may then be absorbed by the first face ply602and/or the first hydrophilic coating612, and the second face ply603to create a bond therebetween. Alternately or additionally, the first face ply602and the second face ply603may be adhered using a hot-melt or other adhesive (e.g., adhesive108, adhesive309, et cetera). Continuing downward, the dissolvable liner layer614may contact the second ply upper side603U of the second ply603. The second ply603may be like the first ply602, except the second ply603may have smaller dimensions than the first ply602. As seen inFIG.8C, one or more outer edges of the second ply603may be inwardly adjacent external regions606of the displaceable liner610. That is, the external regions606may be one or more areas that are outwardly adjacent the second ply603. Thus, the second ply603may reside only in a second ply region607B, and the dissolvable liner614may be exposed (e.g., exposed for contact with the substrate50) in the external regions606. In embodiments, the second ply603may be smaller (e.g., have less width) than the first ply central portion607T. This may result in some non-zero distance between two or more adjacent edges of the perimeter of the central portions607T and the second ply603, when the second ply603is adhered to the first ply central portion607T via the displaceable liner610. This non-zero distance between edges may be seen inFIG.8A, where the boundaries of the second ply603are inwardly adjacent the perforations605that define the first ply central portion607T. Further downward, a lower face603L of the second ply603may have a coating609, as illustrated inFIG.8C.
Like the topcoat604, the coating609may be a printable coating that may receive monochrome and/or color printing via any printing means now known or subsequently developed, thus enabling the printing of indicia on the second ply lower side603L. The coating609may have dry tac properties (e.g., temporary adhesive properties), and in embodiments, may be used because of its superior abilities to accept printed indicia. In an embodiment, the coating609may be a layer that may change between a first state, a second state, and a third state. In the first state, the coating609may not have any significant adhesive properties, and may be activatable with a fluid or by another method. Once activated, in the second state, the coating609may exhibit adhesive properties. The coating609may transition (e.g., after a period of time) to the third state, where the coating609adhesive properties may diminish (e.g., partially, entirely). In use, the coating609may be applied to the second ply lower face603L, where it may reside in the first state until a fluid is applied to the coating609. Then, the coating609may transition to the second state, and may be brought into contact with a substrate50and adhere thereto. Next, the adhesive properties of the coating609may diminish (e.g., by drying out). In embodiments, the coating609may instead transition between only two of the above states. For example, the coating609may instead exhibit adhesive properties inherently (e.g., after being applied to the second ply lower face603L), and then those adhesive properties may diminish over time as the coating609dries (e.g., after the coating609is brought into contact with the substrate50). Similar to the topcoat604, the coating609may partially or entirely cover the second ply603(i.e., the second ply central portion607B). The lines of weakness605may allow the central portion607T to be disassociated from the label600after the label600is secured to a package. When the central portion607T is so disassociated, the second ply603may remain adhered to the central portion607T of the first ply602, and thus allow for indicia printed on the bottom face603L of the second ply603to be read. When the central portion607T of the first ply602and the second ply603are so removed, the border portion608T of the first ply602may remain on the substrate. Because any indicia printed onto the second ply lower side603L may be hidden from view (e.g., by the substrate the label600is applied to) until a user exploits the lines of weakness605of the label600, private or personalized indicia may be arranged there. For instance, a private message, a packing slip detailing package contents, and/or advertisement materials may be located on the second ply lower side603L. Conversely, the first face ply upper face602U may have public information indicia, such as a shipping/mailing address. The displaceable liner coating614may selectively preclude the label600from unintentionally adhering to an object (e.g., a printer part) until the label600is ready to be applied to the substrate50. The label600may thus be printed on both sides (e.g., in a double-sided printer or otherwise). When it is time to adhere the label600to the substrate50(e.g., a package, a porous surface, etc.), the substrate50and/or the displaceable liner portion610may be moistened (e.g., a small quantity of water may be sprayed on the dissolvable liner layer614and/or the portion of the substrate to which the label600is to be applied).
The label600(e.g., the dissolvable liner portion610and the coating609thereof) may then be brought into contact with the substrate50such that the back side600B contacts the moistened substrate50. The moisture may cause the displaceable liner layer614to dispel and move into the first ply602and the substrate50. Now dispelled, the dissolvable liner layer614may dry and secure the label600to the substrate50. A recipient of the substrate50may disassociate the central portions607T and607B (or other portion), as defined by the lines of weakness605, from the border portions608T, and access the indicia printed on the second ply lower face603L. In this way, the displaceable liner concept disclosed herein may be used to do away with wasteful conventional liners of both single-sided and double-sided labels. The artisan would understand that the label600may be constructed, printed, and adhered by modifying the steps of the method500accordingly. For example, the steps of adhering a first face ply602and second face ply603together with the displaceable liner portion610and printing on the coating609may be added to the method500. The artisan would understand thatFIGS.8A-8Dare shown for illustrative purposes and that the figures are not to scale. Similarly, the dimensions of the label600components may differ from what is shown. While not indicated in the figures, the dissolvable liner portions610may be disposed on the label600in a plurality of layers. In some embodiments, the displaceable liner portions610may forego a hydrophilic coating, though doing so may reduce the effective bonding strength of the dissolvable liner layers614. Thus, as has been described, the displaceable liner disclosed herein may serve to do away with traditional liner and adhesive layers, and in so doing, provide a label that is relatively more environmentally friendly. Moreover, the labels using the displaceable liners disclosed herein may significantly reduce the manufacturing costs of the labels. Indeed, according to some preliminary estimates, just circumventing the need for a disposable liner may reduce the cost of traditional labels (i.e., labels having silicone-laden liner plies) by up to 50%. While embodiments of the displaceable liner may be incorporated with labels as described above, other embodiments of the displaceable liner may be incorporated with tape, such as adhesive tape. Conventional adhesive tape may use a remoistenable adhesive. That is, conventional adhesive tapes may use a type of adhesive that must be moistened, often with water, before the adhesive is active for adhering to a substrate. The remoistenable adhesive of the conventional tape may require a relatively large amount of water to activate, and once the remoistenable adhesive has been activated, it may have reduced adhering strength in subsequent activations if left to dry without being applied to a substrate. In the embodiments discussed above, the displaceable liner (e.g., the liner110,314, and/or414) is displaced by the water, allowing the various labels to be adhered to the substrate50. The displaceable liner may be absorbed at least in part by the substrate50, thus facilitating the exposure of the adhesive layer. In some applications, however, the substrate50may be unable to absorb water (or other liquids). For example, where the substrate50is glass, a plastic film, etc., it may be unable to absorb the displaceable liner displaced from the label when compared to, for example, a porous substrate.
However, the inability of the displaceable liner to dissolve into the non-porous substrate may not hinder adhesion of the tape or label to the substrate because the displaceable liner may transition to the pockets and thereby move out of the way of the adhesive and thus allow the adhesive to do its job to secure the tape or label to the substrate. In some embodiments, the displaceable liner of label100, and its embodiments (e.g., embodiments300and400), may instead be used in an adhesive tape100′ (e.g., masking tape, painter's tape, duct tape, packaging tape, et cetera). The adhesive tape embodiments100′ may have many of the same, or similar, components as the label100. For example, and as shown inFIG.9, the adhesive tape100′ may have a face ply102′ (e.g., paper or film) corresponding to face ply102, an adhesive108′ corresponding to adhesive108, and a displaceable liner110′ (e.g., a hydrophilic layer106′ and a displaceable liner layer110′) corresponding to displaceable liner110. The displaceable liner110′ may be made with ingredients202(e.g., enabler222, facilitator224, stabilizer226, and a slip agent228) as discussed above. A difference between the label100and the adhesive tape100′ may be that the adhesive tape100′ may have a relatively long length compared to the label100(e.g., while the label100may be around the size of a shipping label, the adhesive tape100′ may be a relatively long length of tape which may be wrapped around a cylinder). Indicia (e.g., icons, text, logos, graphics, colors, etc.) may still be printed or otherwise added to the face ply102′ (e.g., the top side100T′ of the face ply102′). The adhesive tape100′ may have indicia printed thereon via thermal transfer methods, flexographic or offset printing, or other printing methods. In embodiments, the face ply102′ of the tape100′ may not be printable. Such may provide cost savings as compared to printable tape. The artisan will understand that the tape100′ may require less water for application relative to conventional remoistenable adhesive tape. In embodiments, the adhesive tape100′ may require about 1/10th of the amount of water a conventional remoistenable adhesive tape would require to activate the adhesive layer. In some embodiments, the adhesive tape100′ may be incorporated with a dispenser700(FIG.10). The dispenser700may include a roller750(e.g., a cloth roller, also referred to herein as a moistener) configured to retain water for moistening the displaceable liner110′ of the adhesive tape100′. The adhesive tape100′ may pass over the moistened roller750to apply the water necessary to dispel the displaceable liner110′. One advantage of the adhesive tape100′ compared to conventional remoistenable tape may be that the relatively reduced thickness of the tape100′ may allow for a greater quantity thereof to be retained for use in the dispenser700. The artisan will understand that the tape100′ may be constructed, printed, and applied by modifying the steps of the method500accordingly. For example, the method500may be modified to include the steps of applying the adhesive layer108′ in a pattern of crests and recesses, and applying the displaceable liner layer110′ on top of the adhesive layer108′ peaks. Focus is directed now toFIGS.11-17,17A, and18-20to illustrate how application of labels employing the dissolvable liner (e.g., label100,300,400, et cetera) may be automated in full or in part. Label applicators for applying labels (e.g., shipping labels, return labels, product labels, etc.)
to substrates are known in the art. A traditional label applicator apparatus comprises a printer for printing indicia on the label and a tamp head which in its original position is situated upwardly adjacent the printed label. The tamp head working surface extends generally horizontally and may have vacuum nozzles or other means for holding the label to the tamp head during the application process. The printer prints indicia on the label and the label is pushed laterally underneath the tamp head. The tamp head remains stationary until the printing of the label is complete and the label is brought into registry with the tamp head. Once the label printing is complete and the entire label is below and in registry with the tamp head, the tamp head moves vertically downward towards a substrate and, due to the vacuum, causes the printed label to travel with the tamp head. The tamp head eventually sandwiches the printed label between itself and the substrate (e.g., the package to which the label is to be adhered, which may be brought underneath the tamp head via a conveyer belt for instance). The adhesive on the underside of the label (e.g., on the face stock thereof) causes the label to adhere to the substrate. The tamp head then moves vertically back up to its original position, and the next label is subsequently printed and situated underneath the tamp head so that the tamp head can apply the next label to the next substrate (e.g., another box on the moving conveyer belt). This process is repeated for each label that is printed and applied to a substrate. One issue with the traditional label applicator is that the next label cannot be prepared for application (e.g., printed) until the tamp head returns to its original position after applying the preceding label. This is because if the next label were to be printed (and all or part thereof were to exit the printer), the tamp head would not be able to move vertically upwards to its original position without interacting with the next label. Such interaction between the tamp head and the fully or partially printed label may be problematic because the conventional label, because of its exposed adhesive, may undesirably stick to the tamp head as the tamp head moves upward from its lowermost position (upon applying the label) toward its original position. To preclude such contact, the printer of the prior art labeling apparatus typically waits to print the next label until after the tamp head has applied the preceding label to the substrate and has returned to its original position thereafter. Once the preceding label has been applied and the tamp head has returned to its original position, the printer then prints the next label. As before, the next label is brought into registry with the tamp head, and once the printing is complete, the tamp head moves downward and sandwiches the next label between the tamp head and the next substrate to cause the next label to adhere to the next substrate. It may be inefficient to have to wait to start printing the next label until after the tamp head has returned to its original position after applying the preceding label to a substrate (which may be referred to herein as a "wait time" or a "waiting time requirement"). The wait time is downtime which may reduce the number of labels that may be printed and applied to substrates in a period of time (e.g., every minute).
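To put the cost of the wait time in rough numbers, consider the following hedged back-of-the-envelope comparison; the timings are assumptions chosen for illustration only and do not come from the disclosure, and the "overlapped" case anticipates the apparatus described below, in which printing of the next label proceeds while the preceding label is being applied.

```python
# Hypothetical cycle-time comparison; PRINT_S and APPLY_S are assumed
# values, not figures from the disclosure.
PRINT_S = 2.0    # assumed time to print one label
APPLY_S = 1.5    # assumed tamp head travel + application + return

serial_cycle = PRINT_S + APPLY_S       # prior art: printing waits for the tamp head
overlap_cycle = max(PRINT_S, APPLY_S)  # wait time eliminated: phases overlap

for name, cycle in (("serial (prior art)", serial_cycle),
                    ("overlapped", overlap_cycle)):
    print(f"{name}: {60.0 / cycle:.0f} labels per minute")
# serial (prior art): 17 labels per minute
# overlapped: 30 labels per minute
```

Under these assumed timings, overlapping the two phases nearly doubles throughput; the actual gain would depend on the real print and application times of a given installation.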
Such downtime may be particularly undesirable because the process of printing and applying labels to substrates may be repeated a multitude (e.g., many thousands) of times every day. Elimination of this waiting time requirement may allow for additional labels to be printed and applied in a time period (e.g., each minute), and consequently, improve the efficiencies of the label printing and application process and reduce the costs associated therewith. Embodiments of the present disclosure may relate to a label making and applying apparatus that eliminates the waiting time requirement. FIGS.11through14show a label making and applying system embodiment1000(also referred to herein as the "labeling apparatus"). The labeling apparatus1000may be used to print labels (e.g., labels100, labels300, labels400, tape100′, etc.) and may, in embodiments, include a staging area1100, an automated or semi-automated arm1200, and a tamp head1300. The labeling apparatus1000may also have associated therewith means (e.g., a conveyer belt) to allow for one or more substrates50(e.g., a cardboard or other box, a surface, a ply, clothing, packaging, etc.) to be successively placed at a location where a label may be adhered thereto by the apparatus1000. In embodiments, the labeling apparatus1000may make use of a computing system1600(FIG.20) to perform the functions described herein. As seen inFIG.12, the staging area1100may in embodiments comprise a printer1120and a holding tray1140. The printer1120may be any printer now known or subsequently developed (e.g., a laser printer, an inkjet printer, a direct thermal printer, a thermal transfer printer, a commercial printer, a handheld printer, etc.) for suitably printing the label, and may be configured to print indicia (e.g., personalized and/or generic indicia, color and/or black and white indicia, etc.) thereon. The holding tray1140may be configured to hold labels (e.g., a label100) during the printing process and/or after the label has been printed by the printer1120(e.g., until the tamp head1300returns to its original position after applying the preceding label, as discussed herein). The printer1120may print relevant indicia (e.g., packaging information, shipping information, marketing materials, etc.) on the label, and deposit the label in the holding tray1140. In an embodiment, the printer1120may begin the printing of an additional label as soon as the preceding label is removed from the holding tray1140. In another embodiment, the printer1120may begin printing the next label within 1, 2, 3, or 4 seconds of the removal of the preceding label from the tray1140. The holding tray1140may be a receptacle (e.g., a plate, bin, tub, tray, etc.) configured to receive and hold the labels processed by the printer1120for the tamp head1300. The holding tray1140may, in embodiments, extend generally vertically. In embodiments, the holding tray1140may have a lip or one or more protruding edges1140A (which may extend generally laterally or otherwise be perpendicular to the vertically extending portion of the holding tray1140) to aid in holding the label within the holding tray1140after the label has been printed by the printer1120. In other embodiments, the holding tray1140may have a textured plasma or other coating configured to inhibit the labels from undesirably adhering to the holding tray1140. Alternately or in addition to the lip1140A, the holding tray1140may, in embodiments, be charged with a vacuum to hold the label within it.
For example, the holding tray1140may include a vacuum plate1140B configured to selectively retain the label with an applied vacuum. The vacuum plate1140B may, for example, apply the vacuum constantly, intermittently (e.g., at timed intervals that are in synchronization with a printing cycle of the printer1120), manually, and/or automatically. The vacuum plate1140B may automatically apply the vacuum in response to, for example, a sensor (e.g., a sensor1280) detection of the printed label. As another example, the vacuum plate1140B may automatically apply the vacuum in response to a signal from the printer1120indicating that the printing of the label is (or is about to be) complete and the label will be deposited within the holding tray1140. To allow the label to be collected by the automated arm1200, the vacuum plate1140B may cease operation and release the label from the vacuum plate1140B vacuum. Similar to the methods of applying the vacuum described above, the label may be released from the vacuum plate1140B intermittently (e.g., at timed intervals that are in synchronization with a collection cycle of the automated arm1200), manually, and/or automatically. The vacuum plate1140B may automatically release the label in response to, for example, a sensor (e.g., a sensor1280) detection that the automated arm1200is ready, or is about ready, to collect the label. As another example, the vacuum plate1140B may automatically release the label in response to a signal from the automated arm1200indicating that the automated arm is prepared to collect the label. In some embodiments, the vacuum plate1140B may apply a vacuum charge that is configured to be overpowered or otherwise replaced by another vacuum charge (e.g., by a vacuum charge of the tamp head1300, as will be discussed in greater detail below). That is to say, the vacuum plate1140B vacuum may be overridden by a vacuum from another source, and thus the other source may collect the label from the holding tray1140. As discussed above, a label having a displaceable liner or a displaceable adhesive liner may transition to the activated state (and thus be made ready for adherence to a surface) once said liner is brought into contact with a fluid. As such, the staging area1100may, in embodiments, comprise a sprayer or other fluid dispensing means1160downstream the printer1120, as shown inFIG.12. In embodiments, the sprayer1160may be downstream the holding tray1140. The sprayer1160may be fluidly coupled to a tank for retaining fluid (e.g., water or other fluid for dispelling the displaceable liner to expose the adhesive in case of the displaceable liner label or for otherwise activating the displaceable liner in case of the displaceable adhesive liner label). In embodiments, the sprayer1160may spray the fluid onto the label, e.g., on the underside thereof, before the label is adhered to the surface or substrate50. Alternately or in addition, the sprayer1160may be used to spray fluid onto the substrate50itself before the label is brought in contact therewith. The fluid dispensing means1160may, in embodiments, include a pump, a spray nozzle, valves, delivery tubes, etc., to allow for the fluid to be dispersed as desired (e.g., onto the underside of the printed label as the printed label travels from the holding tray1140and comes adjacent the sprayer1160, onto the substrate50prior to the application of the label thereto, et cetera).
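As a minimal sketch of the vacuum plate1140B synchronization described above, the following fragment models the apply/release triggers; the class and method names are invented for illustration, and the disclosure does not prescribe any particular software interface.

```python
# Illustrative model of vacuum plate 1140B behavior; names are hypothetical.
class VacuumPlate:
    def __init__(self):
        self.vacuum_on = False

    def on_label_detected(self):
        # e.g., sensor 1280 detects a printed label, or printer 1120
        # signals that printing is (or is about to be) complete
        self.vacuum_on = True      # hold the label within the tray 1140

    def on_arm_ready(self):
        # e.g., arm 1200 signals it is prepared to collect, or a timed
        # interval synchronized with the collection cycle elapses
        self.vacuum_on = False     # release the label, or allow the tamp
                                   # head's stronger vacuum to override
```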
The artisan will understand from the disclosure herein that liners other than the displaceable liners may also, in embodiments, be employed with the label. Alternately, the label may employ no liner (i.e., the label may have an exposed adhesive layer) and the holding tray1140may include a non-stick or other adhesion-resistant coating to preclude undue interaction between the exposed adhesive layer and the holding tray1140. One example of a sprayer1160usable with the labeling apparatus1000is a pulse width modulation (PWM) flow control sprayer. PWM flow control sprayers, like Spraying Systems Co.'s Pulsajet® spray nozzle, may spray at several thousand cycles a minute (e.g., 10,000 cycles a minute) to allow for continual use on an assembly line. PWM flow control may allow great control over the spraying function of the sprayer1160, and thus wastage of sprayed fluids may be mitigated while coverage of the sprayed object (e.g., the labels) may remain consistent. The arm1200, as seen inFIG.13, may comprise a plunger1220, a plate1240, a rotation device1260, and one or more sensors1280. The plunger1220may be configured to be telescoping (or may otherwise be configured to selectively retract, extend, and/or otherwise adjust position), and may have a plate1240attached to a distal end thereof. The plate1240may be configured to hold the tamp head1300(seeFIG.11), and may, in embodiments, comprise vacuum nozzles to charge the tamp head1300with a vacuum. The rotation device1260may be operably coupled to a proximal end of the plunger1220and may be configured to cause the arm1200to rotate (or otherwise move) such that the tamp head1300pulls the printed label from the tray1140and eventually brings the label in proximity with the substrate50for adhesion of the label thereto. In embodiments, before the label is applied to the substrate, the arm1200may cause the label to be brought proximate the sprayer1160so that the sprayer may spray fluid on an underside of the label to activate the displaceable liner or the displaceable adhesive liner. In other embodiments, the sprayer1160may directly moisten the substrate50before the label is brought into contact therewith by the arm1200. In such cases, the liner may activate when the liner is brought into contact with the wetted substrate50. One or more sensors1280(e.g., LiDAR, infrared, etc.) may be used to detect the presence of the substrate50, and aid in the process of applying a label to the substrate50. That is, the arm1200may move the tamp head1300to the tray1140for collection of a label, and then, using the sensors1280, move the tamp head1300together with the label to cause the tamp head1300to adhere the label to the substrate50. Importantly, printing of the next label may advantageously begin as soon as the tamp head1300removes the preceding label from the tray1140, resulting in valuable time savings. That is, and as will become clear from the disclosure herein, the waiting time requirement of prior art label applicators may be eliminated or otherwise mitigated. The tamp head1300, as seen inFIG.14, may in embodiments comprise filter media1320made of a compressible material which may be charged with a vacuum. For example, the filter media1320may comprise a foam block about two inches thick, which easily allows air to pass through the block. The filter media1320may be attached to the arm1200via the plate1240.
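As a rough, first-order illustration of the PWM flow control of the sprayer1160described above, the valve-open time per cycle may be approximated from a target flow fraction; the linear flow-versus-duty-cycle assumption is a simplification and the sketch does not describe the actual Pulsajet® interface.

```python
# First-order PWM flow sketch; assumes flow scales linearly with duty
# cycle, which real nozzles only approximate.
CYCLES_PER_MINUTE = 10_000                  # example rate from the text
CYCLE_PERIOD_S = 60.0 / CYCLES_PER_MINUTE   # 6 ms per cycle

def on_time_per_cycle(flow_fraction: float) -> float:
    """Valve-open time per cycle for a desired fraction of maximum flow."""
    duty = min(max(flow_fraction, 0.0), 1.0)  # clamp to a valid duty cycle
    return duty * CYCLE_PERIOD_S

# e.g., 30% of maximum flow keeps the valve open ~1.8 ms of each 6 ms cycle
print(f"{on_time_per_cycle(0.30) * 1000:.1f} ms on-time per cycle")
```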
In operation, the vacuum charged filter media1320may be used for the collection of a label from the holding tray1140and for the subsequent application of the label to the substrate50. In embodiments, multiple apparatuses1000may be provided, e.g., in line, to allow for various labels and associated documents to be printed and applied to the substrate50as the substrate50travels to the various apparatuses1000on a conveyer belt. For instance, in embodiments, one labeling apparatus1000may be used to print and apply to the substrate50the label, another downstream label applicator1000may be used to print and apply to the substrate50a packing list (which may, e.g., be secured above the label), yet another downstream apparatus1000may be used to adhere a coupon above the packing list, etc. FIG.15is a flowchart depicting a method1400of printing labels and applying these labels to substrates, in an example embodiment. First, at step1420, a label100may be printed using the printer1120and deposited within the holding tray1140. For example, as discussed above, the label100may be held within the holding tray1140using a laterally extending edge1140A, a vacuum, et cetera. FIG.11shows the automated arm1200in its initial or original position. At step1440, once the label100is printed and held in the holding tray1140as shown inFIGS.11-12, the automated arm1200and/or a portion thereof may extend (e.g., horizontally) to an intermediate position. For example, the telescoping plunger1220may telescope and/or otherwise extend in the horizontal plane to a first position such that the tamp head1300contacts the label100being held in the holding tray1140(seeFIG.16) for collection. Alternately, the plunger1220may be brought proximate the label100in the holding tray1140so that the label adheres to the tamp head1300by virtue of a vacuum. At step1460, once the tamp head1300has collected the label100from the holding tray1140, the rotatable arm1200may rotate (towards the substrate50to another intermediate position) while the label100is secured to the tamp head1300(e.g., via a vacuum), and resultantly, remove the label100from the tray1140. At step1480, as soon as the label100is removed from the tray1140, printing of the next label100N by the printer1120may be initiated. At step1500, fluid may be sprayed onto the label100and/or on the substrate50(which substrate50may be moving on the conveyer belt) by the sprayer1160to allow for the displaceable liner at the underside of the label to be activated. For example, the rotatable arm1200may continue to rotate towards the substrate50while the label100is adhered to the tamp head1300and resultantly bring the label100proximate the sprayer1160(seeFIG.17) in another intermediate position. The sprayer1160may spray fluid F (e.g., water) on the label100to dispel the displaceable liner thereon. Alternately or additionally, the sprayer1160may spray the fluid F onto the substrate50itself so that the moistened substrate50may dispel the displaceable liner on the label100when the label100is brought in contact therewith, as shown inFIG.17A. At step1520, the rotatable arm1200may continue to rotate towards the substrate50, and eventually, the movable plunger1220may cause the tamp head1300to sandwich the label100between the substrate50and the tamp head1300(seeFIG.18). This position of the arm1200may be referred to as the second position. 
When the underside of the label100contacts the substrate50, the moisture on the underside of the label100and/or on the substrate50may cause the displaceable liner coating disposed on the label100to dissolve into the substrate50to adhere thereto (e.g., by nature of the label100exposed adhesive, by nature of the label100displaceable liner infiltrating the substrate50and drying therein, et cetera). At step1560, once the label100is adhered, the rotatable arm1200may return to its original position (seeFIG.19showing the arm1200returning to its original position). By this time, the printer1120may already have printed the next label100N in its entirety and deposited same into the tray1140. Alternately, the printer1120may have printed at least part of the next label100N. The rotatable arm1200may therefore collect the next label100N from the tray1140, and apply the next label100N to the next substrate50as discussed above. In this way, the waiting time requirement may be eliminated or at least greatly reduced, allowing for a greater number of labels to be printed and applied to substrates in a given time period compared to the prior art, yielding significant cost savings. The artisan would understand that the steps of the method1400need not be carried out in the exact order described, that some steps may occur simultaneously with other steps, that some steps may be optional, and that each of these combinations of carrying out the method1400is within the scope of the present disclosure. For example, spraying of the fluid F by the sprayer1160at step1500may be unnecessary where a traditional paper liner is being used as opposed to a displaceable liner. FIG.20is a functional block diagram of the computing system1600which may be used to implement the various labeling apparatus embodiments according to the different aspects of the present disclosure. The computing system1600may be, for example, a smartphone, a laptop computer, a desktop computer, a flexible circuit board, or other computing device whether now known or subsequently developed. The computing system1600comprises a processor1620, the memory1640, a communication module1660, and a dataport1680. These components may be communicatively coupled together by an interconnect bus1690. The processor1620may include any processor used in smartphones and/or other computing devices, including an analog processor (e.g., a Nano carbon-based processor). In certain embodiments, the processor1620may include one or more other processors, such as one or more microprocessors, and/or one or more supplementary co-processors, such as math co-processors. The memory1640may include both operating memory, such as random access memory (RAM), and data storage, such as read-only memory (ROM), hard drives, optical memory, flash memory, or any other suitable memory/storage element. The memory1640may include removable memory elements, such as a CompactFlash card, a MultiMediaCard (MMC), and/or a Secure Digital (SD) card. In certain embodiments, the memory1640includes a combination of magnetic, optical, and/or semiconductor memory, and may include, for example, RAM, ROM, flash drive, and/or a hard disk or drive. The processor1620and the memory1640each may be located entirely within a single device, or may be connected to each other by a communication medium, such as a USB port, a serial port cable, a coaxial cable, an Ethernet-type cable, a telephone line, a radio frequency transceiver, or other similar wireless or wired medium or combination of the foregoing.
For example, the processor1620may be connected to the memory1640via the dataport1680. The communication module1660may be configured to handle communication links between the computing system1600and other external devices or receivers, and to route incoming/outgoing data appropriately. For example, inbound data from the dataport1680may be routed through the communication module1660before being directed to the processor1620, and outbound data from the processor1620may be routed through the communication module1660before being directed to the dataport1680. The communication module1660may include one or more transceiver modules configured for transmitting and receiving data using, for example, one or more protocols and/or technologies, such as GSM, UMTS (3GSM), IS-95 (CDMA one), IS-2000 (CDMA 2000), LTE, FDMA, TDMA, W-CDMA, CDMA, OFDMA, Wi-Fi, WiMAX, 5G, or any other protocol and/or technology. The dataport1680may be any type of connector used for physically interfacing with a smartphone, computer, and/or other devices, such as a mini-USB port or an IPHONE®/IPOD® 30-pin connector or LIGHTNING® connector. In other embodiments, the dataport1680may include multiple communication channels for simultaneous communication with, for example, other processors, servers, and/or client terminals. The memory1640may store instructions for communicating with other systems, such as a computer. The memory1640may store, for example, a program (e.g., computer program code) adapted to direct the processor1620in accordance with the embodiments described herein. The instructions also may include program elements, such as an operating system. While execution of sequences of instructions in the program causes the processor1620to perform the process steps described herein, hard-wired circuitry may be used in place of, or in combination with, software/firmware instructions for implementation of the processes of the present embodiments. Thus, unless expressly noted, the present embodiments are not limited to any specific combination of hardware and software. In embodiments, the memory1640includes software1610. The software1610may contain machine-readable instructions configured to be executed by the processor1620. The software1610may, for example, process data obtained from the sensor1280. In embodiments, the software1610may cause the computing system1600to dynamically respond to a reading obtained by the sensor1280. For example, the software1610may direct the automated arm1200to collect a label in response to a sensor1280determination that the label has been deposited in the holding tray1140. As another example, the software1610may direct the automated arm1200to bring the label into contact with the substrate50in response to a sensor1280determination that the substrate50is ready to receive the label (i.e., the substrate50is within reach of the automated arm1200). The computing system1600may be in data communication with a remote storage70over a network60. The network60may be a wired network, a wireless network, or comprise elements of both. In embodiments, the network60may communicatively link one or more components of the labeling apparatus1000. For example, the sensor1280may be communicatively linked to the computing system1600via the network60for the exchange of information therebetween. The remote storage70may be, for example, the "cloud" or other remote storage in communication with other computing systems.
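A minimal sketch of the dynamic responses attributed to the software1610might look as follows; the event strings and the arm interface are invented here for illustration and are not part of the disclosure.

```python
# Hypothetical sensor-to-action dispatch for software 1610; the event
# names and the arm object's methods are assumptions.
def dispatch(reading: str, arm) -> None:
    if reading == "label_in_tray":
        arm.collect_label()        # a label was deposited in holding tray 1140
    elif reading == "substrate_in_reach":
        arm.apply_label()          # substrate 50 is within reach of arm 1200
    # readings and the responses taken could also be forwarded over the
    # network 60 to the remote storage 70, as discussed next
```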
In embodiments, data (e.g., readings obtained by the sensor1280and the dynamic responses of the computing system1600thereto) may be stored in the remote storage70for analytics. As noted, one advantage of the labeling system1000may be that it may allow a printer to continuously print labels while the tamp head is moving between the printer and a desired surface for label application. Conversely, printers on existing label applicator systems may only be able to print off the next label for application once the tamp head has returned to the printer. Because the next label in an automatic label applicator system1000may be ready and waiting for pick up by the tamp head as soon as the tamp head completes its cycle, there may be a significant reduction in the time it takes to apply a large number (e.g., thousands) of labels, relative to existing label application systems. While example labels (e.g., shipping labels) are used to illustrate the workings of the system1000, the artisan will understand that the automatic label applicator system1000disclosed herein may be adapted to other similar label application functions, and that such adaptations are within the scope of the present disclosure. Examples of other similar label application functions may include pharmaceutical packaging, food and beverage packaging, parts labeling, etc. The artisan will understand that the labeling system1000disclosed herein may include or have associated therewith electronics (e.g., the computing system1600, the sensors1280, etc.). The electronics may be used to control and modify the operation of the labeling system (e.g., to change the timing of the system1000, to turn the system1000on and off, to dynamically control the system1000in response to a sensor1280detection, et cetera). In some example embodiments, the processor or processors may be configured through particularly configured hardware, such as an application specific integrated circuit (ASIC), field-programmable gate array (FPGA), etc., and/or through execution of software to allow the labeling system1000to function in accordance with the disclosure herein. Many different arrangements of the various components depicted, as well as components not shown, are possible without departing from the spirit and scope of the present disclosure. Embodiments of the present disclosure have been described with the intent to be illustrative rather than restrictive. Alternative embodiments will become apparent to those skilled in the art that do not depart from its scope. A skilled artisan may develop alternative means of implementing the aforementioned improvements without departing from the scope of the present disclosure. It will be understood that certain features and subcombinations are of utility and may be employed without reference to other features and subcombinations and are contemplated within the scope of the claims. Not all steps listed in the various figures need be completed in the specific order described. | 113,835 |
11862046 | DETAILED DESCRIPTION Aspects of the present disclosure relate to labels for growing plants and methods of making and using them. Although the illustrated embodiments are primarily horticultural, the labels and methods disclosed herein will find applicability anywhere growing plants are used or sold, including both horticultural and agricultural uses. It will be apparent that a combined fixable identifier portion and a detachable tag portion will reduce time, inventory, and costs in application to plant containers, reduce waste, and aid end users in growing and managing plants. Printing and applying the labels herein described can be particularly effected in a one-step process using a machine sold under the Label Gator brand by Great Lakes Label, Inc. Looking first toFIGS.1A-1F, several varieties of labels10are shown. In each embodiment, the label comprises a fixable portion12and a detachable portion14. The fixable portion12is fixable in the sense that it is configured to be affixed permanently to a plant container or pot. The fixable portion12will typically have identifier information related to the plant growing in the pot to which it is affixed, including among other things, at least one of a name16of the plant, a photo18of the plant, and/or a bar code20for tracking the potted plant through sale to an end user. The fixable or identifier portion12may also include other information22related to the grower or source of the plant, the retail seller, and size of the container or the plant. The detachable or tag portion14will typically have growing information24related to the care and growing of the plant in the container. Such plant information24typically includes data about planting depth, spacing, watering, feeding, light requirements, appropriate temperature zones, harvesting, and the like. The identifier portion12and the tag portion14will be separable from each other along a detach line26. The location of the detach line26will vary depending on the shapes and relative positioning of the identifier portion12and the tag portion14. Various relative positions can be seen inFIGS.1A-1F. InFIG.1A, the identifier portion12is to the left and the tag portion14is to the right. InFIG.1B, the identifier portion12is to the right and the tag portion14is to the left. InFIG.1C, the identifier portion12is to the bottom and the tag portion14is to the top. InFIG.1D, the identifier portion12is to the top and the tag portion14is to the bottom. InFIG.1E, the identifier portion12is to the right and left and the tag portion14is between the right and left sides of the identifier portion12. InFIG.1F, the identifier portion12is to the top and bottom and the tag portion14is between the top and bottom sides of the identifier portion12. It will be understood that any arrangement of the relative positions of the identifier portion12and the tag portion14is within the scope of this disclosure so long as the identifier portion12and the tag portion14are connected but separable from each other. The shapes of the identifier portion12and the tag portion14are likewise not limited to those disclosed herein. It is anticipated that a common shape for the identifier portion12will generally be a quadrilateral, and that a common shape for the tag portion14will be a rectangle with a wedge-shaped tab28at one end, as shown also inFIG.4. Any portion of the label10can be partially printed as, for example inFIG.2, or the label10can be made without printing, for later printing by an end user or intermediary.
If partially printed as inFIG.2, indicia may be printed, for example, on all or part of the fixable portion12, leaving an unprinted portion15thereof, and leaving the tag portion14unprinted. It will be understood that either the fixable portion or the tag portion, or both, may be fully printed, partially printed, or completely unprinted at the time of manufacture. Looking now atFIG.3, one can see an exemplary layered structure of the label10. A principal layer includes a substrate30which will most commonly be a plastic such as polypropylene or a paper, such as Polyart. The substrate may have a thickness in a range of about 4-12 mils, preferably about 10 mils. Indicia32can be printed on an upper surface34of the substrate30, and in some cases a protective layer35such as a thermal transfer varnish may be applied to all or part of the upper surface34either before or after printing the indicia32. A layer of permanent adhesive36is disposed on a lower surface38of the substrate, preferably covering the entire lower surface38. The permanent adhesive layer36may comprise an acrylic or a rubber depending on the material of the container to which the label10is to be attached, and will preferably be pressure sensitive, i.e., able to adhere the substrate layer30to a surface by pressing the substrate30(and the adhesive layer36) to the surface. The adhesive layer36will typically be in a range of about 0.75-2 mils thick. A release liner40covers the permanent adhesive layer36to enable the label10to be handled and transported without interference from the adhesive36. It is contemplated that the release liner40can be in the form of a web sheet on which an array of multiple labels10can be disposed, or in the form of a roll on which an array of labels10can be linearly disposed, or in a form contiguous with the shape of the label10for individual handling. An adhesive deadener42is applied to the adhesive layer36at a predetermined portion44of the label10, preferably at the tag portion14. The adhesive deadener42may be any compound or coating or process that effectively neutralizes the adhesive layer36at the predetermined portion44. Preferably, the adhesive deadener42will be a coating that enables printing of indicia32on the lower surface38of the substrate30or on the adhesive deadener42itself. As shown inFIG.4, the predetermined portion44will cover all or most of the tag portion14so it will be free from a container to which the label10is adhered, preferably leaving a small strip45of permanent adhesive36undeadened adjacent the detach line26so it can lightly adhere to a container. It is anticipated that at least the tip46of the tab28will have a full deadener42so it will not adhere to a container. The detach line26can be any structure that permits the tag portion14to be separated or detached from the identifier portion12. For example, the detach line26may be scored or perforated or slotted. As well, the tab28may be disconnected completely from the identifier portion12to enable easier grasping of the tab28to aid in detaching the tag portion14from the identifier portion12. Hence, all or a portion of the detach line26may connect the tag portion14to the identifier portion12. A method of making the label10, and more particularly, a method of making a sheet of labels10, is schematically described. The method commences at a first step with providing a sheet comprising the substrate layer30, the permanent adhesive layer36, and the release liner40. The sheet may be in the form of a roll.
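As a hedged illustration only, the layered construction described above may be summarized in a small data model; the structure, field names, and validation below are invented here, while the thickness ranges echo the approximately 4-12 mil substrate and 0.75-2 mil adhesive figures given above.

```python
# Illustrative data model of the label 10 layer stack; not from the patent.
from dataclasses import dataclass

@dataclass
class LabelStack:
    substrate_mil: float = 10.0    # polypropylene or Polyart, ~10 mil preferred
    adhesive_mil: float = 1.0      # pressure-sensitive acrylic or rubber
    deadened_tag: bool = True      # deadener 42 neutralizes adhesive at tag 14
    release_liner: bool = True     # liner 40 covers adhesive 36 until use

    def validate(self) -> None:
        assert 4.0 <= self.substrate_mil <= 12.0, "substrate outside 4-12 mil"
        assert 0.75 <= self.adhesive_mil <= 2.0, "adhesive outside 0.75-2 mil"

LabelStack().validate()            # the preferred stack falls within range
```

With this stack in mind, the method continues from the provided sheet as follows.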
At a second step, the release liner40is delaminated from the adhesive layer36, preferably as the roll is unwound. When the adhesive layer becomes exposed, the adhesive deadener42is applied to the predetermined portion44at a third step. At this point in the method, several optional steps are available. At an optional fourth step, indicia32can be printed on the lower surface38of the substrate30and/or on the adhesive deadener42at the tag portion14. Recall that the tag portion14is removable from the label10and it may be desirable to have indicia on both sides of the tag portion14for viewing after removal. At a fifth step, indicia32can be printed on the upper surface34of the substrate30, either on the identifier portion12or the tag portion14or on both the identifier portion12and the tag portion14. It will be understood that printing can occur at one or more stations, in one step or multiple steps, in one color or multiple colors, as needed. In the embodiments illustrated herein, one can see that color indicia32, including a graphic of a plant referred to in the label10, is printed on the upper surface34of the substrate30at the identifier portion12, indicia32in black is printed on the upper surface34of the substrate30at the tag portion14, and indicia32in black is printed on the lower surface38of the substrate30at the tag portion14. It is further contemplated, by the line from the fifth step to a sixth step, that one option is for no printing to occur. At the sixth step, the release liner40is relaminated to the adhesive layer36. The release liner40will not adhere to the predetermined portions44where adhesive deadener42is applied, but there will be adequate exposed adhesive to enable relamination. It will be understood that the relamination sixth step can occur after or coincident with any one or more of the printing steps, such as the fourth step or the fifth step, and it may occur before the printing step, such as before the fifth step. At a seventh step, the protective layer35is optionally applied to the upper surface34of the substrate30. The protective layer35may be a separate sheet or film, or a spray coating, and may be in the form of a thermal transfer varnish that enables all or a portion of the printing step, such as the fourth step after applying the protective layer35in the seventh step. At an eighth step, the substrate layer30is cut to define each label10, including the combined identifier portion12and tag portion14of each label10, on the sheet. The cutting may occur by a rotary die as the sheet is passed through the die, where a cutting edge of the die slices the substrate30and preferably the adhesive layer36, but not the release liner40. Alternatively, the cutting may occur by a stamping die that sequentially stamps the substrate30but not the release liner40to cut one or more labels10on the sheet. At a ninth step, the substrate layer30is scored at the detach line26so the tag portion14remains attached to the identifier portion12. Steps such as the eighth step and the ninth step may be combined into a single operation, as for example, where a rotary die includes both a cutting edge and a scoring edge. After cutting at the eighth step and/or the ninth step, the release liner40carrying the cut labels10may be rerolled for later use as desired. At a tenth step, the labels10are removed from the release liner40for application to containers.
Alternatively, the matrix of substrate 30 surrounding each label 10 on the sheet can be removed from the release liner 40, leaving only each label 10 on the sheet of the release liner 40. At this point, the release liner 40 carrying the labels 10 can also be rerolled for storage or transport. Removal of the labels 10 from the release liner 40 (or removal of the matrix from the release liner 40) in the tenth step can occur simultaneously with application of the labels 10 to containers. For example, in automated operation, the sheet can be fed to a line of containers where each label 10 is detached from the release liner and applied to a container as each container in the line passes the sheet sequentially, whereupon the release liner 40 and the matrix of substrate 30 left behind by removal of the labels 10 are disposed of as waste. As well, each label 10 can be removed manually from the sheet and applied manually to a container. FIG. 5A illustrates a roll of labels 10 on a sheet after the ninth step where the labels are fully printed at the fourth step and/or the fifth step, and the release liner 40 carrying the cut labels 10 has been rerolled for later use, such as, for example, in the tenth step. FIG. 5B illustrates a roll of labels 10 on a sheet after the ninth step where the labels are partially printed in the fourth step and/or the fifth step, and the release liner 40 carrying the cut labels 10 has been rerolled for later use, such as further printing in the fourth step and/or the fifth step, and/or removal in the tenth step. FIG. 6 illustrates printing indicia 32 in the fourth step and/or the fifth step after a roll of labels 10 has been partially printed and rerolled after the ninth step. Such printing may be done by or at the request of a grower with indicia unique to that grower. It will be understood that precut labels 10 with no indicia 32 can be made available for subsequent printing after the ninth step. FIG. 7 illustrates printing indicia 32 in the fourth step and/or the fifth step combined with the tenth step. FIG. 8 illustrates automated application of labels 10 from a sheet of release liner 40 in the tenth step with no printing. FIG. 9 illustrates manual application of a label 10 to a container after removal of the label 10 from the sheet in the tenth step. A major benefit of the label 10 as disclosed herein can be seen in FIG. 10, where a user, having obtained a container with a growing plant and with a label 10 affixed to the container, can remove the tag portion 14 from the container and the label 10. Because the tab 28 is not adhered to the container due to the adhesive deadener 42 coating the adhesive layer 36 beneath the tab, the user can grasp the tip 46 of the tab 28 and pull. The pulling will cause the tag portion 14 to detach from the identifier portion 12 along the scored detach line 26. And because the tag portion 14 is adhered to the container only at the small strip 45, the pulling is enough to overcome the minimal adherence of the small strip 45, causing the tag portion 14 to release from the container. To the extent not already described, the different features and structures of the various embodiments of the present disclosure may be used in combination with each other as desired. For example, one or more of the features illustrated and/or described with respect to one aspect can be used with or combined with one or more features illustrated and/or described with respect to the other aspects described herein. That one feature may not be illustrated in all of the embodiments is not meant to be construed to mean that it cannot be, but is done for brevity of description.
Thus, the various features of the different embodiments may be mixed and matched as desired to form new embodiments, whether or not the new embodiments are expressly described. While aspects of the present disclosure have been specifically described in connection with certain specific embodiments thereof, it is to be understood that this is by way of illustration and not of limitation. Reasonable variation and modification are possible within the scope of the foregoing disclosure and drawings without departing from the spirit of the present disclosure, which is defined in the appended claims. | 14,599 |
11862047 | DETAILED DESCRIPTION In this specification, it will also be understood that when one component (or region, layer, portion) is referred to as being ‘on’, ‘connected to’, or ‘coupled to’ another component, it can be directly disposed on, connected to, or coupled to the other component, or an intervening third component may also be present. Like reference numerals refer to like elements throughout. Also, in the drawing figures, the thickness, ratio, and dimensions of components are exaggerated for clarity of illustration. The term “and/or” includes any and all combinations of one or more of the associated listed items. It will be understood that although the terms such as ‘first’ and ‘second’ are used herein to describe various elements, these elements should not be limited by these terms. The terms are only used to distinguish one component from other components. For example, an element referred to as a first element in one embodiment can be referred to as a second element in another embodiment without departing from the scope of the appended claims. Terms in a singular form may include plural forms unless the context indicates otherwise. Also, “under”, “below”, “above”, “upper”, and the like are used for explaining the relative association of components illustrated in the drawings. The terms are a relative concept and are described based on directions expressed in the drawings. “About” or “approximately” as used herein is inclusive of the stated value and means within an acceptable range of deviation for the particular value as determined by one of ordinary skill in the art, considering the measurement in question and the error associated with measurement of the particular quantity (i.e., the limitations of the measurement system). For example, “about” can mean within one or more standard deviations, or within ±30%, 20%, 10%, 5% of the stated value. Unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as generally understood by those skilled in the art. Terms as defined in a commonly used dictionary should be construed as having the same meaning as in the associated technical context, and unless explicitly defined in the description, the terms should not be construed in an idealized or overly formal sense. The meaning of ‘include’ or ‘comprise’ specifies a property, a fixed number, a step, an operation, an element, a component, or a combination thereof, but does not exclude other properties, fixed numbers, steps, operations, elements, components, or combinations thereof. Hereinafter, embodiments of the invention will be described in detail with reference to the accompanying drawings. FIG. 1 is a perspective view illustrating an embodiment of a display device according to the invention. FIG. 2 is a view illustrating a folded state of the display device in FIG. 1. Referring to FIG. 1, a display device DD in an embodiment of the invention may have a quadrangular (e.g., rectangular) shape including long sides each extending in a first direction DR1 and short sides each extending in a second direction DR2 crossing the first direction DR1. However, the invention is not limited thereto. In an embodiment, the display device DD may have various shapes such as a circular shape or a polygonal shape, for example. The display device DD may be a flexible display device. Hereinafter, a direction that crosses a plane defined by the first and second directions DR1 and DR2 in a substantially perpendicular manner is defined as a third direction DR3.
In this specification, an expression “in a plan view” may be defined as a state viewed in the third direction DR3. The display device DD may include a folding area FA and a plurality of non-folding areas NFA1 and NFA2. The non-folding areas NFA1 and NFA2 may include a first non-folding area NFA1 and a second non-folding area NFA2. The folding area FA may be disposed between the first non-folding area NFA1 and the second non-folding area NFA2. The folding area FA, the first non-folding area NFA1, and the second non-folding area NFA2 may be arranged in the second direction DR2. Although one folding area FA and two non-folding areas NFA1 and NFA2 are illustrated, the invention is not limited to the number of each of the folding area FA and the non-folding areas NFA1 and NFA2. In an embodiment, the display device DD may include two or more non-folding areas with a plurality of folding areas disposed therebetween, for example. A top surface of the display device DD may be defined as a display surface DS, and the display surface DS has a plane defined by the first direction DR1 and the second direction DR2. Images IM generated from the display device DD may be provided to a user through the display surface DS. An edge part EG may be disposed around the display surface DS. The edge part EG may not display an image. The edge part EG may surround the display surface DS and define an edge of the display device DD, which is printed in a predetermined color. The display device DD may include a plurality of sensors SN and at least one camera CM. Each of the sensors SN and the camera CM may be disposed adjacent to the edge of the display device DD. Each of the sensors SN and the camera CM may be disposed on the display surface DS adjacent to the edge part EG. Each of the sensors SN and the camera CM may be disposed on the first and second non-folding areas NFA1 and NFA2. In an embodiment, each of the sensors SN may be a proximity sensor, for example. However, the invention is not limited to the kind of the sensors SN. The camera CM may photograph an external image. Referring to FIG. 2, the display device DD may be a folding-type (foldable) display device DD that is folded or unfolded. In an embodiment, the display device DD may be folded such that the folding area FA is bent with respect to a folding axis FX parallel to the first direction DR1, for example. The folding axis FX may be defined as a major axis parallel to the long side of the display device DD. When the display device DD is folded, the first non-folding area NFA1 and the second non-folding area NFA2 may face each other, and the display device DD may be in-folded so that the display surface DS is not exposed to the outside. FIG. 3 is a plan view illustrating the display device in FIG. 1. Referring to FIG. 3, the display device DD may include a display panel DP, a scan driver SDV, a data driver DDV, and an emission driver EDV. The display panel DP may include a first area AA1, a second area AA2, and a bending area BA disposed between the first area AA1 and the second area AA2. The bending area BA may extend in the first direction DR1, and the first area AA1, the bending area BA, and the second area AA2 may be arranged in the second direction DR2. The first area AA1 may include a display area DA and a non-display area NDA disposed around the display area DA. The non-display area NDA may surround the display area DA. The display area DA may display an image, and the non-display area NDA may not display an image.
Each of the second area AA2 and the bending area BA may not display an image. When viewed in the first direction DR1, the first area AA1 may include a first non-folding area NFA1, a second non-folding area NFA2, and a folding area FA disposed between the first non-folding area NFA1 and the second non-folding area NFA2. The display panel DP may include a plurality of pixels PX, a plurality of scan lines SL1 to SLm, a plurality of data lines DL1 to DLn, a plurality of emission lines EL1 to ELm, first and second control lines CSL1 and CSL2, a power line PL, connection lines CNL, and a plurality of pads PD. Here, m and n are natural numbers. The pixels PX may be disposed on the display area DA and connected to the scan lines SL1 to SLm, the data lines DL1 to DLn, and the emission lines EL1 to ELm. The scan driver SDV and the emission driver EDV may be disposed on the non-display area NDA. Each of the scan driver SDV and the emission driver EDV may be disposed on the non-display area NDA adjacent to a respective one of both sides, which are opposite to each other in the first direction DR1, of the first area AA1. The data driver DDV may be disposed on the second area AA2. The data driver DDV may be manufactured in a form of an integrated circuit (“IC”) chip and disposed (e.g., mounted) on the second area AA2. The scan lines SL1 to SLm may each extend in the first direction DR1 and be connected to the scan driver SDV. The data lines DL1 to DLn may each extend in the second direction DR2 and be connected to the data driver DDV through the bending area BA. The emission lines EL1 to ELm may each extend in the first direction DR1 and be connected to the emission driver EDV. The power line PL may extend in the second direction DR2 and be disposed on the non-display area NDA. Although the power line PL may be disposed between the display area DA and the emission driver EDV, the invention is not limited thereto. In an embodiment, the power line PL may be disposed between the display area DA and the scan driver SDV, for example. The power line PL may extend to the second area AA2 through the bending area BA. In a plan view, the power line PL may extend toward a lower end of the second area AA2. The power line PL may receive a driving voltage. The connection lines CNL may each extend in the first direction DR1 and be arranged in the second direction DR2. The connection lines CNL may be connected to the power line PL and the pixels PX. The driving voltage may be applied to the pixels PX through the power line PL and the connection lines CNL, which are connected to each other. The first control line CSL1 may be connected to the scan driver SDV and extend toward the lower end of the second area AA2 through the bending area BA. The second control line CSL2 may be connected to the emission driver EDV and extend toward the lower end of the second area AA2 through the bending area BA. The data driver DDV may be disposed between the first control line CSL1 and the second control line CSL2. When viewed in the plan view, the pads PD may be disposed adjacent to the lower end of the second area AA2. The data driver DDV, the power line PL, the first control line CSL1, and the second control line CSL2 may be connected to the pads PD. The data lines DL1 to DLn may be connected to the corresponding pads PD through the data driver DDV. In an embodiment, the data lines DL1 to DLn may be connected to the data driver DDV, and the data driver DDV may be connected to the pads PD that correspond to the data lines DL1 to DLn, respectively, for example.
Although not shown, a printed circuit board (“PCB”) connected to the pads PD may be provided. A timing controller and a voltage generation part may be disposed on the PCB. The timing controller may be manufactured in a form of an IC chip and disposed (e.g., mounted) on the PCB. The timing controller and the voltage generation part may be connected to the corresponding pads PD through the PCB. The timing controller may control an operation of each of the scan driver SDV, the data driver DDV, and the emission driver EDV. The timing controller may generate a scan control signal, a data control signal, and an emission control signal in response to control signals received from the outside. The voltage generation part may generate the driving voltage. The scan control signal may be provided to the scan driver SDV through the first control line CSL1. The emission control signal may be provided to the emission driver EDV through the second control line CSL2. The data control signal may be provided to the data driver DDV. The timing controller may receive image signals from the outside and convert a data format of the image signals to match the interface specifications of the data driver DDV, thereby providing the converted image signals to the data driver DDV. The scan driver SDV may generate a plurality of scan signals in response to the scan control signal. The scan signals may be applied to the pixels PX through the scan lines SL1 to SLm. The scan signals may be sequentially applied to the pixels PX. The data driver DDV may generate a plurality of data voltages corresponding to the image signals in response to the data control signal. The data voltages may be applied to the pixels PX through the data lines DL1 to DLn. The emission driver EDV may generate a plurality of emission signals in response to the emission control signal. The emission signals may be applied to the pixels PX through the emission lines EL1 to ELm. The pixels PX may receive the data voltages in response to the scan signals. The pixels PX may display an image by emitting light having luminance corresponding to the data voltages in response to the emission signals. The pixels PX may have an emission time that is controlled by the emission signals. Although not shown, as the bending area BA is bent, the second area AA2 may be disposed below the first area AA1. Thus, the data driver DDV may be disposed below the first area AA1 so as not to be recognized from the outside. FIG. 4 is a schematic cross-sectional view illustrating the display device in FIG. 1. Although a cross-section of the display device DD in the first direction DR1 is illustrated in FIG. 4, a cross-section of each of the bending area BA and the second area AA2 is omitted for convenience of description. Referring to FIG. 4, the display device DD may include a display module DM. The display module DM may be a flexible display module. The display device DD may include a folding set for supporting and folding the display module DM. A structure of the folding set will be illustrated in FIG. 7 below. The display module DM may include a first non-folding area NFA1, a folding area FA, and a second non-folding area NFA2, which are arranged in the second direction DR2, like the display device DD. The folding area FA may include a curved part CSP, a first extension part EX1 disposed between the curved part CSP and the first non-folding area NFA1, and a second extension part EX2 disposed between the curved part CSP and the second non-folding area NFA2.
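Before turning to the cross-section of FIG. 4, the signal flow just described can be summarized in a short sketch: the scan driver SDV asserts the scan lines one at a time, the data driver DDV loads data voltages while a row is selected, and the emission driver EDV then gates emission. This is a minimal illustrative model only, not the patent's implementation; the helper names are hypothetical stand-ins for hardware operations:

    # Minimal, illustrative model of the line-by-line driving described above.
    m, n = 4, 3                                # tiny panel: 4 scan lines, 3 data lines
    frame = [[0.1 * (r + c) for c in range(n)] for r in range(m)]  # data voltages

    def select_scan_line(row):                 # scan driver SDV asserts one scan line
        print(f"scan line SL{row + 1} active")

    def set_data_voltage(col, volts):          # data driver DDV drives one data line
        print(f"  data line DL{col + 1} <- {volts:.2f} V")

    def pulse_emission_line(row):              # emission driver EDV gates emission
        print(f"emission line EL{row + 1} pulsed")

    for row in range(m):                       # scan signals are applied sequentially
        select_scan_line(row)
        for col in range(n):
            set_data_voltage(col, frame[row][col])
    for row in range(m):
        pulse_emission_line(row)               # emission time controls luminance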
Each of the first extension part EX1 and the second extension part EX2 may extend from the curved part CSP. The display module DM may include a display panel DP, an anti-reflection layer RPL, a window WIN, a window protection layer WP, a panel protection layer PPL, a printed layer PIT, a support plate SPT, a cushion layer CUL, and a coating layer BCT. The display panel DP in an embodiment of the invention may be a light emitting display panel. In an embodiment, the display panel DP may be an organic light emitting display panel or a quantum dot light emitting display panel, for example. The organic light emitting display panel may include a light emitting layer including an organic light emitting material. The quantum dot light emitting display panel may include a light emitting layer including a quantum dot or a quantum rod. Hereinafter, the display panel DP will be described as the organic light emitting display panel. The display panel DP may be a flexible display panel. The display panel DP may include a first non-folding area NFA1, a folding area FA, and a second non-folding area NFA2, which are arranged in the second direction DR2, like the display module DM. Also, the folding area FA of the display panel DP may include a curved part CSP, a first extension part EX1, and a second extension part EX2, like the display module DM. The display panel DP may include a plurality of pixels for displaying an image. The pixels may include organic light emitting devices. The anti-reflection layer RPL may be disposed on the display panel DP. The anti-reflection layer RPL may be disposed directly on a top surface of the display panel DP. However, the invention is not limited thereto. In an embodiment, the anti-reflection layer RPL may be manufactured as a separate panel and attached to the display panel DP by an adhesive, for example. The anti-reflection layer RPL may be defined as an external light reflection preventing film. The anti-reflection layer RPL may reduce a reflectance of external light incident on the display panel DP from above the display device DD. If the external light traveling toward the display panel DP were reflected by the display panel DP and re-provided to an external user, the user would perceive the external light as in a mirror. In order to prevent the above-described phenomenon, the anti-reflection layer RPL may include a plurality of color filters displaying the same colors as the pixels. The color filters may filter the external light into the same colors as the pixels. In this case, the external light may not be recognized by the user. However, the invention is not limited thereto. In an embodiment, the anti-reflection layer RPL may include a phase retarder and/or a polarizer, for example. The window WIN may be disposed on the anti-reflection layer RPL. The window WIN may protect the display panel DP and the anti-reflection layer RPL from external scratches. The window WIN may have an optically clear property. The window WIN may include glass. In an embodiment, the window WIN may be defined as ultra-thin glass (“UTG”), for example. However, the invention is not limited thereto. In an embodiment, the window WIN may include a synthetic resin film, for example. The window protection layer WP may be disposed on the window WIN. The window protection layer WP may protect the window WIN. In an embodiment, the window protection layer WP may include a flexible plastic material such as polyimide (“PI”) or polyethylene terephthalate (“PET”), for example.
Although not shown, a hard coating layer may be further disposed on the window protection layer WP. Also, an anti-fingerprint layer or an anti-scattering layer, which is defined as a functional layer, may be further disposed on the window protection layer WP. The panel protection layer PPL may be disposed below the display panel DP. The panel protection layer PPL may protect a lower portion of the display panel DP. The panel protection layer PPL may include a flexible plastic material. In an embodiment, the panel protection layer PPL may include PET, for example. The support plate SPT may be disposed below the panel protection layer PPL. The support plate SPT may include a metal material such as stainless steel. Although the support plate SPT may include STS316 as an example, the invention is not limited thereto. In an embodiment, the support plate SPT may include various metal materials, for example. The support plate SPT may support the display panel DP. In an embodiment, the support plate SPT may have a thickness of about 40 micrometers (μm) or less, for example. A heat dissipation performance of the display device DD may be improved by the support plate SPT. The support plate SPT may include a first support plate SPT1 disposed on the first non-folding area NFA1 and a second support plate SPT2 disposed on the second non-folding area NFA2. The support plate SPT may not be disposed on the folding area FA. The cushion layer CUL may be disposed below the support plate SPT. The cushion layer CUL may absorb an external impact applied to a lower portion of the display module DM to protect the display module DM. The cushion layer CUL may include a foam sheet having a predetermined elastic force. In an embodiment, the cushion layer CUL may include a foam, a sponge, polyurethane, or thermoplastic polyurethane, for example. The cushion layer CUL may include a first cushion layer CUL1 disposed below the first support plate SPT1 and a second cushion layer CUL2 disposed below the second support plate SPT2. The cushion layer CUL may not be disposed on the folding area FA. The coating layer BCT may be disposed between the panel protection layer PPL and the support plate SPT. The coating layer BCT may be applied to a top surface of the first support plate SPT1 and a top surface of the second support plate SPT2. The coating layer BCT may include a material having a black color. The coating layer BCT may prevent structures disposed therebelow from being recognized from thereabove. The display device DD may include first to fourth adhesive layers AL1 to AL4. The first adhesive layer AL1 may be disposed between the window protection layer WP and the window WIN. The second adhesive layer AL2 may be disposed between the window WIN and the anti-reflection layer RPL. The third adhesive layer AL3 may be disposed between the display panel DP and the panel protection layer PPL. The fourth adhesive layer AL4 may be disposed between the panel protection layer PPL and the support plate SPT. Specifically, the fourth adhesive layer AL4 may be disposed between the panel protection layer PPL and the coating layer BCT. In an embodiment, each of the first to fourth adhesive layers AL1 to AL4 may include a transparent adhesive such as a pressure sensitive adhesive (“PSA”) or an optically clear adhesive (“OCA”). The window protection layer WP and the window WIN may be bonded to each other by the first adhesive layer AL1. The window WIN and the anti-reflection layer RPL may be bonded to each other by the second adhesive layer AL2.
The display panel DP and the panel protection layer PPL may be bonded to each other by the third adhesive layer AL3. The panel protection layer PPL and the support plate SPT may be bonded to each other by the fourth adhesive layer AL4. Specifically, the panel protection layer PPL may be bonded to the coating layer BCT by the fourth adhesive layer AL4. The printed layer PIT may be disposed on a bottom surface of the window protection layer WP. The printed layer PIT may overlap the non-display area NDA in the plan view. The first adhesive layer AL1 may be disposed below the window protection layer WP to cover the printed layer PIT. Although the printed layer PIT may have a black color as an example, the invention is not limited thereto. In an embodiment, the printed layer PIT may have various colors, for example. When viewed in the plan view, the fourth adhesive layer AL4 may overlap the first and second non-folding areas NFA1 and NFA2. Also, in the plan view, the fourth adhesive layer AL4 may overlap the first and second extension parts EX1 and EX2 and may not overlap the curved part CSP. Thus, the first and second support plates SPT1 and SPT2 may be attached to the first and second non-folding areas NFA1 and NFA2 and the first and second extension parts EX1 and EX2 and may not be attached to the curved part CSP. In an embodiment, in terms of the third direction DR3, the window WIN may have a thickness greater than about 30 μm and less than about 80 μm, and the window protection layer WP may have a thickness in a range from about 55 μm to about 100 μm, for example. In an embodiment, in terms of the third direction DR3, the support plate SPT may have a thickness in a range from about 80 μm to about 150 μm. In terms of the first direction DR1 and the second direction DR2, the window protection layer WP may have a width greater than that of the window WIN. In terms of the first direction DR1 and the second direction DR2, each of the display panel DP, the anti-reflection layer RPL, and the panel protection layer PPL may have a width greater than that of the window protection layer WP. In terms of the first direction DR1 and the second direction DR2, the display panel DP, the anti-reflection layer RPL, and the panel protection layer PPL may have the same width as each other. In terms of the first direction DR1 and the second direction DR2, the first adhesive layer AL1 may have the same width as the window protection layer WP, and the second adhesive layer AL2 may have a width less than that of the window WIN. Since the window WIN and the second adhesive layer AL2 have different widths from each other, a stepped structure may be provided between the window protection layer WP and the display panel DP. The window protection layer WP may have a thickness sufficient to prevent the stepped structure from being recognized from the outside. In an embodiment, when the window protection layer WP has a thickness in a range from about 55 μm to about 100 μm, the stepped structure may not be recognized from the outside, for example. When viewed in the plan view, the first and second support plates SPT1 and SPT2 and the first and second cushion layers CUL1 and CUL2 may be disposed inward of an edge of the display panel DP. FIG. 5 is a cross-sectional view illustrating the display panel in FIG. 4.
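The dimensional relationships recited above can be restated as simple checks. The Python sketch below is illustrative only; the sample thickness values are arbitrary picks inside the quoted ranges, and the widths are arbitrary units encoding just the stated ordering:

    # Illustrative check of the dimensional relationships stated above.
    # Thicknesses in micrometers; sample values inside the quoted ranges.
    thickness_um = {"WIN": 50, "WP": 80, "SPT": 100}

    assert 30 < thickness_um["WIN"] < 80      # window WIN
    assert 55 <= thickness_um["WP"] <= 100    # window protection layer WP
    assert 80 <= thickness_um["SPT"] <= 150   # support plate SPT

    # Relative in-plane widths (arbitrary units), per the ordering described:
    # DP = RPL = PPL > WP = AL1 > WIN > AL2.
    width = {"DP": 6, "RPL": 6, "PPL": 6, "WP": 5, "AL1": 5, "WIN": 4, "AL2": 3}
    assert width["DP"] == width["RPL"] == width["PPL"] > width["WP"]
    assert width["WP"] == width["AL1"] and width["WIN"] > width["AL2"]
    print("stack dimensions consistent with the described embodiment")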
Referring to FIG. 5, the display panel DP may include a substrate SUB, a circuit device layer DP-CL disposed on the substrate SUB, a display device layer DP-OLED disposed on the circuit device layer DP-CL, a thin-film encapsulation layer TFE disposed on the display device layer DP-OLED, and an input sensing part ISP disposed on the thin-film encapsulation layer TFE. The substrate SUB may include a display area DA and a non-display area NDA disposed around the display area DA. The substrate SUB may include a flexible plastic material. In an embodiment, the substrate SUB may include PI, for example. The display device layer DP-OLED may be disposed on the display area DA. The circuit device layer DP-CL may include an insulation layer, a semiconductor pattern, a conductive pattern, and a signal line. Each of an insulation layer, a semiconductor layer, and a conductive layer may be provided on the substrate SUB through a method such as coating or deposition. Thereafter, the insulation layer, the semiconductor layer, and the conductive layer may be selectively patterned through a plurality of photolithography processes to provide the semiconductor pattern, the conductive pattern, and the signal line. The circuit device layer DP-CL may include a transistor constituted by the semiconductor pattern, the conductive pattern, and the signal line. The display device layer DP-OLED may include light emitting devices connected to the transistors. The pixels PX may include the transistors and the light emitting devices. The thin-film encapsulation layer TFE may be disposed on the circuit device layer DP-CL to cover the display device layer DP-OLED. The thin-film encapsulation layer TFE may include an inorganic layer, an organic layer, and an inorganic layer, which are sequentially laminated with each other. The inorganic layers may include an inorganic material to protect the pixels PX from moisture/oxygen. The organic layer may include an organic material to protect the pixels PX from foreign substances such as dust particles. The input sensing part ISP may include a plurality of sensors (not shown) for sensing an external input. The sensors may sense the external input by a capacitive method. The external input may include various types of inputs such as a portion of a user's body, light, heat, a pen, or pressure. The input sensing part ISP may be manufactured directly on the thin-film encapsulation layer TFE when the display panel DP is manufactured. However, the invention is not limited thereto. In an embodiment, the input sensing part ISP may be manufactured as a panel separate from the display panel DP and then attached to the display panel DP by an adhesive layer, for example. FIG. 6 is a plan view illustrating the display device of FIG. 1 in more detail. FIG. 7 is an exploded perspective view illustrating the display device in FIG. 6. Referring to FIGS. 6 and 7, the display device DD may include a display module DM, a bezel cover BZC disposed around the display module DM, and a folding set FST disposed below the display module DM and the bezel cover BZC. The bezel cover BZC may be disposed around the first and second non-folding areas NFA1 and NFA2 of the display module DM. The bezel cover BZC may surround the first and second non-folding areas NFA1 and NFA2 of the display module DM. Although the bezel cover BZC may have a black color, the invention is not limited to the color of the bezel cover BZC. The edge part EG of the display device DD in FIG. 1 may include the bezel cover BZC.
The folding set FST may be disposed below the display module DM and the bezel cover BZC to support the display module DM and the bezel cover BZC. The folding set FST may be folded with respect to biaxial folding axes, which are parallel to the first direction DR1 and overlap the folding area FA in the plan view, to fold the display module DM. The above-described configuration will be described below in detail. Although not shown in FIG. 7, the display module DM and the bezel cover BZC may be attached to the folding set FST by an adhesive. FIG. 8 is a plan view illustrating the folding set in FIG. 7. Referring to FIG. 8, the folding set FST may include a first body BD1, a second body BD2, a hinge module HGM, a first wing plate WPT1, and a second wing plate WPT2. The first body BD1 and the second body BD2 may be arranged in the second direction DR2. Each of the first body BD1 and the second body BD2 may have a flat surface defined by the first and second directions DR1 and DR2. The first body BD1 and the second body BD2 may have shapes that are symmetrical to each other in the second direction DR2. The hinge module HGM may be disposed between the first body BD1 and the second body BD2. The hinge module HGM may be connected to both sides of the first body BD1, which are opposite to each other in the first direction DR1, and both sides of the second body BD2, which are opposite to each other in the first direction DR1. The hinge module HGM may be connected to the first and second bodies BD1 and BD2 and provide biaxial rotation axes RX1 and RX2 to the first and second bodies BD1 and BD2, respectively. The biaxial rotation axes RX1 and RX2 may include a first rotation axis RX1 and a second rotation axis RX2, which are spaced apart from each other in the second direction DR2 and each extend in the first direction DR1. The first and second rotation axes RX1 and RX2 may define the folding axis FX in FIG. 2. The first wing plate WPT1 and the second wing plate WPT2 may be arranged in the second direction DR2 and extend in the first direction DR1. The first wing plate WPT1 and the second wing plate WPT2 may have shapes that are symmetrical to each other in the second direction DR2. Each of the first wing plate WPT1 and the second wing plate WPT2 may have a flat surface defined by the first and second directions DR1 and DR2. The first wing plate WPT1 may be disposed adjacent to the hinge module HGM and connected to the first body BD1. The second wing plate WPT2 may be disposed adjacent to the hinge module HGM and connected to the second body BD2. FIG. 9 is an exploded perspective view illustrating the folding set in FIG. 8. Referring to FIG. 9, a top surface of the first body BD1, which is adjacent to a first side OS1 of the first body BD1, may have a first inclined surface SLP1. The first inclined surface SLP1 may have a height that gradually decreases in a direction toward the first side OS1 of the first body BD1. The first inclined surface SLP1 may be stepped with respect to the top surface of the first body BD1 around the first inclined surface SLP1. A top surface of the second body BD2, which is adjacent to a first side OS2 of the second body BD2, may have a second inclined surface SLP2. The first side OS2 of the second body BD2 may face the first side OS1 of the first body BD1. The second inclined surface SLP2 may have a height that gradually decreases in a direction toward the first side OS2 of the second body BD2.
The second inclined surface SLP2 may be stepped with respect to the top surface of the second body BD2 around the second inclined surface SLP2. The first wing plate WPT1 may be disposed on the first body BD1 and coupled to the first body BD1. The first wing plate WPT1 may be disposed on the first inclined surface SLP1. The first wing plate WPT1 may be rotatably coupled to a portion of the first body BD1, which is adjacent to the first side OS1 of the first body BD1. In an embodiment, the first wing plate WPT1 may be rotatably coupled to an upper side of the first inclined surface SLP1, which is farthest from the first side OS1 of the first body BD1, for example. A plurality of first rotation surfaces RTS1 may be defined at the upper side of the first inclined surface SLP1. The upper side of the first inclined surface SLP1 may be defined as a first boundary BA1 between the first inclined surface SLP1 and the top surface of the first body BD1 around the first inclined surface SLP1. Each of the first rotation surfaces RTS1 may have a recessed shape and be defined in the first body BD1. The first rotation surfaces RTS1 may be arranged in the first direction DR1 along the upper side of the first inclined surface SLP1. The first wing plate WPT1 may include a plurality of first coupling parts CUP1 protruding from a second side of the first wing plate WPT1, which is opposite to its first side facing the second wing plate WPT2. The first coupling parts CUP1 may be arranged in the first direction DR1. The first coupling parts CUP1 may be disposed on the first rotation surfaces RTS1, respectively. The first wing plate WPT1 may rotate with respect to a wing rotation axis that is adjacent to the second side of the first wing plate WPT1 and parallel to the first direction DR1. In an embodiment, the first coupling parts CUP1 may be coupled to the first rotation surfaces RTS1 and rotate with respect to the wing rotation axis, for example. The wing rotation axis will be illustrated in FIGS. 23 and 24 below. The second wing plate WPT2 may be disposed on the second body BD2 and coupled to the second body BD2. The second wing plate WPT2 may be disposed on the second inclined surface SLP2. The second wing plate WPT2 may be rotatably coupled to a portion of the second body BD2, which is adjacent to the first side OS2 of the second body BD2. In an embodiment, the second wing plate WPT2 may be rotatably coupled to an upper side of the second inclined surface SLP2, which is farthest from the first side OS2 of the second body BD2, for example. A plurality of second rotation surfaces RTS2 may be defined at the upper side of the second inclined surface SLP2. The upper side of the second inclined surface SLP2 may be defined as a second boundary BA2 between the second inclined surface SLP2 and the top surface of the second body BD2 around the second inclined surface SLP2. Each of the second rotation surfaces RTS2 may have a recessed shape and be defined in the second body BD2. The second rotation surfaces RTS2 may be arranged in the first direction DR1 along the upper side of the second inclined surface SLP2. The second wing plate WPT2 may include a plurality of second coupling parts CUP2 protruding from a second side of the second wing plate WPT2, which is opposite to its first side facing the first wing plate WPT1. The second coupling parts CUP2 may be arranged in the first direction DR1. The second coupling parts CUP2 may be disposed on the second rotation surfaces RTS2, respectively.
The second wing plate WPT2 may rotate with respect to a wing rotation axis that is adjacent to the second side of the second wing plate WPT2 and parallel to the first direction DR1. In an embodiment, the second coupling parts CUP2 may be coupled to the second rotation surfaces RTS2 and rotate with respect to the wing rotation axis, for example. The hinge module HGM may include a first hinge HIG1, a second hinge HIG2, a central frame CFM, and a hinge cover HGC. The first hinge HIG1 and the second hinge HIG2 may be arranged in the first direction DR1. The first hinge HIG1 and the second hinge HIG2 may have shapes symmetrical to each other in the first direction DR1. The first hinge HIG1 and the second hinge HIG2 may be connected to the first and second bodies BD1 and BD2 and provide the first and second rotation axes RX1 and RX2 to the first and second bodies BD1 and BD2. The first hinge HIG1 may be disposed between the first body BD1 and the second body BD2. The first hinge HIG1 may be connected to the first sides of the first and second bodies BD1 and BD2 among both first and second sides, which are opposite to each other in the first direction DR1, of the first and second bodies BD1 and BD2. The second hinge HIG2 may be disposed between the first body BD1 and the second body BD2. The second hinge HIG2 may be connected to the second sides of the first and second bodies BD1 and BD2 among the both first and second sides, which are opposite to each other in the first direction DR1, of the first and second bodies BD1 and BD2. A plurality of first holes H1 may be defined in each of the first hinge HIG1 and the second hinge HIG2. A plurality of first fastening grooves CG1 may be defined in each of the first hinge HIG1 and the second hinge HIG2. As a plurality of screws (not shown) passes through the first holes H1 and is inserted into the first fastening grooves CG1, the first and second hinges HIG1 and HIG2 may be connected to the first and second bodies BD1 and BD2. The central frame CFM may extend in the first direction DR1 and be disposed between the first hinge HIG1 and the second hinge HIG2. The central frame CFM may be disposed between the first body BD1 and the second body BD2. The central frame CFM may be disposed between the first wing plate WPT1 and the second wing plate WPT2. The hinge cover HGC may be disposed below the first hinge HIG1, the second hinge HIG2, and the central frame CFM. The first hinge HIG1, the second hinge HIG2, and the central frame CFM may be connected to the hinge cover HGC. In an embodiment, a plurality of second holes H2 may be defined in each of the first hinge HIG1, the second hinge HIG2, and the central frame CFM, for example. A plurality of second fastening grooves CG2 may be defined in the hinge cover HGC. As a plurality of screws (not shown) passes through the second holes H2 and is inserted into the second fastening grooves CG2, the first hinge HIG1, the second hinge HIG2, and the central frame CFM may be connected to the hinge cover HGC. First grooves GV1 may be defined in upper portions of both sides, which are opposite to each other in the second direction DR2, of the central frame CFM. Each of the first grooves GV1 may extend in the first direction DR1. When the hinge module HGM is connected to the first and second bodies BD1 and BD2, the first side of the first wing plate WPT1 and the first side of the second wing plate WPT2 may be disposed in the first grooves GV1, respectively.
FIG. 10 is an exploded perspective view illustrating the first hinge in FIG. 9. FIG. 11 is a front view illustrating a first frame in FIG. 10 when the first frame is viewed in the first direction. FIG. 12 is an internal transparent perspective view illustrating a second frame in FIG. 10. Hereinafter, since the second hinge HIG2 has the same constitution as the first hinge HIG1, the constitution of the first hinge HIG1 will be described in detail, and a separate description of the second hinge HIG2 will be omitted. Hereinafter, FIG. 9 will be described together as necessary. Referring to FIGS. 9 and 10, the first hinge HIG1 may include a plurality of bracket bodies BBD1 and BBD2, a plurality of rotation pin units RPN1 and RPN2, a plurality of bracket cams BCM1 and BCM2, a first frame FM1, a plurality of gears GR1 and GR2, a plurality of cams CAM1 and CAM2, a plurality of springs SPR1 and SPR2, a second frame FM2, and a plurality of ring units RG. The first gears GR1 among the gears GR1 and GR2, the cams CAM1 and CAM2, and the springs SPR1 and SPR2 may be defined as a torque control part TQC. The bracket bodies BBD1 and BBD2 may be connected to the first and second bodies BD1 and BD2 and the rotation pin units RPN1 and RPN2. The rotation pin units RPN1 and RPN2 may be connected to the first and second bodies BD1 and BD2 through the bracket bodies BBD1 and BBD2. The bracket bodies BBD1 and BBD2 may include a first bracket body BBD1 connected to the first body BD1 and a second bracket body BBD2 connected to the second body BD2. The first bracket body BBD1 and the second bracket body BBD2 may be arranged in the second direction DR2 and have shapes that are symmetrical to each other in the second direction DR2. The first holes H1 may be defined in each of the first and second bracket bodies BBD1 and BBD2. The rotation pin units RPN1 and RPN2 may include a first rotation pin unit RPN1 connected to the first bracket body BBD1 and a second rotation pin unit RPN2 connected to the second bracket body BBD2. The first rotation pin unit RPN1 and the second rotation pin unit RPN2 may be spaced apart from each other in the second direction DR2 and each extend in the first direction DR1. The first rotation pin unit RPN1 and the second rotation pin unit RPN2 may define the first rotation axis RX1 and the second rotation axis RX2, respectively. The first rotation pin unit RPN1 and the second rotation pin unit RPN2 may be connected to a first side of the first bracket body BBD1 and a first side of the second bracket body BBD2, which face each other in the second direction DR2, respectively. The first and second rotation pin units RPN1 and RPN2 may be separately manufactured and connected to the first and second bracket bodies BBD1 and BBD2, respectively. However, the invention is not limited thereto. In an embodiment, the first and second rotation pin units RPN1 and RPN2 may be integrated with the first and second bracket bodies BBD1 and BBD2 and extend from the first and second bracket bodies BBD1 and BBD2, respectively, for example. The first frame FM1, the second frame FM2, and the central frame CFM may be arranged in the first direction DR1. The second frame FM2 may be disposed between the first frame FM1 and the central frame CFM. The first frame FM1 may be disposed between the first and second bracket bodies BBD1 and BBD2 and the second frame FM2. Referring to FIGS. 10 and 11, the first and second rotation pin units RPN1 and RPN2 may be inserted into the first frame FM1 and coupled to the first frame FM1.
In an embodiment, third holes H3 each extending in the first direction DR1 may be defined in a portion of the first frame FM1, which is adjacent to an upper side of the first frame FM1, for example. The first and second rotation pin units RPN1 and RPN2 may be inserted into the third holes H3, respectively, and coupled to the first frame FM1. Each of the gears GR1 and GR2 may extend in the first direction DR1. The gears GR1 and GR2 may include a plurality of first gears GR1 and a plurality of second gears GR2. Although two first gears GR1 and two second gears GR2 are illustrated, the invention is not limited to the number of each of the first and second gears GR1 and GR2. The first gears GR1 may each extend in the first direction DR1 and be engaged with each other to rotate in the second direction DR2. The second gears GR2 may each extend in the first direction DR1 and be spaced apart from each other in the second direction DR2. The first gears GR1 may be disposed between the second gears GR2. The second gears GR2 may be engaged with the first gears GR1 to rotate in the second direction DR2. The first and second gears GR1 and GR2 may rotate with respect to gear rotation shafts (not shown) parallel to the first direction DR1. The first gears GR1 may include a plurality of first protruding parts PT1, which define the gear shapes and are disposed on outer circumferential surfaces of the first gears GR1 adjacent to first sides of the first gears GR1 among both first and second sides, which are opposite to each other in the first direction DR1, of the first gears GR1. As the first protruding parts PT1 of the first gears GR1 move while being engaged with each other, the first gears GR1 may rotate together. First sides of the second gears GR2, among both first and second sides of the second gears GR2 that are opposite to each other in the first direction DR1, may be adjacent to the first sides of the first gears GR1. The second gears GR2 may include a plurality of second protruding parts PT2, which define the gear shapes and are disposed on outer circumferential surfaces of the second gears GR2 adjacent to the second sides of the second gears GR2 among both sides of the second gears GR2. As the second protruding parts PT2 move while being engaged with the first protruding parts PT1, the second gears GR2 may rotate in conjunction with the first gears GR1. As the first gears GR1 are inserted into the cams CAM1 and CAM2 and the springs SPR1 and SPR2, the cams CAM1 and CAM2 and the springs SPR1 and SPR2 may be disposed on the first gears GR1. The second sides of the first gears GR1 may be inserted into the cams CAM1 and CAM2 and the springs SPR1 and SPR2. The cams CAM1 and CAM2 and the springs SPR1 and SPR2 may be disposed between the first protruding parts PT1 and the second sides of the first gears GR1. The first and second gears GR1 and GR2 may have first sides facing the first frame FM1 and second sides facing the second frame FM2. The first and second gears GR1 and GR2 may have the first sides inserted into the first frame FM1 and the second sides inserted into the second frame FM2. A plurality of fourth and fifth holes H4 and H5 each extending in the first direction DR1 may be defined in a portion of the first frame FM1, which is adjacent to a lower side of the first frame FM1. The fourth and fifth holes H4 and H5 may be defined below the third holes H3. The fourth holes H4 may be defined in correspondence to the first gears GR1. The fifth holes H5 may be defined in correspondence to the second gears GR2.
The first gears GR1 may be coupled to the first frame FM1 as the first sides of the first gears GR1 are inserted into the fourth holes H4, respectively. The second gears GR2 may be coupled to the first frame FM1 as the first sides of the second gears GR2 are inserted into the fifth holes H5, respectively. A portion of the first frame FM1 between the third holes H3 and the fourth holes H4 may be defined as a flat portion PP and have a flat plate shape defined by the first and second directions DR1 and DR2. Seated grooves SGV may be defined in upper portions of both sides, which face each other in the first direction DR1, of the second frame FM2. An end of the flat portion PP may be disposed in the seated groove SGV of the second frame FM2, which faces the first frame FM1. An upper portion of a first side of the central frame CFM may be disposed in the seated groove SGV of the second frame FM2, which faces the central frame CFM. Referring to FIGS. 10 and 12, an inner space SPC and a plurality of insertion grooves IGV may be defined in the second frame FM2. The inner space SPC may be defined in correspondence to the first gears GR1. The insertion grooves IGV may be defined in correspondence to the second gears GR2. The second sides of the first gears GR1 may be inserted into the inner space SPC. The second sides of the second gears GR2 may be inserted into the insertion grooves IGV, respectively. Two holes (no reference numerals) may be defined at an end of the inner space SPC, and the second sides of the first gears GR1 may be disposed in the two holes, respectively. The first and second cams CAM1 and CAM2 and the first and second springs SPR1 and SPR2 may be disposed in the inner space SPC so as to be disposed in the second frame FM2. Referring to FIG. 10, the bracket cams BCM1 and BCM2 may include a first bracket cam BCM1 coupled to the first bracket body BBD1 and a second bracket cam BCM2 coupled to the second bracket body BBD2. The first bracket cam BCM1 and the second bracket cam BCM2 may be arranged in the second direction DR2 and have shapes symmetrical to each other in the second direction DR2. Grooves GV may be defined at both sides of the first frame FM1, which are opposite to each other in the second direction DR2. The first and second bracket cams BCM1 and BCM2 may be disposed in the grooves GV. First sides of the first and second bracket cams BCM1 and BCM2, which face each other in the second direction DR2, may be disposed in the grooves GV. The first sides of the second gears GR2 may be inserted into the first sides of the first and second bracket cams BCM1 and BCM2. Thus, the first sides of the first and second bracket cams BCM1 and BCM2 may be coupled to the second gears GR2. The first and second bracket cams BCM1 and BCM2 may be coupled to the second gears GR2 as the first sides of the second gears GR2 are inserted into holes H defined at the first sides of the first and second bracket cams BCM1 and BCM2. The second sides of the first and second bracket cams BCM1 and BCM2 may protrude in the first direction DR1 and be disposed in guide grooves GG defined in the first and second bracket bodies BBD1 and BBD2. Ring units RG may be disposed at the second sides of the first and second bracket cams BCM1 and BCM2, which protrude in the first direction DR1. The guide grooves GG may be defined in first surfaces of the first and second bracket bodies BBD1 and BBD2, which face the first and second bracket cams BCM1 and BCM2. The guide grooves GG may each extend in the second direction DR2.
When the first and second rotation pin units RPN1 and RPN2 rotate, the first and second bracket cams BCM1 and BCM2 may rotate in conjunction with the second gears GR2 to move along the guide grooves GG. This operation will be described below in detail. Second grooves GV2 may be defined in upper portions of both sides of the second frame FM2, which are opposite to each other in the second direction DR2. The second grooves GV2 may each extend in the first direction DR1. When the hinge module HGM is connected to the first and second bodies BD1 and BD2, the first side of the first wing plate WPT1 and the first side of the second wing plate WPT2, which face each other, may be disposed in the second grooves GV2. FIG. 13 is an exploded perspective view illustrating the torque control part in FIG. 10. Hereinafter, FIG. 10 will be described together as necessary. Referring to FIGS. 10 and 13, the torque control part TQC may include the plurality of first gears GR1, the plurality of cams CAM1 and CAM2, and the plurality of springs SPR1 and SPR2. The cams CAM1 and CAM2 may include a first cam CAM1 and a second cam CAM2, which are spaced apart from each other in the first direction DR1. The first cam CAM1 may include a first moving cam MVC1 and a first rotating cam RCM1. The second cam CAM2 may include a second moving cam MVC2 and a second rotating cam RCM2. The springs SPR1 and SPR2 may include a first spring SPR1 and a second spring SPR2, each of which extends in the first direction DR1. The first gears GR1 may be inserted into the first and second moving cams MVC1 and MVC2. The first gears GR1 may be inserted into each of the first and second moving cams MVC1 and MVC2 in common. The first gears GR1 may be inserted into holes (no reference numerals) passing through each of the first and second moving cams MVC1 and MVC2 in the first direction DR1. As the first sides of the first gears GR1 pass through the holes defined in the first and second moving cams MVC1 and MVC2, the first and second moving cams MVC1 and MVC2 may be disposed on outer circumferential surfaces of portions of the first gears GR1. The first gears GR1 may be inserted into the first and second rotating cams RCM1 and RCM2. The first gears GR1 may be inserted into the first and second rotating cams RCM1 and RCM2, respectively, so that the first gears GR1 one-to-one correspond to the first and second rotating cams RCM1 and RCM2. The corresponding first gear GR1 of the first gears GR1 may be inserted into a hole (no reference numerals) passing through each of the first and second rotating cams RCM1 and RCM2 in the first direction DR1. As the first sides of the first gears GR1 pass through the holes defined in the first and second rotating cams RCM1 and RCM2, the first and second rotating cams RCM1 and RCM2 may be disposed on outer circumferential surfaces of portions of the first gears GR1. The first gears GR1 may be inserted into the first and second springs SPR1 and SPR2. The first gears GR1 may be inserted into the first and second springs SPR1 and SPR2, respectively, so that the first gears GR1 one-to-one correspond to the first and second springs SPR1 and SPR2. Each of the first and second moving cams MVC1 and MVC2 may be disposed between a corresponding pair of the rotating cam and the spring among the first and second rotating cams RCM1 and RCM2 and the first and second springs SPR1 and SPR2. The corresponding pair of the rotating cam and the spring may be disposed on the same first gear GR1.
Thus, each of the first and second moving cams MVC1 and MVC2 may be disposed between the pair of the rotating cam and the spring disposed on the corresponding first gear GR1 of the first gears GR1. The first moving cam MVC1 may be disposed between the first rotating cam RCM1 and the first spring SPR1, which are disposed on one first gear GR1. The second moving cam MVC2 may be disposed between the second rotating cam RCM2 and the second spring SPR2, which are disposed on another first gear GR1. One surface of the moving cam and one surface of the rotating cam, which are disposed on the same first gear GR1 and face each other, may include protruding portions. The protruding portions of the one surface of the moving cam and the protruding portions of the one surface of the rotating cam, which are disposed on the same first gear GR1, may be disposed alternately with each other. In an embodiment, one surface of the first moving cam MVC1 and one surface of the first rotating cam RCM1, which are disposed on one first gear GR1 and face each other, may include protruding portions (no reference numerals), for example. The protruding portions of the one surface of the first moving cam MVC1 and the protruding portions of the one surface of the first rotating cam RCM1 may be disposed alternately with each other. One surface of the second moving cam MVC2 and one surface of the second rotating cam RCM2, which are disposed on another first gear GR1 and face each other, may include protruding portions (no reference numerals). The protruding portions of the one surface of the second moving cam MVC2 and the protruding portions of the one surface of the second rotating cam RCM2 may be disposed alternately with each other. FIG. 14 is a view illustrating a state in which the first hinge in FIGS. 9 and 10 is coupled to the first and second bodies. FIG. 15 is a view illustrating components disposed in the first and second frames in FIG. 14. In FIG. 15, the first and second frames FM1 and FM2 are omitted. Hereinafter, FIG. 10 will be described together as necessary. Referring to FIGS. 10, 14, and 15, the first and second bracket bodies BBD1 and BBD2 may be connected to the first and second bodies BD1 and BD2 through screws inserted into the first holes H1. The first and second rotation pin units RPN1 and RPN2 may be inserted into the first frame FM1 and rotatably coupled to the first frame FM1. The first rotation pin unit RPN1 may define the first rotation axis RX1, and the second rotation pin unit RPN2 may define the second rotation axis RX2. The first and second bracket cams BCM1 and BCM2 may be coupled to the first and second bracket bodies BBD1 and BBD2, respectively. As the first and second bracket cams BCM1 and BCM2 are disposed on the first frame FM1, and the second gears GR2 are inserted into the first and second bracket cams BCM1 and BCM2, the first and second bracket cams BCM1 and BCM2 may be coupled to the first frame FM1. The first and second bracket cams BCM1 and BCM2 may be coupled to the second gears GR2 to rotate in conjunction with the second gears GR2. An end of the flat portion PP is disposed in the seated groove SGV defined at a first side of the second frame FM2, and the first and second frames FM1 and FM2 may be connected to each other by fastening units (not shown) such as screws. The first and second gears GR1 and GR2 may be inserted into and coupled to the first and second frames FM1 and FM2. The first and second protruding parts PT1 and PT2 may be engaged with each other and coupled so as to rotate together.
The first and second moving cams MVC1 and MVC2, the first and second rotating cams RCM1 and RCM2, and the first and second springs SPR1 and SPR2 may be coupled to the first gears GR1 and disposed in the second frame FM2. The first and second rotating cams RCM1 and RCM2 may be coupled to the first gears GR1 so as to rotate in conjunction with the first gears GR1.

A first side of the first wing plate WPT1 and a first side of the second wing plate WPT2 may be disposed in the first grooves GV1 and the second grooves GV2. The first and second coupling parts CUP1 and CUP2 of the first and second wing plates WPT1 and WPT2 may be rotatably coupled to the first and second rotation surfaces RTS1 and RTS2 defined in the first and second bodies BD1 and BD2.

FIGS. 16A and 16B are views for explaining an operation of the first rotating cam and the first moving cam in FIG. 15. Although only the operation of the first rotating cam RCM1 and the first moving cam MVC1 will be described, the operation of the second rotating cam RCM2 and the second moving cam MVC2 may be the same as that of the first rotating cam RCM1 and the first moving cam MVC1.

Referring to FIG. 16A, first protruding portions PRT1 of the first rotating cam RCM1 may be disposed between second protruding portions PRT2 of the first moving cam MVC1. The state in which the first protruding portions PRT1 are disposed between the second protruding portions PRT2 may be maintained by an elastic force applied by the first spring SPR1. In an embodiment, in FIG. 16A, the display device may be in an unfolded state, for example. As the state in which the first protruding portions PRT1 are disposed between the second protruding portions PRT2 is maintained, the unfolded state of the display device DD may be more easily maintained.

Referring to FIG. 16B, the display device DD may be folded by an external force (e.g., a force of a user). When the first rotating cam RCM1 rotates by the external force, the first protruding portions PRT1 may move in a counterclockwise direction over the protruding top surfaces of the second protruding portions PRT2. When the force of the user is greater than the force maintaining the state in which the first protruding portions PRT1 are disposed between the second protruding portions PRT2, the first protruding portions PRT1 may ride over the top surfaces of the second protruding portions PRT2, and the display device DD may be folded.

By the above-described operation, when the display device DD is unfolded, the unfolded state is easily maintained, and when the user intends to fold the display device DD, the display device DD may be folded by applying a predetermined force to it. For this operation, the torque control part TQC including the first and second cams CAM1 and CAM2 may be provided to the hinge module HGM.

FIG. 17A is a view illustrating an unfolded state of the folding set in FIG. 8. FIG. 17B is a view illustrating a folded state of the folding set in FIG. 17A. Referring to FIGS. 17A and 17B, the folding set FST may be folded by rotating with respect to the first and second rotation axes RX1 and RX2 that are defined by the first and second rotation pin units RPN1 and RPN2, respectively. The display module DM disposed on the folding set FST may be folded as the folding set FST is folded.

FIG. 18A is a cross-sectional view taken along line I-I′ of FIG. 14. FIGS. 18B and 18C are views for explaining a folded state of the folding set in FIG. 18A. Hereinafter, FIG. 14 will be described together as necessary.
Referring to FIGS. 14, 18A, 18B, and 18C, the folding set FST may be folded by rotating with respect to the first and second rotation axes RX1 and RX2 that are defined by the first and second rotation pin units RPN1 and RPN2, respectively. As the first and second rotation pin units RPN1 and RPN2 rotate, the first and second bracket bodies BBD1 and BBD2 may move by rotating with respect to the first and second rotation axes RX1 and RX2. As the first and second bracket bodies BBD1 and BBD2 rotate, the first and second bodies BD1 and BD2 connected to the first and second bracket bodies BBD1 and BBD2 may move by rotating with respect to the first and second rotation axes RX1 and RX2. That is, the first and second rotation pin units RPN1 and RPN2 may provide the first and second rotation axes RX1 and RX2 to the first and second bodies BD1 and BD2, and the first and second bodies BD1 and BD2 may rotate with respect to the first and second rotation axes RX1 and RX2. As the first body BD1 and the second body BD2 are disposed to face each other, the folding set FST may be in-folded.

The first and second gears GR1 and GR2 may be disposed below the first and second rotation pin units RPN1 and RPN2. When the first and second rotation pin units RPN1 and RPN2 rotate, the first and second gears GR1 and GR2 may rotate in conjunction with the first and second rotation pin units RPN1 and RPN2. Specifically, as the first and second bracket bodies BBD1 and BBD2 rotating in conjunction with the first and second rotation pin units RPN1 and RPN2 move, the first and second bracket cams BCM1 and BCM2 coupled to the first and second bracket bodies BBD1 and BBD2 may move. As the first and second bracket cams BCM1 and BCM2 move, the second gears GR2 coupled to the first and second bracket cams BCM1 and BCM2 may rotate. As the second gears GR2 rotate, the first gears GR1 engaged with the second gears GR2 may rotate. That is, as the first and second bracket cams BCM1 and BCM2 rotate and move, the first and second gears GR1 and GR2 may rotate in conjunction with the first and second bracket cams BCM1 and BCM2.

The first and second gears GR1 and GR2 may rotate with respect to gear rotation axes GRX, respectively, which are parallel to the first direction DR1 and defined in central portions of the first and second gears GR1 and GR2 in the first direction DR1.

When the first and second bracket cams BCM1 and BCM2 rotate, one end of each of the first and second bracket cams BCM1 and BCM2 may move along the guide grooves GG defined in the first and second bracket bodies BBD1 and BBD2. When the folding set FST is folded, the first and second bracket cams BCM1 and BCM2 and the first and second bracket bodies BBD1 and BBD2 may move relatively away from each other. As the first and second bracket cams BCM1 and BCM2 move along the guide grooves GG, the first and second bracket bodies BBD1 and BBD2 may move more easily.

FIG. 19A is a cross-sectional view taken along line II-II′ of FIG. 14. FIGS. 19B and 19C are views for explaining a folded state of the folding set in FIG. 19A. In an embodiment, in FIGS. 19A, 19B, and 19C, the display module DM is illustrated together with the folding set FST to explain a folded state of the display module DM, for example.

Referring to FIG. 19A, the display module DM may be disposed on the folding set FST. The first body BD1 may be disposed below the first non-folding area NFA1, and the second body BD2 may be disposed below the second non-folding area NFA2. The first rotation axis RX1 and the second rotation axis RX2 may be disposed below a top surface of the display module DM.
In a plan view, the first and second rotation axes RX1 and RX2 may overlap the folding area FA. A length L of the folding area FA may be defined as its length in the second direction DR2 in an unfolded state of the display module DM.

The central frame CFM may be disposed below the folding area FA. Although not shown, the first and second frames FM1 and FM2, which are arranged with the central frame CFM in the first direction DR1, may also be disposed below the folding area FA.

The first body BD1 may extend below the first extension part EX1 and the curved part CSP, and the second body BD2 may extend below the second extension part EX2 and the curved part CSP. The first body BD1 and the second body BD2 may be adjacent to each other in the second direction DR2 below the curved part CSP.

The top surface of the first body BD1, which faces the first extension part EX1, may be defined as the first inclined surface SLP1. The top surface of the first body BD1 below the first wing plate WPT1 may be provided as the first inclined surface SLP1. The top surface of the second body BD2, which faces the second extension part EX2, may be defined as the second inclined surface SLP2. The top surface of the second body BD2 below the second wing plate WPT2 may be provided as the second inclined surface SLP2. Heights of the first and second inclined surfaces SLP1 and SLP2 may decrease in a direction toward the first sides OS1 and OS2 of the first and second bodies BD1 and BD2.

The first inclined surface SLP1 and the second inclined surface SLP2 may be stepped with respect to the top surfaces of the first and second bodies BD1 and BD2 disposed below the first and second non-folding areas NFA1 and NFA2. A boundary between the first inclined surface SLP1 and the portion of the first body BD1 disposed below the first non-folding area NFA1 may be defined as the first boundary BA1. A boundary between the second inclined surface SLP2 and the portion of the second body BD2 disposed below the second non-folding area NFA2 may be defined as the second boundary BA2.

The first wing plate WPT1 may be disposed between the first extension part EX1 and the first inclined surface SLP1 and may be adjacent to the first boundary BA1. The second wing plate WPT2 may be disposed between the second extension part EX2 and the second inclined surface SLP2 and may be adjacent to the second boundary BA2.

A first side of the first wing plate WPT1 and a first side of the second wing plate WPT2, which face each other, may be disposed on both sides of the central frame CFM. Specifically, the first side of the first wing plate WPT1 and the first side of the second wing plate WPT2 may be disposed in the first grooves GV1 defined at both sides of the central frame CFM. Although not shown in FIG. 19A, the first side of the first wing plate WPT1 and the first side of the second wing plate WPT2 may also be disposed in the second grooves GV2 defined in the second frame FM2.

The display device DD may further include an adhesive layer ADH. The adhesive layer ADH may be disposed between the first non-folding area NFA1 and the first body BD1 and between the second non-folding area NFA2 and the second body BD2. Also, the adhesive layer ADH may be disposed between the first extension part EX1 and the first wing plate WPT1 and between the second extension part EX2 and the second wing plate WPT2. The display module DM may be attached to the first and second bodies BD1 and BD2 and the first and second wing plates WPT1 and WPT2 by the adhesive layer ADH.
Although the adhesive layer ADH may be a double-sided tape as an example, the invention is not limited to the kind of the adhesive layer ADH.

Referring to FIGS. 19B and 19C, as the folding set FST is folded with respect to the first and second rotation axes RX1 and RX2, the display module DM may be folded. As the folding area FA is bent, the display module DM may be folded. The display module DM may be in-folded so that the first non-folding area NFA1 and the second non-folding area NFA2 face each other.

When the display module DM is folded, the curved part CSP may be bent to have a predetermined curvature, that is, a predetermined radius of curvature. In an embodiment, the radius of curvature may be set in a range from about 1.5 millimeters (mm) to about 5.0 mm, and more preferably to about 2.5 mm, for example.

A portion of the display module DM between the first extension part EX1 and the first non-folding area NFA1 may be bent. The first extension part EX1 may be bent from the first non-folding area NFA1 and extend to the curved part CSP. The first extension part EX1, which is attached to the flat first wing plate WPT1, may maintain a flat state. Likewise, a portion of the display module DM between the second extension part EX2 and the second non-folding area NFA2 may be bent. The second extension part EX2 may be bent from the second non-folding area NFA2 and extend to the curved part CSP. The second extension part EX2, which is attached to the flat second wing plate WPT2, may maintain a flat state.

The bent portion of the display module DM between the first extension part EX1 and the first non-folding area NFA1 may be defined as a first reverse curvature part ICV1. The bent portion of the display module DM between the second extension part EX2 and the second non-folding area NFA2 may be defined as a second reverse curvature part ICV2. When the display module DM is folded, the first reverse curvature part ICV1 and the second reverse curvature part ICV2 may be bent in a direction opposite to the curved part CSP.

The adhesive layer ADH may not be disposed on a bottom surface of the curved part CSP or on bottom surfaces of the first and second reverse curvature parts ICV1 and ICV2. Each of these bottom surfaces is a portion of the bottom surface of the display module DM, which is the surface opposite to a front surface (e.g., a display surface) of the display module DM. Since the adhesive layer ADH is not disposed on the curved part CSP, the curved part CSP may be bent more easily. Also, since the adhesive layer ADH is not disposed on the first and second reverse curvature parts ICV1 and ICV2, the first and second reverse curvature parts ICV1 and ICV2 may be bent more easily.

When the display module DM is folded, the first wing plate WPT1 may move toward the first inclined surface SLP1 to contact the first inclined surface SLP1 according to a stress of the folding area FA. Likewise, when the display module DM is folded, the second wing plate WPT2 may move toward the second inclined surface SLP2 to contact the second inclined surface SLP2 according to the stress of the folding area FA.

According to the above-described folding structure, when the display module DM is folded, a gap GP between the first non-folding area NFA1 and the second non-folding area NFA2 may be less than a gap EGP between the first extension part EX1 and the second extension part EX2.
The gap EGP between the first extension part EX1 and the second extension part EX2 may gradually increase in a direction toward the curved part CSP. Thus, when the display module DM is folded, the display module DM may be folded into a dumbbell-like shape.

Referring to FIG. 19C, when the display device DD is folded from the unfolded state such that the first and second bodies BD1 and BD2 rotate by about 90 degrees (°) in clockwise and counterclockwise directions, respectively, the folding area FA may not contact the central frame CFM. In an embodiment, when the display device DD is folded, the curved part CSP of the folding area FA may not contact the central frame CFM, for example.

Referring to FIGS. 18C and 19C, since the first and second gears GR1 and GR2 are disposed below the rotation pin units RPN1 and RPN2, the first and second rotation axes RX1 and RX2 may be disposed above the gear rotation axes GRX. A position of the curved part CSP may vary according to positions of the first and second rotation axes RX1 and RX2. If the first and second rotation axes were instead disposed lower, adjacent to or overlapping the gear rotation axes GRX, the curved part CSP would move further downward and could contact the central frame CFM; in that case, the curved part CSP could be damaged when the display device DD is repeatedly folded and unfolded. However, in an embodiment of the invention, as the first and second rotation axes RX1 and RX2 are disposed higher than the gear rotation axes GRX, the curved part CSP may not contact the central frame CFM when the display device DD is folded. As a result, the curved part CSP may be prevented from being damaged.

FIG. 20 is an enlarged view illustrating the display module in FIG. 19C. Referring to FIG. 20, when the display module DM is folded, the length L (refer to FIG. 19A) of the folding area FA may be defined as a length of a neutral surface NTL of the folding area FA. When the folding area FA is bent, its bottom surface is expanded and its top surface is contracted relative to the flat state. Thus, a tensile stress may be generated in the bottom surface of the folding area FA, and a compressive stress may be generated in the top surface of the folding area FA. Inside the folding area FA, there may exist a portion at which the tensile stress and the compressive stress cancel each other out, so that each of them is about zero. This portion of the folding area FA, in which the stress is about zero, may be defined as the neutral surface NTL.

In the neutral surface NTL, a length from a boundary between the folded folding area FA and the first non-folding area NFA1 to a boundary between the folded folding area FA and the second non-folding area NFA2 may be defined as the length L of the folding area FA. More specifically, the length L of the folding area FA may be defined as a sum of a length of the neutral surface NTL of the curved part CSP and lengths of the portions of the first and second extension parts EX1 and EX2 that correspond to the neutral surface NTL of the curved part CSP. The neutral surface NTL of the curved part CSP may be defined as the portion of the curved part CSP in which the tensile stress and the compressive stress cancel each other out, each being about zero.
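The neutral-surface definition above follows standard bending theory. As a minimal illustrative sketch (the linear-strain model and the symbols y and R below are assumptions of this sketch, not taken from the original description), the in-plane strain at a signed distance y from the neutral surface of a layer bent to a radius of curvature R is approximately

ε(y) = y / R,

with y measured toward the outer (bottom) surface giving tension (ε > 0), y measured toward the inner (top) surface giving compression (ε < 0), and ε(0) = 0 on the neutral surface NTL itself. Because the neutral surface is the one layer whose arc length is unchanged by bending, its arc length is the natural invariant used above to define the length L of the folding area FA.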
FIG. 21 is a view obtained by adding an X-axis and a Y-axis to FIG. 19C. In an embodiment, some reference symbols are omitted in FIG. 21, for example.

Referring to FIG. 21, an X-axis X and a Y-axis Y are defined in the display device DD. The X-axis X may be parallel to the second direction DR2 and may overlap the top surface of the unfolded display module DM. In an embodiment, the X-axis X may overlap the top surface of the display module DM in FIG. 19A, for example. The Y-axis Y may extend from a center of the folding set FST in the third direction DR3. The top surface of the unfolded display module DM may define a plane spanned by the first and second directions DR1 and DR2, and the Y-axis Y may be perpendicular to the top surface of the unfolded display module DM. The first rotation axis RX1 and the second rotation axis RX2 may be disposed at positions symmetrical with respect to the Y-axis Y. When the X-axis X and the Y-axis Y are defined as described above, a coordinate of the second rotation axis RX2 may be determined.

In an embodiment of the invention, the positions of the first rotation axis RX1 and the second rotation axis RX2 may be optimized. Hereinafter, the optimized positions of the first rotation axis RX1 and the second rotation axis RX2 will be explained.

FIG. 22 is a graph showing X and Y coordinates of the first and second rotation axes with respect to the X-axis and the Y-axis in FIG. 21. Referring to FIG. 22, the positions of the first rotation axis RX1 and the second rotation axis RX2 may be varied within a predetermined range. In an embodiment, an X coordinate of the second rotation axis RX2 may be determined based on the mathematical equation below, for example:

(G/2) + T ≤ X ≤ (L/2)   [Mathematical equation 1]

Also, a Y coordinate of the second rotation axis RX2 may be determined based on the mathematical equation below:

Y = −X + (G/2)   [Mathematical equation 2]

In mathematical equations 1 and 2, G denotes the distance between the first non-folding area and the second non-folding area when the display module is folded, T denotes the thickness of the display module measured with respect to the Y-axis Y, and L denotes the length of the folding area FA described above. The X coordinate and the Y coordinate of the second rotation axis RX2 may be determined according to the above equations, and the X coordinate and the Y coordinate of the first rotation axis RX1 may be symmetric to those of the second rotation axis RX2. The X coordinate and the Y coordinate of the second rotation axis RX2 may have integer values.

When the positions of the first and second rotation axes RX1 and RX2 are determined according to the above equations, the display module DM may be normally folded to have a dumbbell shape. Hereinafter, positions of the first and second rotation axes RX1 and RX2 that satisfy the conditions of the above equations are defined as normal positions. When the rotation axes are disposed at positions deviating from the positions determined by the above equations, the display module DM may not be normally folded. Hereinafter, the abnormally folded structure will be described with reference to FIGS. 23 to 27. First and second rotation axes RX1′, RX2′, RX1″, and RX2″ may be defined as rotation axes deviating from the normal positions.

FIG. 23 is a view illustrating the first and second wing plates rotating along the first and second rotation axes disposed at the normal positions. Referring to FIG. 23, the first wing plate WPT1 may rotate with respect to the first rotation axis RX1, and the second wing plate WPT2 may rotate with respect to the second rotation axis RX2.
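Before examining the normal-position configuration of FIG. 23 further, a worked example may make mathematical equations 1 and 2 concrete. The sample values G = 4 mm, T = 1 mm, and L = 20 mm below are hypothetical values chosen only for illustration; they are not taken from the original description:

(G/2) + T = 3 ≤ X ≤ 10 = (L/2), and Y = −X + 2.

At the left boundary, X = 3 and Y = −1; at the right boundary, X = 10 and Y = −8. Since every admissible X exceeds G/2, the Y coordinate given by mathematical equation 2 is always negative under these assumptions, which is consistent with the earlier statement that the first and second rotation axes RX1 and RX2 are disposed below the top surface of the display module DM.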
The second rotation axis RX2 in FIG. 23 may be adjacent to a left boundary of the X coordinate condition of the second rotation axis RX2 in FIG. 22. A distance between one portion of the first wing plate WPT1 and one portion of the second wing plate WPT2 in the second direction DR2 may be defined as a first distance DT1. In an embodiment, FIG. 23 illustrates the first wing plate WPT1 rotating by about 45° with respect to the first rotation axis RX1 and the second wing plate WPT2 rotating by about 45° with respect to the second rotation axis RX2, for example. In this case, the display device DD may be folded by 45°, as illustrated in FIG. 19B.

FIGS. 24 and 25 are views illustrating the first and second wing plates rotating along first and second rotation axes deviating from the normal positions. FIGS. 26 and 27 are views illustrating a state of the display device when the first and second wing plates rotate along the first and second rotation axes deviating from the normal positions in FIGS. 24 and 25. In an embodiment, only the first and second bodies BD1 and BD2, the first and second wing plates WPT1 and WPT2, and the display module DM are illustrated in FIGS. 26 and 27 for simplicity of illustration, for example.

Referring to FIG. 24, a first rotation axis RX1′ and a second rotation axis RX2′ may be disposed closer to the Y-axis Y than the first rotation axis RX1 and the second rotation axis RX2 are to the Y-axis Y. The first wing plate WPT1 may rotate with respect to the first rotation axis RX1′, and the second wing plate WPT2 may rotate with respect to the second rotation axis RX2′. A distance between one portion of the first wing plate WPT1 and one portion of the second wing plate WPT2 in the second direction DR2 may be defined as a second distance DT2. The second distance DT2 may be greater than the first distance DT1. In an embodiment, FIG. 24 illustrates the first wing plate WPT1 rotating by about 45° with respect to the first rotation axis RX1′ and the second wing plate WPT2 rotating by about 45° with respect to the second rotation axis RX2′, for example.

Referring to FIG. 25, a first rotation axis RX1″ and a second rotation axis RX2″ may be disposed farther from the Y-axis Y than the first rotation axis RX1 and the second rotation axis RX2 are from the Y-axis Y. The first wing plate WPT1 may rotate with respect to the first rotation axis RX1″, and the second wing plate WPT2 may rotate with respect to the second rotation axis RX2″. A distance between one portion of the first wing plate WPT1 and one portion of the second wing plate WPT2 in the second direction DR2 may be defined as a third distance DT3. The third distance DT3 may be less than the first distance DT1. In an embodiment, FIG. 25 illustrates the first wing plate WPT1 rotating by about 45° with respect to the first rotation axis RX1″ and the second wing plate WPT2 rotating by about 45° with respect to the second rotation axis RX2″, for example.

Referring to FIGS. 24 and 26, as described above, the second distance DT2 may be greater than the first distance DT1. Thus, when the first and second wing plates WPT1 and WPT2 rotate with respect to the first and second rotation axes RX1′ and RX2′, the first and second wing plates WPT1 and WPT2 may be spaced farther from each other. When the first and second wing plates WPT1 and WPT2 are spaced farther from each other, the folding area FA may be stretched further, and a bending phenomenon of the display module DM may be generated, as illustrated in FIG. 26.

Referring to FIGS. 25 and 27, as described above, the third distance DT3 may be less than the first distance DT1.
Thus, when the first and second wing plates WPT1 and WPT2 rotate further than about 45° with respect to the first and second rotation axes RX1″ and RX2″, the first and second wing plates WPT1 and WPT2 may come closer to each other. When the first and second wing plates WPT1 and WPT2 come closer to each other, the folding area FA may be folded more tightly, and portions of the folding area FA may contact each other, as illustrated in FIG. 27.

Thus, when the display device DD is folded with respect to the first and second rotation axes RX1′ and RX2′ or the first and second rotation axes RX1″ and RX2″, which deviate from the normal range, the display module DM may not be normally folded into the dumbbell shape. When the display device DD is folded with respect to the first and second rotation axes RX1 and RX2, the display module DM may be normally folded into the dumbbell shape. As a result, in an embodiment of the invention, as the first rotation axis RX1 and the second rotation axis RX2 of the hinge module HGM are optimized to fold the display module DM into the dumbbell shape, the display module DM may be more easily folded into the dumbbell shape.

FIG. 28 is an enlarged view illustrating a first area A1 of FIG. 19C. FIG. 29 is a view illustrating an unfolded state of the second reverse curvature part in FIG. 28. FIG. 28 is a view illustrating peripheral components of the second reverse curvature part ICV2 when the display device DD is folded, and FIG. 29 is a view illustrating the peripheral components of the second reverse curvature part ICV2 when the display device DD is unfolded. Although the peripheral components of the second reverse curvature part ICV2 are illustrated in FIGS. 28 and 29, peripheral components of the first reverse curvature part ICV1, which are not illustrated, may be substantially the same as the peripheral components of the second reverse curvature part ICV2.

Referring to FIGS. 28 and 29, each of a top surface of the second body BD2 and a top surface of the second wing plate WPT2, which are adjacent to each other, may have a curved surface. In an embodiment, each of a first top surface US1 of the second wing plate WPT2, which is adjacent to the second boundary BA2, and a second top surface US2 of the second body BD2, which is disposed below the second non-folding area NFA2 and adjacent to the second boundary BA2, may have a curved surface, for example. Although not shown, each of a top surface of the first body BD1 and a top surface of the first wing plate WPT1, which are adjacent to each other, may have a curved surface. In an embodiment, each of a first top surface of the first wing plate WPT1, which is adjacent to the first boundary BA1, and a second top surface of the first body BD1, which is disposed below the first non-folding area NFA1 and adjacent to the first boundary BA1, may have a curved surface, for example.

The second boundary BA2 and a side WOS of the second wing plate WPT2, which is adjacent to the second boundary BA2, may be disposed adjacent to a central portion of the second reverse curvature part ICV2. Although not shown, the first boundary BA1 and a side of the first wing plate WPT1, which is adjacent to the first boundary BA1, may be disposed adjacent to a central portion of the first reverse curvature part ICV1.

As illustrated in FIG. 28, when the display module DM is folded, each of the curved surface of the first top surface US1 and the curved surface of the second top surface US2 may correspond to a bent curved surface of the second reverse curvature part ICV2.
Each of the curved surface of the first top surface US1 and the curved surface of the second top surface US2 may have substantially the same curvature as the curved surface of the second reverse curvature part ICV2, that is, substantially the same curvature as a bottom surface of the second reverse curvature part ICV2. Although not shown, when the display module DM is folded, each of the first top surface of the first wing plate WPT1, which is adjacent to the first boundary BA1, and the second top surface of the first body BD1, which is adjacent to the first boundary BA1, may correspond to a bent curved surface of the first reverse curvature part ICV1. Since the first and second top surfaces US1 and US2 have the curved surfaces, the second reverse curvature part ICV2 may be bent more easily along the first and second top surfaces US1 and US2.

FIG. 30 is a cross-sectional view taken along line III-III′ of FIG. 14. FIG. 31 is a view illustrating an unfolded state of the second reverse curvature part in FIG. 30. FIG. 30 is a view illustrating the peripheral components of the second reverse curvature part ICV2 when the display device DD is unfolded, and FIG. 31 is a view illustrating the peripheral components of the second reverse curvature part ICV2 when the display device DD is folded. In FIGS. 30 and 31, the display module DM is also illustrated, and the peripheral components of the second reverse curvature part ICV2 in FIG. 30 are illustrated simply within a circular dotted line in FIG. 31. Although the peripheral components of the second reverse curvature part ICV2 are illustrated in FIGS. 30 and 31, the peripheral components of the first reverse curvature part ICV1, which are not illustrated, may be substantially the same as the peripheral components of the second reverse curvature part ICV2. Hereinafter, FIGS. 9 and 14 will be described together as necessary.

Referring to FIGS. 9, 14, 30, and 31, the second rotation surface RTS2 defined in the second body BD2 may have a concave curved shape. Although not shown, the first rotation surface RTS1 defined in the first body BD1 may also have a concave curved shape. The second coupling part CUP2 of the second wing plate WPT2 may have a convex curved shape and contact the second rotation surface RTS2. The second coupling part CUP2 may have substantially the same curvature as the second rotation surface RTS2. Although not shown, the first coupling part CUP1 of the first wing plate WPT1 may also have a convex curved shape and contact the first rotation surface RTS1.

A center point of a circle defined by a curved surface of the second coupling part CUP2 may be defined as a wing rotation axis WRX. When the display module DM is folded, the second coupling part CUP2 may move and rotate along a curved surface of the second rotation surface RTS2. That is, when the display module DM is folded, the second coupling part CUP2 may rotate about the wing rotation axis WRX. Although not shown, when the display module DM is folded, the first coupling part CUP1 may likewise rotate about a wing rotation axis WRX adjacent to the first rotation surface RTS1, moving and rotating along a curved surface of the first rotation surface RTS1.

The display module DM and the above-described bezel cover BZC may be disposed on the first and second wing plates WPT1 and WPT2 to fix the first and second wing plates WPT1 and WPT2.
Thus, the first and second coupling parts CUP1 and CUP2 may be easily held on the first and second rotation surfaces RTS1 and RTS2 instead of being separated from the first and second rotation surfaces RTS1 and RTS2. According to the above-described structure, the first and second coupling parts CUP1 and CUP2 may rotate while simply contacting the first and second rotation surfaces RTS1 and RTS2, without using pins for coupling the first and second coupling parts CUP1 and CUP2 to the first and second bodies BD1 and BD2.

According to the embodiment of the invention, as the first rotation axis and the second rotation axis are optimized to fold the display module into the dumbbell shape, the display module may be more easily folded into the dumbbell shape.

Although the embodiments of the invention have been described, it is understood that the invention should not be limited to these embodiments, but various changes and modifications may be made by one of ordinary skill in the art within the spirit and scope of the invention as hereinafter claimed. Thus, to the maximum extent allowed by law, the scope of the invention is to be determined by the broadest permissible interpretation of the following claims and their equivalents, and shall not be restricted or limited by the foregoing detailed description. | 89,069 |
11862048 | DETAILED DESCRIPTION

The features and exemplary embodiments of various aspects of the present disclosure are described in detail hereinafter. In the following detailed description, specific details are provided to facilitate a comprehensive understanding of the present disclosure. However, it is obvious to those skilled in the art that the present disclosure can be implemented without some of these specific details. The following description of embodiments is provided only for a better understanding of the present disclosure by showing examples of it.

It should also be noted that, in the present disclosure, relational terms such as first, second, and the like may be used merely to distinguish one entity or operation from another entity or operation, and may not necessarily require or imply any such actual relationship or order between these entities or operations. Moreover, the terms "include", "contain", or any other variations thereof are intended to cover non-exclusive inclusion, so that a process, method, article, or piece of equipment that includes a series of elements includes not only those elements but also other elements that are not explicitly listed, or also includes elements inherent to the process, method, article, or equipment. Unless further restricted, an element defined by the phrase "include . . . " does not exclude the existence of other identical elements in the process, method, article, or equipment that includes the element.

Referring to FIG. 1, a bendable flexible display apparatus may have two display states: a rolled state, in which a flexible display panel 2 is rolled in a hollow space S1, and an unfolded state, in which the flexible display panel 2 extends out of the hollow space S1. In the existing technology, the unfolding of the flexible display panel 2 is normally achieved through a telescopic structure including a plurality of connecting rods hinged to each other. One end of the telescopic structure is connected to the ending portion of the flexible display panel 2. The plurality of connecting rods can move away from or toward each other by rotating, such that the flexible display panel 2 can be unfolded or retracted.

However, such an arrangement has the following disadvantages. During the extending process of the telescopic structure, that is, when the plurality of connecting rods move away from each other, the size of the telescopic structure increases along the extending direction while its size decreases along the direction perpendicular to the extending direction, which reduces the stability of the telescopic structure. Meanwhile, the telescopic structure may not achieve stable support of the flexible display panel 2, which reduces the reliability of the flexible display apparatus. Therefore, the existing telescopic structure may only realize the rolling and unfolding of the flexible display panel 2 and may not realize stable support of the flexible display panel 2. In order to realize the support of the flexible display panel 2 in the unfolded state, the flexible display apparatus often needs to be provided with a supporting part.
Due to the limited space of the flexible display apparatus, the supporting part needs to be rolled in the hollow space S1 in the rolled state; that is, the supporting part needs to have a certain degree of both flexibility and rigidity. On the one hand, this raises the requirements for manufacturing materials, manufacturing technology, and design, resulting in increased production costs. On the other hand, because the supporting part has rolling characteristics, when the flexible display panel 2 is unfolded, the supporting part may also be bent once it is subjected to an external force. It is therefore difficult to provide desirable support for the flexible display panel 2, such that the flexible display panel 2 may be bent when being unfolded, thereby affecting the display effect of the flexible display panel 2.

In order to solve the above-mentioned problems, embodiments of the present disclosure provide a flexible display apparatus and an electronic device. The flexible display apparatus and the electronic device are described in detail below with reference to the accompanying drawings.

Referring to FIGS. 1-3, FIG. 1 illustrates a structural schematic of a flexible display apparatus 100 according to various embodiments of the present disclosure; FIG. 2 illustrates a cross-sectional view of the flexible display apparatus 100 along an A-A direction in FIG. 1; and FIG. 3 illustrates another cross-sectional view of the flexible display apparatus 100 along the A-A direction in FIG. 1.

Embodiments of the present disclosure provide the flexible display apparatus 100, which may include a main body structure 1 having a hollow space S1, the flexible display panel 2, a supporting structure 3, and a locking part 4. Both the flexible display panel 2 and the supporting structure 3 may be rolled in the hollow space S1. The supporting structure 3 may include a plurality of supporting plates 31, and at least a first degree of rotation freedom may exist between two adjacent supporting plates 31. The supporting structure 3 may have an unfolded state; in the unfolded state, both the supporting structure 3 and the flexible display panel 2 may at least partially extend out of the main body structure 1. The locking part 4 may act on the supporting structure 3 extending out of the main body structure 1 to limit the first degree of rotation freedom of the supporting plates 31, such that the supporting structure 3 may support the flexible display panel 2 flatly.

In other words, in the unfolded state, both the flexible display panel 2 and the supporting structure 3 may at least partially extend out of the main body structure 1, thereby realizing the unfolded display of the flexible display panel 2 under the support of the supporting structure 3; and in order for the supporting structure 3 to be rolled in the hollow space S1, the supporting structure 3 may include the plurality of supporting plates 31 with at least the first degree of rotation freedom between two adjacent supporting plates 31.
The first degree of rotation freedom indicates that two adjacent supporting plates 31 can rotate around the joint between them. In order to avoid relative rotation of the plurality of supporting plates 31 in the unfolded state, the flexible display apparatus 100 may further include the locking part 4, which acts on the supporting structure 3 extending out of the main body structure 1 to limit the first degree of rotation freedom of the supporting plates 31. Therefore, in the rolled state, the supporting plates 31 can rotate relative to each other to be rolled in the hollow space S1, which reduces the space occupied by the supporting structure 3; and in the unfolded state, when the flexible display panel 2 and the supporting structure 3 extend out of the main body structure 1, the locking part 4 limits the relative rotation between adjacent supporting plates 31. In this way, the supporting plates 31 can support the flexible display panel 2 flatly, so that the flexible display panel 2 does not bend when unfolded and always maintains a flat state, thereby improving the display effect of the flexible display apparatus 100.

It should be noted that, for ease of understanding, the drawings provided in embodiments of the present disclosure may not be drawn to actual scale. For example, the proportional relationship between the supporting structure 3, the flexible display panel 2, and the rolling axle may not be an actual proportional relationship and is merely exemplary.

Considering that the telescopic structure in the existing technology can only realize the rolling and unfolding of the flexible display panel 2 and cannot realize its stable support, the flexible display apparatus 100 provided in embodiments of the present disclosure is provided with the supporting structure 3 to realize stable support of the flexible display panel 2 in the unfolded state. Furthermore, in order to satisfy the different requirements placed on the supporting structure 3 by the flexible display panel 2 in the rolled state and the unfolded state, a technical means capable of dynamically adjusting the degree of rotation freedom of the supporting structure 3 is adopted. On the one hand, configuring the supporting structure 3 as a plurality of relatively rotatable supporting plates 31 allows the supporting structure 3 to be rolled in the hollow space S1 in the rolled state; on the other hand, the locking part 4 is disposed such that it is gradually coupled with a locking groove 32 as the supporting structure 3 unfolds. Therefore, the relative rotation of adjacent supporting plates 31 in the unfolded state is limited, so that the supporting structure 3 can stably support the flexible display panel 2 in the unfolded state. Compared with the telescopic structure in the existing technology, the flexible display apparatus provided in embodiments of the present disclosure may therefore realize not only the telescopic function but also the stable support of the flexible display panel 2 in the unfolded state.
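As a hedged side note, the degree-of-freedom behavior described above can be stated with a simple joint count (the symbol N and the open-chain model below are illustrative assumptions, not taken from the original description). Treating the supporting structure as an open kinematic chain of N supporting plates connected by N − 1 revolute (hinge) joints, the chain has

N − 1

internal rotational degrees of freedom. In the rolled state, all N − 1 joints remain free, which is what allows the chain to curl into the hollow space S1; in the unfolded state, the locking part constrains all N − 1 inter-plate joints, so the chain behaves as a single rigid plate capable of flatly supporting the panel.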
In addition, configuring the supporting structure 3 as a plurality of relatively rotatable supporting plates 31, together with a locking part 4 that can dynamically lock their relative rotation, not only satisfies the different requirements of the flexible display panel 2 for the supporting structure 3 in the rolled state and the unfolded state, but also simplifies the structure of the flexible display apparatus and reduces the production cost.

The main body structure 1 may be the housing of the flexible display apparatus 100, accommodating display devices such as the flexible display panel 2 and the like. The flexible display panel 2 may be an organic light-emitting diode (OLED) display panel, a liquid crystal panel, a micro flat display panel (micro-OLED or micro-LED), or the like. In the rolled state, the flexible display panel 2 may be located in the hollow space S1 of the main body structure 1; one part of the flexible display panel 2 may be rolled, and the other part may be flat for display, thereby realizing display of the flexible display apparatus 100 in the rolled state. In the unfolded state, the end of the flat side of the flexible display panel 2 may extend from the main body structure 1 and drive the rolled portion of the flexible display panel 2 to gradually unfold into a flat surface. Therefore, the display area of the flexible display apparatus 100 is increased, and display of the flexible display apparatus 100 in the unfolded state is realized.

It can be understood that, in order to make the flexible display panel 2 in the rolled state unfold into a flat surface, the supporting structure 3 needs to allow the supporting plates 31 to rotate relative to each other in the rolled state, while in the unfolded state the supporting plates 31 must not rotate relative to each other under the limitation of the locking part 4.

Referring to FIGS. 4-5, FIG. 4 illustrates an exploded view of the flexible display panel 2 and the supporting structure 3 according to various embodiments of the present disclosure; and FIG. 5 illustrates a top view of the supporting structure 3 according to various embodiments of the present disclosure.

In order to realize the relative rotation between adjacent supporting plates 31 in the rolled state, one of the two adjacent supporting plates 31 may be provided with a rotating axle 311, the other may be provided with a sleeve 312, and the two adjacent supporting plates 31 may be correspondingly connected through the rotating axle 311 and the sleeve 312. That is, the two adjacent supporting plates 31 may be rotatably connected through the rotating axle 311 and the sleeve 312. As a result, in the rolled state, the plurality of supporting plates 31 may be curled plate by plate to be rolled in the hollow space S1, thereby reducing the space occupied by the supporting structure 3.

Optionally, in order to prevent excessive friction from affecting the rotation of adjacent supporting plates 31, lubricating oil may be coated on the mating surfaces of adjacent supporting plates 31. In addition, in order to prevent the lubricating oil from polluting the flexible display panel 2, the sleeve 312 may be configured as a Teflon sleeve. Furthermore, providing a self-lubricating coating on the outer surface of the rotating axle 311 or the inner surface of the sleeve 312 is another means to avoid contamination while using lubricant.
Referring to FIG. 6, FIG. 6 illustrates a cross-sectional view of the supporting structure 3 along a C-C direction in FIG. 5. Two opposite sides of two adjacent supporting plates 31 are a first surface H1 and a second surface H2, respectively, where the first surface H1 is a side surface of the supporting plate 31 provided with the rotating axle 311, and the second surface H2 is the side surface opposite to the first surface H1. When two adjacent supporting plates 31 rotate relative to each other around the rotating axle 311, the distance between each point on the first surface H1 and the center of the rotating axle 311 is the rotation radius of that point. To ensure that there is no mutual interference between two adjacent supporting plates 31, the rotation radius of each point on the first surface H1 should be less than the distance between the second surface H2 and the center of the rotating axle 311 along the first direction X.

It can be understood that, when the first surface H1 and the second surface H2 are both flat, the positions with the largest rotation radius on the first surface H1 are located at the two ends of the first surface H1 along the third direction Z. The distance between such a point (i.e., one end of the first surface) and the center of the rotating axle 311 is d1, and the distance between the second surface H2 and the center of the rotating axle 311 along the first direction X is d2. In order to avoid interference between two adjacent supporting plates 31, d2 must then be greater than d1 to ensure sufficient rotation space. However, when d2 is greater than d1, the tolerance between two adjacent supporting plates 31 also increases.

In the present disclosure, since two adjacent supporting plates 31 also need to remain level when the supporting structure 3 is unfolded, the tolerance requirement is relatively high, and it is difficult to solve the interference problem by increasing the distance between two adjacent supporting plates 31, that is, by increasing d2. Meanwhile, in order to improve the strength of the supporting plates 31, the length of the sleeve 312 along the first direction X cannot be too long, which shortens the distance between the rotating axle 311 and the first surface H1 along the first direction X, thereby further aggravating the interference problem between two adjacent supporting plates 31.

Referring to FIG. 7, FIG. 7 illustrates another cross-sectional view of the supporting structure 3 along the C-C direction in FIG. 5. In order to avoid interference between two adjacent supporting plates 31 when the tolerance requirement is high, one of the two opposite sides of the two adjacent supporting plates 31 may be a convex arc surface, the other may be a concave arc surface, and the arcs of the convex arc surface and the concave arc surface may be equal to each other. By setting one of the first surface H1 and the second surface H2 as a convex arc surface and the other as a concave arc surface, the coordinated rotation of the two arc surfaces keeps the distance between each point on the first surface H1 and the center of the rotating axle 311 less than the distance d2 between the second surface H2 and the center of the rotating axle 311, without increasing the magnitude of d2. A formal statement of this geometry is sketched below.
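As a hedged formalization (the symbols O, p, r, R, and c below are illustrative, not taken from the original description): with flat surfaces, the no-interference condition is that the largest rotation radius stays below the clearance distance,

d1 = max over p on H1 of |p − O| < d2,

where O is the center of the rotating axle 311. With the arc profile of FIG. 7, every point of the convex arc surface lies at a fixed radius r from O, and the concave arc surface is an arc of radius R ≥ r also centered on O, so for any rotation angle θ the gap between the two surfaces is constant:

c(θ) = R − r for all θ.

Interference is therefore avoided for any r < d2 without enlarging d2, and the two surfaces may even remain in sliding contact (R ≈ r) throughout the rotation, which is why the arc profile tolerates a tighter plate-to-plate spacing than flat surfaces can.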
This geometry can ensure the higher tolerance requirement, avoid interference between two adjacent supporting plates 31, and improve the stability of the rolling of the supporting structure 3. The centers of the convex arc surface H1 and the concave arc surface H2 may be the center of the rotating axle 311, and the arcs of the convex arc surface H1 and the concave arc surface H2 may be equal to each other. Therefore, when two adjacent supporting plates 31 rotate relative to each other through the rotating axle 311 and the sleeve 312, the convex arc surface H1 and the concave arc surface H2 also rotate in a coordinated manner, which further improves the stability of the rolling of the supporting structure 3. In addition, when two adjacent supporting plates 31 rotate relative to each other, their opposing contact surfaces may also provide a certain supporting force to reduce the force between the rotating axle 311 and the sleeve 312, which further improves the reliability of the supporting structure 3.

Referring to FIGS. 4-5, considering that the supporting structure 3 is configured as a plurality of relatively rotatable supporting plates 31 in embodiments of the present disclosure, in order for the locking part 4 to dynamically lock the relative rotation of the supporting plates 31 as the supporting structure 3 unfolds, a locking groove 32 may be formed between two adjacent supporting plates 31 in one embodiment. In the unfolded state, the locking groove 32 may be coupled with the corresponding locking part 4 to flatly support the flexible display panel 2. That is, in the unfolded state, the plurality of supporting plates 31 are unfolded plate by plate, and the relative rotation between two adjacent supporting plates 31 is limited by coupling the locking part 4 to the locking groove 32 between them. Therefore, in the unfolded state, the portion of the supporting structure 3 extending out of the main body structure 1 does not bend, which ensures that the supporting structure 3 can flatly support the flexible display panel 2.

Referring to FIGS. 4-5, for example, a first groove portion 321 and a second groove portion 322 may be respectively formed on the opposite edges of two adjacent supporting plates 31, and the first groove portion 321 and the second groove portion 322 may be matched to form the locking groove 32. By coupling the locking part 4 to the first groove portion 321 and the second groove portion 322, the relative rotation of the first groove portion 321 and the second groove portion 322 is limited, which limits the relative rotation between the two adjacent supporting plates 31 and ensures that the supporting structure 3 can flatly support the flexible display panel 2 in the unfolded state.

It should be noted that the locking part 4, which is disposed to be coupled with the locking groove 32 to limit the relative rotation between adjacent supporting plates 31, may take various forms. The locking part 4 may be a gas bag disposed in the locking groove 32; the gas bag may be inflated when the supporting structure 3 extends out of the hollow space S1 and deflated when the supporting structure 3 is rolled in the hollow space S1. The locking part 4 may also be configured as a filler filled in the locking groove 32, with a heating device disposed at the opening of the main body structure 1; when the supporting structure 3 extends out of the hollow space S1, the filler may be heated to expand, and when the supporting structure 3 is rolled in the hollow space S1, the filler may be cooled to shrink.
The locking part 4 may also be configured as a plurality of blocks spaced apart from each other; when the supporting structure 3 extends out of the hollow space S1, the blocks may be coupled with the locking grooves 32, and when the supporting structure 3 is rolled in the hollow space S1, the blocks may be removed from the locking grooves 32, thereby realizing the switch of the supporting structure 3 between the two states. It should be noted that embodiments of the present disclosure only show drawings in which the locking parts 4 are configured as blocks; however, the locking parts 4 may also be configured in various other forms according to the actual structure of the flexible display apparatus, which is not limited herein.

Referring to FIGS. 3-4, the configuration of the locking parts 4 as blocks may be exemplified as follows. The locking parts 4 may include protrusion portions 41 formed by extending freely from the back side of the flexible display panel 2 in the direction away from the display surface of the flexible display panel 2, and the protrusion portions 41 may be matched with the locking grooves 32. In the unfolded state, the flexible display panel 2 and the supporting structure 3 may simultaneously extend out of the main body structure 1, and the flexible display panel 2 may drive the protrusion portions 41 to be coupled with the locking grooves 32, so that a flat supporting structure 3 is formed as the flexible display panel 2 extends out of the main body structure 1, thereby realizing stable support of the flexible display panel 2. In the rolled state, the flexible display panel 2 and the supporting structure 3 may be synchronously rolled into the main body structure 1; at this point, the flexible display panel 2 may drive the protrusion portions 41 to disengage from the locking grooves 32. In this way, the supporting plates 31 can rotate relative to each other, and the supporting structure 3 can be rolled in the hollow space S1, thereby realizing the switch of the supporting structure 3 between the two states.

Referring to FIG. 8, it should be understood that, although the locking part 4 can limit the relative rotation of adjacent supporting plates 31 when the supporting structure 3 is unfolded, the positions of the supporting plates 31 in the horizontal plane may deviate somewhat when there is a certain matching error between the rotating axle 311 and the sleeve 312 of two adjacent supporting plates 31, which may hinder the coupling of the locking part 4 and the locking groove 32 and the stable support of the supporting structure 3.

Referring to FIG. 5, in some optional embodiments, when an interference fit is adopted between the locking parts 4 and the locking grooves 32, in order to realize stable coupling of the locking parts 4 and the locking grooves 32, all locking grooves 32 may be arranged in a same straight line along the first direction X, and the straight line may pass through the center points of all supporting plates 31. That is, the locking parts 4 and all locking grooves 32 may be arranged along a same straight line, and the straight line may pass through the center points of all supporting plates 31.
On the one hand, since the center position of each supporting plate 31 is relatively stable, when the flexible display panel 2 and the supporting structure 3 are unfolded, the locking part 4 can be stably coupled with the locking groove 32 under the interference fit; this avoids the case where the locking part 4 and the locking groove 32 have such a large positional deviation that they cannot be coupled when the position of the supporting plate 31 deviates in the horizontal plane. On the other hand, after the coupling of the locking part 4 and the locking groove 32 is realized, since the locking part 4 and the locking groove 32 adopt an interference fit, the interference limits the deviation between the rotating axle 311 and the sleeve 312 of two adjacent supporting plates 31, thereby ensuring the stable support of the supporting structure 3.

Referring to FIG. 9, FIG. 9 illustrates a top view of another supporting structure 3 according to various embodiments of the present disclosure. In other optional embodiments, when the locking part 4 and the locking groove 32 adopt a clearance fit, in order to realize stable coupling of the locking part 4 and the locking groove 32, the locking groove 32 may be disposed adjacent to either of the two opposite sides of the supporting plate 31 along the second direction Y, and two locking grooves 32 adjacent along the first direction X may be disposed adjacent to different sides of the supporting plate 31, where the second direction Y is perpendicular to the first direction X. That is, two adjacent locking grooves 32 may be respectively disposed on the two sides of the supporting plates 31 along the second direction Y.

Therefore, on the one hand, when the rotating axle 311 and the sleeve 312 of two adjacent supporting plates 31 have a certain matching deviation, the clearance fit ensures that the locking part 4 can still be coupled with the locking groove 32. On the other hand, after the coupling is realized, since two adjacent locking grooves 32 are respectively disposed on two sides of the supporting plate 31 along the second direction Y, once there is a deviation between the rotating axle 311 and the sleeve 312, the coupling between the locking parts 4 and the locking grooves 32 disposed on the two sides can also resist part of the deviation force in the horizontal plane, which limits the deviation of the supporting plates 31 in the horizontal plane and ensures stable coupling of the locking parts 4 and the locking grooves 32.

In addition, as shown in FIG. 9, when the centers of the locking grooves 32 are not located on a same straight line, the centers of the protrusion portions 41 are likewise not located on a same straight line along the first direction X, thereby achieving the matching with the locking grooves 32. Meanwhile, when the protrusion portions 41 are disposed on the back side of the flexible display panel 2 and rolled together with it in the hollow space S1, arranging the centers of the protrusion portions 41 off a single straight line helps avoid the stress aggregation that the protrusion portions 41 could otherwise cause in the rolled flexible display panel 2, thereby avoiding damage to the flexible display panel 2.

Referring to FIG. 10, FIG. 10 illustrates a top view of another supporting structure 3 according to various embodiments of the present disclosure.
Referring to FIG. 10, FIG. 10 illustrates a top view of another supporting structure 3 according to various embodiments of the present disclosure. In some other optional embodiments, when the locking part 4 and the locking groove 32 adopt a clearance fit, the locking grooves 32 may be disposed on the two opposite sides of the supporting plate 31 along the second direction Y. By disposing the locking grooves 32 on the two sides of the supporting plate 31 along the second direction Y, even if the locking groove 32 on one side is damaged by the deviation force in the horizontal plane, the locking groove 32 and the locking part 4 on the other side may still be coupled to limit the deviation of the adjacent supporting plate 31, thereby improving the reliability of the flexible display apparatus. Optionally, since the displacement of the supporting plate 31 in the horizontal plane mainly depends on the assembly accuracy of the rotating axle 311 and the sleeve 312, under the condition that the assembly gap between the rotating axle 311 and the sleeve 312 remains unchanged, the longer the matching size of the rotating axle 311 and the sleeve 312 along the second direction Y is, the smaller the deviation of the supporting plate 31 along the first direction X is. Therefore, the matching size of the rotating axle 311 and the sleeve 312 along the second direction Y may also be appropriately increased to ensure the stable engagement of the locking part 4 and the locking groove 32. Referring to FIGS. 3-4, in order to facilitate the coupling of the protrusion portion 41 with the locking groove 32, the cross-sectional area of the protrusion portion 41 along its extending direction may be gradually tapered from the connecting end connected to the back surface to its free end. That is, the cross-sectional area of the protrusion portion 41 along the third direction Z may gradually decrease from the end adjacent to the flexible display panel 2 to the end away from the flexible display panel 2. On the one hand, the alignment area of the protrusion portion 41 at the end away from the flexible display panel 2 may be smaller than the alignment area of the end adjacent to the flexible display panel 2, thereby facilitating the alignment of the protrusion portion 41 and the locking groove 32. On the other hand, when aligning the protrusion portion 41 with the locking groove 32, the tapered surface of the protrusion portion 41 may play a certain guiding role, thereby facilitating the coupling of the protrusion portion 41 and the locking groove 32. Referring to FIG. 11, FIG. 11 illustrates a cross-sectional view of the supporting structure 3 along a B-B direction in FIG. 5. Corresponding to the protrusion portion 41, the opening area of the locking groove 32 along the third direction Z may gradually decrease from the end adjacent to the flexible display panel 2 to the end away from the flexible display panel 2. The shape and size of the locking groove 32 may be configured according to the shape and size of the protrusion portion 41. Therefore, it may be avoided that, after the protrusion portion 41 and the locking groove 32 are coupled, the gap between the protrusion portion 41 and the locking groove 32 is excessively large and causes rotation of adjacent supporting plates 31. As shown in FIG. 11, optionally, the cross section of the protrusion portion 41 along the extending direction may include at least one of a triangle, a trapezoid, and a semicircle. Corresponding to the protrusion portion 41, the opening section of the locking groove 32 along the extending direction may include at least one of a triangle, a trapezoid, and a semicircle, thereby ensuring the coupling of the protrusion portion 41 and the locking groove 32.
It should be understood that the shapes of the protrusion portion 41 and the locking groove 32 may be adjusted according to the actual structure of the flexible display panel 2 and the supporting structure 3. Embodiments of the present disclosure may only show drawings in which the protrusion portion 41 is a trapezoidal body, which should not be understood as a limitation of the protection scope of the present disclosure. Referring to FIG. 11, the protrusion portion 41 as a trapezoidal body may be taken as an example for description. Along the third direction Z, the cross section of the locking groove 32 may be an inverted trapezoid, and the third direction Z may be perpendicular to the first direction X. Corresponding to the locking groove 32, along the third direction Z, the cross section of the protrusion portion 41 may also be an inverted trapezoid. When the protrusion portion 41 is coupled with the locking groove 32, the supporting plates 31 on its two adjacent sides may bear the gravity G of the protrusion portion 41 and generate a first supporting force F1 and a second supporting force F2 for supporting the protrusion portion 41. Since the cross section of each of the protrusion portion 41 and the locking groove 32 is a trapezoid, the directions of the first supporting force F1 and the second supporting force F2 are perpendicular to the inclined surfaces on the two sides of the inverted trapezoid. Therefore, the sum of the components of the first supporting force F1 and the second supporting force F2 along the third direction Z may be substantially equal to the gravity G of the protrusion portion 41, and the protrusion portion 41 may reach a force balance. When the gravity G of the protrusion portion 41 is constant, the inclined angles of the inclined surfaces may affect the magnitudes of the first supporting force F1 and the second supporting force F2. Therefore, the inclined angle of the inclined surface can be chosen to ensure that the locking groove 32 can stably support the protrusion portion 41. Optionally, the bottom angle of the inverted trapezoidal cross section of the locking groove 32 and the protrusion portion 41 may be 45° to 85°. It should be understood that if the bottom angle is excessively large, the first supporting force F1 and the second supporting force F2 may be large, resulting in excessive force between the protrusion portion 41 and the locking groove 32, and the end of the protrusion portion 41 away from the flexible display panel 2 may bear a greater force, so that the protrusion portion 41 is not uniformly stressed and is easily damaged; if the bottom angle is excessively small, the force between the protrusion portion 41 and the locking groove 32 may be small, which may easily cause the protrusion portion 41 to separate from the locking groove 32 and affect the supporting effect of the supporting structure 3. Therefore, the cross-sectional shape and angle of the locking groove 32 and the protrusion portion 41 can be adjusted according to the actual situation, which may not be limited herein. The force balance described above is illustrated by the sketch below.
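As a worked example of this balance (a minimal sketch, not part of the original disclosure): assuming a symmetric inverted-trapezoid cross section with bottom angle θ and frictionless contact, the normal to each inclined wall makes the angle θ with the third direction Z, so the balance 2·F·cos θ = G gives F1 = F2 = G / (2·cos θ).

```python
import math

def supporting_forces(weight_g, bottom_angle_deg):
    """Normal forces F1 = F2 on the two inclined walls of a symmetric
    inverted-trapezoid groove, assuming frictionless contact.

    Each wall normal makes the bottom angle with the vertical (third
    direction Z), so force balance along Z gives 2*F*cos(angle) = G.
    """
    theta = math.radians(bottom_angle_deg)
    f = weight_g / (2.0 * math.cos(theta))
    return f, f

# The wall force grows quickly toward the upper end of the 45-85 degree
# range quoted above, matching the remark that an excessively large
# bottom angle overstresses the protrusion portion.
for angle in (45, 65, 85):
    f1, _ = supporting_forces(weight_g=1.0, bottom_angle_deg=angle)
    print(f"bottom angle {angle} deg -> F1 = F2 = {f1:.2f} * G")
```

At 45° each wall carries about 0.71·G, while at 85° it carries about 5.7·G, which is why the quoted range bounds the angle on both sides.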
Furthermore, according to the above analysis, when the supporting structure 3 is pushed and pulled along the first direction X, the supporting plates 31 on the two sides of the protrusion portion 41 may generate a pressing force on the protrusion portion 41, and the directions of the pressing forces may be the same as those of the first supporting force F1 and the second supporting force F2, respectively. When the component of the pressing force along the third direction Z is greater than the gravity G of the protrusion portion 41, the protrusion portion 41 may move along the third direction Z. In addition, when the flexible display apparatus 100 is turned upside down, or when the flexible display apparatus 100 is moved at various angles, the protrusion portion 41 may easily move relative to the locking groove 32 along the third direction Z, which may result in the protrusion portion 41 being separated from the locking groove 32 and the flexible display panel 2 being separated from the supporting structure 3. In order to solve the above-mentioned problems, on the one hand, the contact surface of at least one of the locking groove 32 and the protrusion portion 41 can be set as a surface with a relatively high friction coefficient. For example, the surface of the protrusion portion 41 may be sandblasted, or the protrusion portion 41 may be made of a material with a relatively high friction coefficient such as rubber. In addition, the inside of the locking groove 32 may also be disposed with an element to-be-attracted, and the protrusion portion 41 may be magnetic, such that the protrusion portion 41 can be attracted in the locking groove 32. Therefore, it may be avoided that, in the unfolded state or when the flexible display apparatus 100 is moved, the protrusion portion 41 moves relative to the locking groove 32 along the third direction Z and affects the supporting effect of the supporting structure 3 on the flexible display panel 2. On the other hand, a stopper (not shown) may be disposed on at least one of two adjacent supporting plates 31, which is used to limit the rotation of the supporting plate 31 toward the side facing away from the rolling direction. In such a way, it may be avoided that the protrusion portion 41 is pressed and thereby separated from the locking groove 32, which prevents the flexible display panel 2 from being separated from the supporting structure 3. It should be understood that when the flexible display panel 2 and the supporting structure 3 are unfolded along the first direction X, if the fit tolerance of the protrusion portion 41 and the locking groove 32 is excessively large, a certain relative rotation may occur between adjacent supporting plates 31. As a result, as the flexible display panel 2 and the supporting structure 3 are pulled out, the accumulated displacement deviation may gradually increase, such that the protrusion portions 41 at the rear may not be coupled with the locking grooves 32. Therefore, when the protrusion portion 41 is configured as a trapezoidal body, in order to ensure that each protrusion portion 41 can be coupled to the locking groove 32 between two adjacent supporting plates 31, on the one hand, the protrusion portion 41 can have an interference fit with the locking groove 32. For example, the locking part 4 can be made of hard rubber, and the relative rotation between adjacent supporting plates 31 may be limited by the interference fit of the locking part 4 and the locking groove 32, thereby reducing the displacement deviation as much as possible. On the other hand, a high-precision carving process may be used to modify at least a part of the locking grooves 32 on the side of the supporting structure 3 adjacent to the main body structure 1.
Therefore, even if the accumulated position deviation of the flexible display panel 2 and the supporting structure 3 gradually increases as they are pulled out, the part of the locking grooves 32 on the side adjacent to the main body structure 1 may still be stably coupled with the protrusion portions 41 and the position deviation of that part may be corrected, which may ensure that the supporting structure 3 can still support the flexible display panel 2 flatly. Referring to FIG. 12, FIG. 12 illustrates a cross-sectional view of another supporting structure 3 along the B-B direction in FIG. 5. Another solution is to dispose a gap filling portion 411 on at least a part of the outer surface of the protrusion portion 41. When the supporting structure 3 extends out of the hollow space S1, the gap filling portion 411 may be expanded to fill the matching gap between the protrusion portion 41 and the locking groove 32 to ensure the coupling of the protrusion portion 41 and the locking groove 32. When the supporting structure 3 is rolled in the hollow space S1, the gap filling portion 411 may be contracted to ensure that the protrusion portion 41 can be separated from the locking groove 32, thereby realizing the switch of the supporting structure 3 between the two states. Optionally, the gap filling portion 411 may include at least one of an inflatable gas bag, silica gel, and sponge. The protrusion portion 41 may be disposed with air holes, and the gap filling portion 411 may be expanded or contracted by inflation. By disposing the gap filling portion 411 on at least the outer surface of the protrusion portion 41, on the one hand, it can be ensured that the supporting structure 3 flatly supports the flexible display panel 2, avoiding wrinkling or bending of the flexible display panel 2; on the other hand, the gap filling portion 411 can also achieve a certain buffering effect, thereby preventing the protrusion portion 41 and the locking groove 32 from impacting and damaging each other, and improving the service life of the flexible display apparatus 100. Referring to FIG. 3, in order to ensure that the supporting structure 3 and the flexible display panel 2 can be stably rolled in the hollow space S1, the flexible display apparatus 100 provided in embodiments of the present disclosure may further include a rolling axle assembly 5 including a first rolling axle 51 and a second rolling axle 52. The rolling axle assembly 5 may be disposed in the hollow space S1, and the first rolling axle 51 and the second rolling axle 52 may each be rotatably connected with the main body structure 1. In the unfolded state, the supporting structure 3 and the flexible display panel 2 may partially extend out of the hollow space S1, and the locking grooves 32 at least partially extending out of the hollow space S1 may be correspondingly coupled with the locking parts 4. The first rolling axle 51 and the second rolling axle 52 may be rotatably connected to the main body structure 1, and the flexible display panel 2 and the supporting structure 3 may be respectively rolled on the first rolling axle 51 and the second rolling axle 52, such that the flexible display panel 2 and the supporting structure 3 can be switched stably between the rolled state and the unfolded state.
As shown in FIG. 3, for example, in order to ensure that the locking groove 32 at least partially extending out of the hollow space S1 can be coupled with the locking part 4, there may be a height difference between the first rolling axle 51 and the second rolling axle 52 along the third direction Z, which may reserve a coupling space between the locking groove 32 and the locking part 4. It should be understood that since the supporting structure 3 is disposed on the side of the flexible display panel 2 facing away from the display surface, the rolling axle of the flexible display panel 2 should be higher than the rolling axle of the supporting structure 3 along the third direction Z. Taking the flexible display panel 2 connected to the first rolling axle 51 and the supporting structure 3 connected to the second rolling axle 52 as an example, along the third direction Z, the first rolling axle 51 should be higher than the second rolling axle 52, which may ensure that the locking part 4 can be coupled with the locking groove 32. Referring to FIG. 3, in some optional embodiments, the radius of the first rolling axle 51 may be equal to the radius of the second rolling axle 52. Therefore, when the flexible display panel 2 and the supporting structure 3 are stretched into the unfolded state, the first rolling axle 51 and the second rolling axle 52 may rotate synchronously and stably. On the one hand, this can facilitate the size design of the rolling axle assembly 5; on the other hand, it can also facilitate the unfolding and rolling of the flexible display panel 2 and the supporting structure 3. It should be understood that when the locking parts 4 are disposed on the side of the flexible display panel 2 facing away from the display surface and are rolled on the first rolling axle 51 together with the flexible display panel 2, the locking parts 4 may increase the thickness of the first rolling axle 51 after being rolled up, and may easily cause stress concentration in the flexible display panel 2, which may damage the flexible display panel 2. Referring to FIG. 13, FIG. 13 illustrates a cross-sectional view of another flexible display apparatus 100 along the A-A direction in FIG. 1. In order to solve the above-mentioned problems, in addition to the above-mentioned embodiments in which the centers of the locking parts 4 are not located on a same straight line, the locking parts 4 and the flexible display panel 2 may be detachably connected. When the locking parts 4 and the flexible display panel 2 are detachably connected, only the flexible display panel 2 is rolled on the first rolling axle 51 during the rolling process; and during the unfolding process, the locking parts 4 may be connected to the side of the flexible display panel 2 away from the display surface, such that the locking grooves 32 at least partially extending out of the hollow space S1 may be coupled to the locking parts 4 correspondingly. As shown in FIG. 13, in order to realize the detachable connection of the locking parts 4 and the flexible display panel 2, in some embodiments, a plurality of to-be-attracted elements 21 may be arranged spaced apart along the first direction X on the back side of the flexible display panel 2. The locking parts 4 may include a plurality of magnetic protrusion portions 41, and the protrusion portions 41 may correspondingly be attracted to the to-be-attracted elements 21. The to-be-attracted element 21 may be set as an iron sheet with a relatively thin thickness, and the protrusion portion 41 may be set as a permanent magnet.
Therefore, when the flexible display panel 2 is rolled on the first rolling axle 51, the thickness of the first rolling axle 51 after rolling may not be increased. Meanwhile, when the flexible display panel 2 is stretched along the first direction X, the protrusion portions 41 may be attracted to the to-be-attracted elements 21 on the back side of the flexible display panel 2. Furthermore, when the flexible display panel 2 and the supporting structure 3 are unfolded, the protrusion portions 41 on the back side of the flexible display panel 2 may be coupled with the locking grooves 32 correspondingly. Referring to FIG. 14, FIG. 14 illustrates a cross-sectional view of another flexible display apparatus 100 along the A-A direction in FIG. 1. In other optional embodiments, the rolling axle assembly 5 may further include a third rolling axle 53, and the locking part 4 may further include a connecting plate 42. The connecting plate 42 may be connected to the third rolling axle 53, and the protrusion portions 41 may be disposed on the side of the connecting plate 42 facing the supporting structure 3. The back side of the flexible display panel 2 may be arranged with a permanent magnet, and the connecting plate 42 may be arranged as a magnetic plate. When the flexible display panel 2 is stretched along the first direction X, the connecting plate 42 may be attracted to the back side of the flexible display panel 2, thereby driving the third rolling axle 53 to rotate, such that the connecting plate 42 moves along the first direction X together with the flexible display panel 2. Since the connecting plate 42 is disposed with the protrusion portions 41 on the side facing the supporting structure 3, when the flexible display panel 2 is pulled, the protrusion portions 41 on the connecting plate 42 may be driven to be coupled with the locking grooves 32 on the supporting structure 3. Similarly, when rolling the flexible display panel 2, the connecting plate 42 may be separated from the flexible display panel 2 by rolling the connecting plate 42 on the third rolling axle 53, which avoids the protrusion portions 41 being rolled on the first rolling axle 51 along with the flexible display panel 2 and avoids increasing the rolling thickness of the first rolling axle 51. In addition, during actual use, some dust or impurities may enter the hollow space S1, which affects the normal operation of the supporting structure 3, the flexible display panel 2, and other components. In order to avoid the above problems, in some optional embodiments, the surface of the supporting structure 3 facing away from the flexible display panel 2 may be provided with a dustproof film. The dustproof film may be directly attached to the surface of the supporting structure 3 facing away from the flexible display panel 2 to save cost. The dustproof film can also be rolled in the hollow space S1; that is, the rolling axle assembly 5 may also be disposed with a fourth rolling axle. In the rolled state, the dustproof film may be rolled on the fourth rolling axle, and in the unfolded state, the dustproof film may be pulled out synchronously with the supporting structure 3. By setting the dustproof film in a separately rolled form, the dustproof film can first be cleaned by a dust removal device when it is rolled in the hollow space S1. For example, the impurities adsorbed on the surface of the dustproof film may be removed by blowing air or the like, so as to prevent impurities from entering the flexible display apparatus 100.
Referring to FIG. 15, FIG. 15 illustrates a schematic of rolling supporting plates 31 according to various embodiments of the present disclosure. When the supporting structure 3 is rolled by the second rolling axle 52, since the supporting structure 3 includes the plurality of rotatably connected supporting plates 31, in order to neatly roll the supporting plates 31 on the second rolling axle 52, the second rolling axle 52 may be rolled with N rounds of supporting plates 31 in some optional embodiments. The length Lk of the supporting plates 31 at the k-th round may be less than the length Lk+1 of the supporting plates 31 at the (k+1)-th round, and the distance between two adjacent locking parts 4 fitted in the supporting plates 31 located at the k-th round may be equal to the length Lk of the supporting plate 31, where 0 < k ≤ N. The rolling radius of each round of supporting plates 31 is different; therefore, when the supporting structure 3 is rolled on the second rolling axle 52, if the lengths of the supporting plates 31 are all equal, the joints of adjacent supporting plates 31 in different rounds may be misaligned. On the one hand, the supporting structure 3 may be easy to wear after repeated rolling, which reduces the service life of the supporting structure 3; on the other hand, it is not beneficial to the neat rolling of the supporting structure 3. Therefore, by making the length Lk+1 of the supporting plates 31 at the (k+1)-th round greater than the length Lk of the supporting plates 31 at the k-th round, and setting the lengths of the supporting plates 31 in different rounds reasonably, it may be ensured that when the supporting structure 3 is rolled on the second rolling axle 52, the joints of adjacent supporting plates 31 in different rounds are located on a same straight line, which may ensure the neat rolling of the supporting plates 31 and improve the service life of the supporting structure 3. For example, the length Lk of the supporting plates 31 at the k-th round may be calculated as follows:

Lk = 2 × (R + k·h − h/2) × tan(180°/g)

where R is the radius of the second rolling axle 52, h is the thickness of the supporting plate 31, and g is the number of the supporting plates 31 in one round. It can be understood that when the joints of adjacent supporting plates 31 in different rounds are located on a same straight line, the number of supporting plates 31 provided in each round is the same. Therefore, the length Lk of the supporting plates 31 at the k-th round is proportional to the distance from the center of the second rolling axle 52 to that round of supporting plates 31. When the number of rounds k where the supporting plate 31 is located, the radius R of the second rolling axle 52, the number g of the supporting plates 31 in one round, and the thickness h of the supporting plate 31 are determined, the length Lk of the supporting plates 31 at the k-th round may be calculated, as in the sketch below. Optionally, the length of the supporting plate 31 may be 0.01 mm to 3.0 mm, and its value may be designed according to the structural size of the flexible display apparatus, which may not be limited herein.
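A minimal numeric sketch of this formula follows (illustrative values only; R, h, and g are assumptions, not values from the disclosure). The factor tan(180°/g) is the half-angle tangent of a regular g-sided polygon, so each round of plates forms a polygon whose side length grows with the rolling radius.

```python
import math

def plate_length(k, radius, thickness, plates_per_round):
    """Length Lk of a supporting plate at the k-th round:
    Lk = 2 * (R + k*h - h/2) * tan(180 deg / g).

    R + k*h - h/2 is the distance from the axle center to the middle of
    the plate thickness at round k; the tangent term is the half-angle
    of a regular polygon with g sides per round.
    """
    half_angle = math.pi / plates_per_round     # 180 deg / g, in radians
    return 2.0 * (radius + k * thickness - thickness / 2.0) * math.tan(half_angle)

# Illustrative values: 5 mm axle radius, 0.2 mm plate thickness,
# 12 plates per round. Lengths increase round by round, so the joints
# of adjacent plates in different rounds can stay on one straight line.
for k in range(1, 4):
    print(f"round {k}: Lk = {plate_length(k, 5.0, 0.2, 12):.3f} mm")
```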
Referring to FIG. 16, FIG. 16 illustrates a cross-sectional view of a pulling part 6 according to various embodiments of the present disclosure. In order to facilitate the movement of the ends of the flexible display panel 2 and the supporting structure 3 along the first direction X in the unfolded state, the flexible display apparatus 100 may further include the pulling part 6. The pulling part 6 may include a housing 61 and a sliding part 62. The housing 61 may have a sliding rail, and the sliding part 62 may slide along the sliding rail. The ends of the supporting structure 3 and the flexible display panel 2 extending out of the main body structure 1 may each be connected to the sliding part 62. The pulling part 6 may move along the first direction X under the drive of an automatic control system, thereby driving the supporting structure 3 and the flexible display panel 2 connected to the pulling part 6 to move along the first direction X. As shown in FIG. 16, furthermore, when the pulling part 6 is used to pull the supporting structure 3, the supporting structure 3 or the flexible display panel 2 may be damaged by excessive force. In order to avoid an excessive pulling force of the pulling part 6 on the supporting structure 3 and the flexible display panel 2, in some optional embodiments, the flexible display apparatus 100 may further include a pulling force sensor 7. The pulling force sensor 7 may be fixed to the housing 61, and the pulling force sensor 7 may be connected to the sliding part 62 through an elastic part 8 to measure the pulling force received by the sliding part 62. The pulling part 6 includes the relatively slidable housing 61 and sliding part 62, and the elastic part 8 is disposed between the housing 61 and the sliding part 62. Therefore, when the pulling part 6 drives the flexible display panel 2 and the supporting structure 3 to move along the first direction X through the sliding part 62, relative sliding may occur between the housing 61 and the sliding part 62, deforming the elastic part 8. The pulling force sensor 7 may measure the elastic force of the elastic part 8 to obtain the pulling force of the pulling part 6 on the flexible display panel 2 and the supporting structure 3. When the pulling force is excessively large, the pulling force sensor 7 may feed back an electrical signal to the automatic control system to adjust the rotation speeds of the first rolling axle 51 and the second rolling axle 52, such that the pulling force received by the flexible display panel 2 and the supporting structure 3 is reduced.
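The measurement and feedback just described amount to Hooke's law plus a threshold check. The following is a minimal sketch under stated assumptions: the spring constant, force limit, and proportional speed backoff are illustrative; the disclosure does not specify a control law.

```python
def pulling_force(spring_k, deflection):
    """Elastic part 8 obeys Hooke's law: the sensor infers the pulling
    force from the relative sliding (deflection) between housing 61 and
    sliding part 62."""
    return spring_k * deflection

def adjust_axle_speed(current_speed, force, force_limit, backoff=0.8):
    """If the measured force exceeds the limit, slow the rolling axles
    so the panel and supporting structure are pulled less hard.
    The proportional backoff factor is an illustrative assumption."""
    if force > force_limit:
        return current_speed * backoff
    return current_speed

# Example: a 2.0 N/mm spring deflected 1.5 mm reads 3.0 N.
force = pulling_force(spring_k=2.0, deflection=1.5)
speed = adjust_axle_speed(current_speed=10.0, force=force, force_limit=2.5)
print(force, speed)  # 3.0 N exceeds the 2.5 N limit, so speed drops to 8.0
```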
Referring to FIG. 17, in some optional embodiments, the end of the flexible display panel 2 connected with the pulling part 6 and the end of the supporting structure 3 connected with the pulling part 6 may be fixedly connected. That is, the sliding part 62 may include an accommodating cavity S2; the accommodating cavity S2 may include an adhesive 621 extending along the sliding direction of the sliding part 62; and the parts of the supporting structure 3 and the flexible display panel 2 extending into the accommodating cavity S2 may be respectively adhered to the opposite sides of the adhesive 621. The ends of the supporting structure 3 and the flexible display panel 2 extend into the accommodating cavity S2, and the parts extending into the accommodating cavity S2 are fixedly connected by the adhesive 621, which may ensure that the pulling part 6 can pull out the supporting structure 3 and the flexible display panel 2 simultaneously; meanwhile, the supporting structure 3 is attached to the end of the flexible display panel 2, such that it is also more convenient for the locking parts 4 on the back side of the flexible display panel 2 to be coupled with the locking grooves 32. As shown in FIG. 16, furthermore, the sliding part 62 may further include a buffering part 622. The buffering part 622 may be disposed at least on a first surface of the accommodating cavity S2, and the first surface may be the surface opposite to the surface where the adhesive 621 adheres the supporting structure 3 and the flexible display panel 2 in the accommodating cavity S2. By providing the buffering part 622 on the surface facing where the supporting structure 3 and the flexible display panel 2 are connected in the accommodating cavity S2, the supporting structure 3 and the flexible display panel 2 may be prevented from being crushed. Optionally, the buffering part 622 may be configured as foam. Referring to FIG. 17, FIG. 17 illustrates an enlarged view of a flexible display apparatus at a location D in FIG. 3. The supporting structure 3 includes the plurality of relatively rotatable supporting plates 31. Therefore, in order to facilitate the coupling of the locking groove 32 and the locking part 4 in the hollow space S1, the flexible display apparatus 100 may further include a fitting auxiliary part 9. The fitting auxiliary part 9 may be arranged in the hollow space S1, and the fitting auxiliary part 9 may at least be abutted against the side surface of the supporting structure 3 opposite to the flexible display panel 2, such that the locking part 4 and the locking groove 32 are correspondingly fitted at the abutting position. As shown in FIG. 17, meanwhile, the main body structure 1 may include an opening K1 through which the supporting structure 3 and the flexible display panel 2 extend out, and the fitting auxiliary part 9 may be disposed adjacent to the opening K1. That is, by disposing the fitting auxiliary part 9, it may be ensured that the locking part 4 and the locking groove 32 can be coupled stably; it is also convenient for the flexible display panel 2 and the supporting structure 3 to be pulled out from the opening K1 of the main body structure 1 after coupling. Referring to FIG. 18, FIG. 18 illustrates another enlarged view of a flexible display apparatus at the location D in FIG. 3. In order to further ensure that the locking part 4 can be stably coupled with the locking groove 32, the fitting auxiliary part 9 may include a first roller 91 and a second roller 92. Along the third direction Z, the first roller 91 and the second roller 92 may be respectively disposed on the two sides of the opening K1. The first roller 91 may be in contact with the display surface of the flexible display panel 2, and the second roller 92 may be in contact with the side surface of the supporting structure 3 facing away from the flexible display panel 2. The second roller 92 may be abutted against the side of the supporting structure 3 away from the flexible display panel 2 to limit the movement of the locking groove 32 along the third direction Z, and the first roller 91 may be abutted against the side of the flexible display panel 2 away from the supporting structure 3 to limit the movement of the locking part 4 along the third direction Z, thereby making it easier to couple the locking part 4 with the locking groove 32 stably. In addition, by configuring the fitting auxiliary part 9 as rollers, the frictional forces between the second roller 92 and the supporting structure 3 and between the first roller 91 and the flexible display panel 2 may also be reduced, such that it is easier to unfold the flexible display panel 2 and the supporting structure 3. Referring to FIG. 19, FIG. 19 illustrates another enlarged view of a flexible display apparatus at a location D in FIG. 3.
It should be understood that, in order to ensure that the supporting structure 3 can support the flexible display panel 2 flatly, the supporting plate 31 should be set as a rigid part; and the rigid supporting plate 31 may jump along the third direction Z when passing the second roller 92, affecting the pulling out of the flexible display panel 2 and the supporting structure 3 from the opening K1. To solve the above problem, firstly, the size of the opening K1 along the third direction Z may be increased to facilitate the pulling out of the flexible display panel 2 and the supporting structure 3; secondly, the length of a single supporting plate 31 may also be reduced, such that the vibration amplitude of the supporting plate 31 along the third direction Z is reduced. However, reducing the length of a single supporting plate 31 may require higher processing accuracy of the supporting plate 31 and increase the assembly difficulty between adjacent supporting plates 31, thereby increasing the cost of the flexible display apparatus. Referring to FIG. 20, FIG. 20 illustrates another enlarged view of a flexible display apparatus at a location D in FIG. 3. In order to solve the above problem, in some optional embodiments, the fitting auxiliary part 9 may further include a third roller 93. The third roller 93 and the first roller 91 may be arranged on the same side of the opening, the third roller 93 may be at least partially located in the opening, and the third roller 93 may be in contact with the display surface of the flexible display panel 2. That is, the supporting plate 31 may be pressed into contact with the flexible display panel 2 through the second roller 92; and the positions of the two sides of the flexible display panel 2 and the supporting structure 3 may be simultaneously limited by the first roller 91, the second roller 92, and the third roller 93, such that the position limitation of the supporting plate 31 is realized and the supporting plate 31 is prevented from jumping along the third direction Z. Referring to FIG. 21, FIG. 21 illustrates a local cross-sectional view along an E-E direction in FIG. 20. Furthermore, along the second direction Y, at least one of the two opposite sides of the supporting plate 31 may be disposed with a sliding groove 33, the main body structure 1 may be disposed with a limiting part 11 that is matched with the sliding groove 33, and the limiting part 11 may partially extend into the sliding groove 33 to limit the movement path of the supporting plate 31 in the opening. When the supporting structure 3 passes through the opening, the limiting part 11 disposed on the main body structure 1 may extend into the sliding groove 33 disposed on the side of the supporting plate 31, which may further limit the movement path of the supporting plate 31 in the opening and prevent the supporting plate 31 from jumping along the third direction Z. Referring to FIG. 22, embodiments of the present disclosure also provide an electronic device 200. FIG. 22 illustrates a schematic of an electronic device 200 according to various embodiments of the present disclosure. The electronic device 200 may include a flexible display apparatus 100, which is the flexible display apparatus 100 in the above-mentioned embodiments.
Therefore, the electronic device 200 provided by embodiments of the present disclosure has the technical effects of the technical solution of the flexible display apparatus 100 in any one of the above-mentioned embodiments; and the structures and explanations of terms that are the same as or correspond to those of the above-mentioned embodiments are not repeated in detail herein. The electronic device 200 provided in embodiments of the present disclosure may be a mobile phone or any electronic product with a display function, including but not limited to categories such as televisions, laptops, desktop displays, tablet computers, digital cameras, smart wristbands, smart glasses, car monitors, medical equipment, industrial control equipment, touch interactive terminals, or the like, which may not be limited according to various embodiments of the present disclosure. The above are merely some embodiments of the present disclosure. Those skilled in the art can clearly understand that, for the convenience and conciseness of description, the working processes of the system, modules and units described above may refer to the corresponding processes in the above-mentioned method embodiments, which may not be described in detail herein. It should be understood that the scope of protection of the present disclosure may not be limited to those embodiments. Those skilled in the art may easily conceive of various equivalent modifications or substitutions within the technical scope disclosed in the present disclosure, and these modifications or substitutions should be covered by the protection scope of the present disclosure. It should also be noted that the exemplary embodiments mentioned in the present disclosure describe certain methods or systems based on a series of steps or devices. However, the present disclosure may not be limited to the order of the above-mentioned steps; that is, the steps may be executed in the order mentioned in embodiments, or in an order different from that in embodiments, or several steps may be executed at the same time. | 62,892 |
11862049 | DETAILED DESCRIPTION The present disclosure may be understood by reference to the following detailed description, taken in conjunction with the drawings as described below. It is noted that, for purposes of illustrative clarity and ease of understanding, various drawings of this disclosure show a portion of the display device, and certain elements in various drawings may not be drawn to scale. In addition, the number and dimension of each element shown in the drawings are only illustrative and are not intended to limit the scope of the present disclosure. Certain terms are used throughout the description and following claims to refer to particular components. As one skilled in the art will understand, electronic equipment manufacturers may refer to a component by different names. This document does not intend to distinguish between components that differ in name but not in function. In the following description and in the claims, the terms "include" and "comprise" are used in an open-ended fashion, and thus should be interpreted to mean "include, but not limited to . . . ". It should be understood that when an element or layer is referred to as being "on" or "connected to" another element or layer, it may be directly on or directly connected to the other element or layer, or intervening elements or layers may be present (an indirect condition). In contrast, when an element is referred to as being "directly on" or "directly connected to" another element or layer, there are no intervening elements or layers present. Although terms such as first, second, third, etc. may be used to describe diverse constituent elements, such constituent elements are not limited by the terms. The terms are used only to discriminate a constituent element from other constituent elements in the specification. The claims may not use the same terms, but instead may use the terms first, second, third, etc. with respect to the order in which an element is claimed. Accordingly, in the following description, a first constituent element may be a second constituent element in a claim. The term "substantially" typically means +/−20% of the stated value, more typically +/−10% of the stated value, more typically +/−5% of the stated value, more typically +/−3% of the stated value, more typically +/−2% of the stated value, more typically +/−1% of the stated value, and even more typically +/−0.5% of the stated value. The stated value of the present disclosure is an approximate value. When there is no specific description, the stated value includes the meaning of "substantially". Moreover, when considering the deviation or fluctuation of the manufacturing process, the term "same" may also include the meaning of "substantially". It should be noted that the technical features in different embodiments described in the following can be replaced, recombined, or mixed with one another to constitute another embodiment without departing from the spirit of the present disclosure. Please refer to FIG. 1. FIG. 1 is a schematic diagram of the relative relationship between two material layers of an electronic device according to the present invention. As shown in FIG. 1, an electronic device ED of the present disclosure may be a flexible electronic device, wherein the term "flexible" means that the electronic device ED may be curved, bent, folded, rolled, flexed, stretched, and/or otherwise similarly deformed, and the deformations described above are represented by "flexible" hereinafter.
The electronic device ED may include a display device, an antenna device, a sensor device or a tiled device, but not limited herein. The electronic device may for example include liquid crystal, fluorescence, phosphor, light emitting diode, other suitable display media or any combination thereof, but not limited herein. The light emitting diodes (LEDs) may for example include organic light emitting diodes (OLEDs), mini LEDs, micro LEDs, nano wire LEDs, bar type LEDs, quantum dot LEDs (QLEDs, QDLEDs) or LEDs with any other suitable materials, and these materials may be disposed in any arrangement or combination, but not limited herein. The antenna device may be a liquid crystal antenna, but not limited herein. The tiled device may be a display tiled device or an antenna tiled device, but not limited herein. It is noted that the electronic device may be any arrangement or combination of the devices described above, but not limited herein. In the following description, a display device is used as the electronic device to illustrate the content of the present disclosure. That is to say, the electronic device ED in the following description for example has a display function and includes a flexible display device 100 having display elements, but the present disclosure is not limited herein. The flexible display device 100 includes a first layer LR1 and a second layer LR2. The second layer LR2 overlaps with the first layer LR1 in a vertical direction Dz; that is, at least a part of the second layer LR2 overlaps with the first layer LR1. In the structure of FIG. 1, the first layer LR1 is disposed on the second layer LR2 in the direction Dz. For example, when the flexible display device 100 includes a flexible substrate 102 and the first layer LR1 and the second layer LR2 are both disposed on the flexible substrate 102 (i.e., the flexible display device 100 includes a flexible substrate 102 on which the first layer LR1 and the second layer LR2 are disposed), the second layer LR2 is disposed between the first layer LR1 and the flexible substrate 102. However, in a variant embodiment, the first layer LR1 may also be disposed below the second layer LR2. The first layer LR1 includes a plurality of first patterns LP1 extending along a direction Dy and arranged side by side along a direction Dx, and each first pattern LP1 has a first pitch Pt1 and a first width (which is the line width of the first pattern LP1). The second layer LR2 includes a plurality of second patterns LP2, and each second pattern LP2 has a second pitch Pt2 and a second width (which is the line width of the second pattern LP2), wherein the first pitch Pt1 is greater than the second pitch Pt2, and a ratio of the first pitch Pt1 to the second pitch Pt2 is greater than or equal to 2 and less than or equal to 200. Within the range described above, the optical ripple interference between the first layer LR1 and the second layer LR2 may be reduced, and better display performance may be provided. In some embodiments, the first pitch Pt1 may for example be greater than 40 micrometers (μm) and less than or equal to 4000 μm, but not limited herein. In some embodiments, the first layer LR1 may be used as an assisting layer BSL. The term "assisting layer" in the present disclosure refers to a layer that helps the flexible display device 100 be flexed or bent toward a direction perpendicular to the extending direction (e.g., the direction Dy) of the patterns of the assisting layer BSL (referred to as assisting patterns, for example the first patterns LP1).
The second patterns LP2 may be any conducting lines in the display layer or the touch layer, such as (but not limited to) scan lines or data lines, or may be any one of power supply lines, common voltage lines, data lines, scan lines, signal reference lines or touch signal lines. In some embodiments, one of the plurality of first patterns LP1 has a first width (or referred to as a first line width), one of the plurality of second patterns LP2 has a second width (or referred to as a second line width), and a ratio of the first width to the first pitch Pt1 is less than a ratio of the second width to the second pitch Pt2. For example, the ratio of the second width to the second pitch Pt2 may be greater than or equal to 0.02 and less than or equal to 0.2. Please refer to FIG. 2. FIG. 2 is a schematic diagram of the relative relationship between two material layers of another embodiment of an electronic device according to the present invention. According to the present disclosure, the pattern pitch of the upper material layer may also be less than the pattern pitch of the lower material layer. In the structure shown in FIG. 2, the second layer LR2 is disposed on the first layer LR1. That is to say, the pitch of the upper material layer (the second pitch Pt2) is less than the pitch of the lower material layer (the first pitch Pt1), but the ratio of the first pitch Pt1 to the second pitch Pt2 is still greater than or equal to 2 and less than or equal to 200. The second pitch Pt2 may range from 0.1 μm to 40 μm, for example, from 0.1 μm to 10 μm, and the first pitch Pt1 may range from 10 μm to 20 μm, for example, 20 μm, but not limited herein. In some embodiments, the second layer LR2 may be used as the assisting layer BSL, which helps the flexible display device 100 be flexed or bent toward a direction perpendicular to the extending direction (e.g., the direction Dy) of the patterns of the assisting layer BSL (referred to as assisting patterns, for example the second patterns LP2). In some embodiments, the second layer LR2 may be used as a raster element. The raster element may for example provide functions such as polarization, collimation and/or privacy protection, but not limited herein. In some embodiments, the raster element may be, for example, a wire grid polarizer (WGP). The first patterns LP1 may be any conducting lines in the display layer or the touch layer, such as (but not limited to) scan lines or data lines, or may be any one of power supply lines, common voltage lines, data lines, scan lines, signal reference lines or touch signal lines. In some embodiments, one of the plurality of first patterns LP1 has a first width, one of the plurality of second patterns LP2 has a second width, and a ratio of the first width to the first pitch Pt1 is less than a ratio of the second width to the second pitch Pt2. For example, the ratio of the second width to the second pitch Pt2 may be greater than or equal to 0.3 and less than or equal to 3, for example, greater than or equal to 0.3 and less than or equal to 0.8, and the ratio of the first width to the first pitch Pt1 may be greater than or equal to 0.02 and less than or equal to 0.2. Within the ranges described above, the effect of using the second layer LR2 as the assisting layer BSL may be more obvious, the optical ripple interference between the first layer LR1 and the second layer LR2 may also be reduced, and better display effects may be provided. The numeric constraints above are summarized in the sketch below.
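For illustration only, here is a minimal sketch that checks the pitch and width-to-pitch design rules quoted above; the function name and sample values are assumptions, not part of the disclosure.

```python
def check_layer_design(pitch1, width1, pitch2, width2):
    """Check the design rules quoted above:
    - 2 <= Pt1 / Pt2 <= 200 (reduces optical ripple interference)
    - (first width / Pt1) < (second width / Pt2)
    """
    ratio = pitch1 / pitch2
    ok_pitch = 2 <= ratio <= 200
    ok_width = (width1 / pitch1) < (width2 / pitch2)
    return ok_pitch and ok_width

# Example in the FIG. 2 style stacking: Pt1 = 20 um, Pt2 = 10 um, a
# first width/pitch ratio of 0.05 (within 0.02-0.2) and a second
# width/pitch ratio of 0.5 (within 0.3-0.8). All values illustrative.
print(check_layer_design(pitch1=20.0, width1=1.0, pitch2=10.0, width2=5.0))  # True
```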
The applications, structures and materials of the assisting layer BSL and the assisting patterns BSP in electronic devices or flexible display devices will be described in various embodiments in the following. Please refer to FIG. 3 to FIG. 6. FIG. 3 is a partial exterior schematic diagram of a first embodiment of an electronic device according to the present disclosure. FIG. 4 is a partial exterior schematic diagram of a variation of the first embodiment of the electronic device according to the present invention. FIG. 5 is a partial top-view schematic diagram of the first embodiment of an electronic device according to the present disclosure. FIG. 6 is a partial sectional-view schematic diagram of the first embodiment of an electronic device according to the present disclosure. As shown in FIG. 3, the flexible display device 100 may have a flexible substrate 102 and an assisting layer BSL. The assisting layer BSL may be disposed on a surface of the flexible substrate 102 and include a plurality of assisting patterns BSP. The flexible substrate 102 has at least one flexing axis FX substantially parallel to the extending direction of the assisting patterns BSP (e.g., the direction Dy), and a part of the flexible substrate 102 may be bent or rolled at least toward a direction perpendicular to the direction Dy (e.g., the direction Dz), for example curved along the direction of the arrow AR. Alternatively, a part of the flexible substrate 102 may for example be rolled by taking the flexing axis FX as the axis center, but the deformation and flexing manner of the flexible substrate 102 are not limited to the above. Please refer to FIG. 4; in a variation of the first embodiment of the present disclosure, a part of the flexible substrate 102 may be bent along at least one direction, and the bent portion can define a flexing axis FX. Please refer to FIG. 3 and FIG. 5. The flexible substrate 102 may be transparent or opaque, and the material of the flexible substrate may include polymer materials such as polyimide (PI), polycarbonate (PC), polyethylene terephthalate (PET) and/or adhesive materials, but not limited herein. The flexible substrate 102 may also include thin glass or any suitable materials. The assisting layer BSL may provide suitable support for the flexible substrate 102 without affecting the flexibility of the flexible substrate 102. In detail, the assisting patterns BSP of the assisting layer BSL may provide the flexible display device 100 with a function like a bracket. The assisting patterns BSP of the assisting layer BSL may assist in adjusting the bending direction or the flexing direction of the flexible substrate 102, so that the flexible substrate 102 may be bent toward a predetermined direction, and the stress in the non-bending direction can be reduced so as to mitigate abnormal display problems. The assisting patterns BSP may be composed of any material suitable for being integrated into the flexible display device 100, for example materials compatible with the processes of the light emitting layer, the circuit layer, the bonding layer, the light shielding layer, the light adjusting layer, the touch layer, the insulating layer and/or the protecting layer in the flexible display device 100. For example, the material of the assisting layer BSL may include, but is not limited to, metals (e.g., copper or aluminum), black matrix (BM) materials or organic polymer materials. The assisting patterns BSP may be manufactured by printing, coating or other suitable methods.
In the assisting layer BSL shown in FIG. 3 (or FIG. 4), the assisting patterns BSP are generally uniformly distributed on the surface of the flexible substrate 102 and arranged side by side along the direction Dx, wherein the direction Dx intersects with the extending direction of the flexing axis FX (i.e., the direction Dy); for example, the direction Dx may be perpendicular to the extending direction of the flexing axis FX. In the direction Dx, the same line spacings Ds may exist between adjacent assisting patterns BSP, and all of the assisting patterns BSP may have the same line width Ws, wherein the pitch Ps is the sum of one line spacing Ds and one line width Ws. The pitch Ps may be measured from the center of an assisting pattern BSP to the center of an adjacent assisting pattern BSP, or from the edge at one side of an assisting pattern BSP to the edge at the same side of an adjacent assisting pattern BSP. It should be noted that the design of the assisting patterns BSP of the present disclosure is not limited to those shown in FIG. 3 and FIG. 4. The plurality of assisting patterns BSP in one assisting layer BSL may have different patterns, different line widths Ws and/or different line spacings Ds. Furthermore, the assisting patterns BSP may not be uniformly distributed on the flexible substrate 102. For example (but not limited to), the assisting patterns BSP in one region on the flexible substrate 102 (for example, but not limited to, the region farther away from the flexing axis FX) may be densely distributed, while the assisting patterns BSP in another region on the flexible substrate 102 (for example, but not limited to, the region closer to the flexing axis FX) may be loosely distributed. When the distribution of the assisting patterns BSP is not uniform, the pitch Ps of the assisting patterns BSP may be obtained by averaging the pitches Ps of all of the assisting patterns BSP in the assisting layer BSL, or by averaging the pitches Ps of five of the assisting patterns BSP, but not limited herein. In addition, according to the present disclosure, the distribution region of the assisting layer BSL may be defined by connecting the outermost edges of each assisting pattern BSP. Please refer to FIG. 5. The flexible display device 100 may further include another material layer CLL disposed on the flexible substrate 102. The material layer CLL may be disposed on the assisting layer BSL (i.e., the assisting layer BSL is disposed between the material layer CLL and the flexible substrate 102), or the assisting layer BSL may be disposed on the material layer CLL (i.e., the material layer CLL is disposed between the assisting layer BSL and the flexible substrate 102). In other words, the present disclosure does not limit the relative positions of the material layer CLL and the assisting layer BSL on the surface of the flexible substrate 102 or in the direction Dz. In FIG. 5, the material layer CLL is illustrated on the assisting layer BSL as an example, but the present disclosure is not limited to that shown in FIG. 5. The material layer CLL may include a plurality of wire patterns 108 substantially extending along the direction Dy and having a pitch Pc, wherein the pitch Pc of the wire patterns 108 may be defined in a method similar to the pitch Ps of the assisting patterns BSP, which will not be redundantly described. In the structure shown in FIG. 5, the pitch Pc is different from the pitch Ps, and a ratio of the pitch Ps to the pitch Pc is greater than or equal to 2 and less than or equal to 200. This design may provide both flexibility and support and reduce optical ripple interference, so as to provide better display effects; the pitch definition above is illustrated by the sketch below.
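As a small worked example of the pitch definition (Ps = Ds + Ws) and the averaging rule for non-uniform distributions, here is a minimal sketch; the sample values are illustrative assumptions.

```python
def pitch(line_width_ws, line_spacing_ds):
    """Pitch Ps of an assisting pattern: one line width plus one line
    spacing, equivalent to the center-to-center distance of adjacent
    patterns in the uniform case."""
    return line_width_ws + line_spacing_ds

def average_pitch(centers):
    """For a non-uniform distribution, estimate Ps by averaging the
    center-to-center distances of consecutive assisting patterns."""
    gaps = [b - a for a, b in zip(centers, centers[1:])]
    return sum(gaps) / len(gaps)

print(pitch(line_width_ws=5.0, line_spacing_ds=45.0))   # 50.0 (uniform case)
print(average_pitch([0.0, 48.0, 100.0, 155.0]))          # ~51.7 (non-uniform)
```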
More precisely, the material layer CLL may be included in the display layer 114 (shown in FIG. 6), and the wire patterns 108 may be used as a plurality of wires in the display layer 114, for example, for transmitting signals or providing voltages. The display layer 114 may further include a light emitting layer 104 formed of a plurality of light emitting units 106, and the region corresponding to each light emitting unit 106 may be regarded as a sub-pixel, so as to define a display region 110 of the flexible display device 100. According to the present disclosure, the distribution region (or referred to as the distribution area) of the assisting layer BSL may be greater than the area of the display region 110. As shown in FIG. 5, in the direction Dx, the minimum distance between an edge 102E of the flexible substrate 102 generally parallel to the flexing axis FX and the outermost side of the assisting pattern BSP closest to the edge 102E is defined as an edge distance E1. The minimum distance between the light emitting layer 104 or the display region 110 and the edge 102E is defined as an edge distance E2, and the edge distance E2 is greater than the edge distance E1. In other words, in the direction Dx, the outermost assisting pattern BSP may be closer to the edge 102E than the display region 110. For example, in the direction (e.g., the direction Dz) perpendicular to the surface of the flexible substrate 102, at least a part of the assisting patterns BSP may be disposed outside the display region 110, or at least a part of the assisting layer BSL may not overlap with the display region 110 or the light emitting layer 104. Referring to FIG. 6, the lower side of the flexible substrate 102 may further include an adhesive layer 122 and a supporting film 124, wherein the flexible substrate 102 may be attached to the surface of the supporting film 124 through the adhesive layer 122, so that the flexible substrate 102, the adhesive layer 122 and the supporting film 124 form a substrate structure. In some embodiments, the flexible substrate 102 and the supporting film 124 may respectively include materials such as polyethylene terephthalate (PET), polyimide (PI) or polyethylene naphthalate (PEN), but not limited herein. A display layer 114 may be disposed on the flexible substrate 102, and the display layer 114 may include a circuit layer 112 and a light emitting layer 104. The circuit layer 112 may include electronic elements such as wires, driving elements, switch elements, reset elements, compensation elements, operation control elements and capacitors, so as to drive the light emitting layer 104 to emit light. For example, the circuit layer 112 includes a plurality of driving elements 132 arranged in a matrix. The driving elements 132 in FIG. 6 are represented by thin film transistors, but are not limited herein. The light emitting layer 104 includes a plurality of light emitting units 106, and each driving element 132 may be electrically connected to a corresponding light emitting unit 106 to drive the corresponding light emitting unit 106. FIG. 6 shows that the driving element 132 may at least partially overlap with the corresponding light emitting unit 106 in the direction perpendicular to the surface of the flexible substrate 102 (the direction Dz), but not limited herein.
The light emitting unit 106 may include any kind of display medium or light emitting element such as organic light-emitting diodes (OLEDs), micro light-emitting diodes (micro-LEDs), mini light-emitting diodes (mini-LEDs), quantum dot LEDs (QLEDs), nano wire LEDs or bar type LEDs, but not limited herein. For example, the light emitting unit 106 includes a first electrode 106a, a second electrode 106c and a display medium layer 106b disposed between the first electrode 106a and the second electrode 106c. For example, the first electrode 106a may be the anode of the light emitting unit 106 and the second electrode 106c may be the cathode of the light emitting unit 106, but not limited herein. The light emitting region of each light emitting unit 106 may be defined by an insulating layer 134 used as a pixel defining layer (PDL). The display medium layer 106b may include one or more layers of emissive materials, and the emissive materials may be organic or inorganic materials. Different light emitting units 106 may emit light of different colors, such as red, green and blue. For example, the display medium layers 106b of different light emitting units 106 may be made of different materials so as to emit red light, green light and blue light respectively. In some embodiments, the display medium layers 106b of different light emitting units 106 may be made of the same material to emit the same light. The first electrode 106a and the second electrode 106c may include metals or transparent conductive materials, but not limited herein. The metal material of the electrodes may include, but is not limited to, magnesium, calcium, aluminum, silver, tungsten, copper, nickel, chromium, combinations of the materials described above, or alloys of one or more of the materials described above. The transparent conductive material may include, for example, indium tin oxide, indium zinc oxide, zinc oxide, indium oxide or combinations of any of the materials described above, but not limited herein. In addition, the surface of the light emitting unit 106 may be covered with an insulating layer 140 as a protecting layer. In some embodiments, the display medium layer 106b may be, for example, a liquid crystal material. In other embodiments, the flexible display device 100 may further include a color filter layer (not shown in the drawings) and a black matrix (not shown in the drawings) disposed on the light emitting unit 106, but not limited herein. In the present embodiment, the driving element 132 may be a top-gate type thin film transistor (TFT), but not limited herein. Bottom-gate type thin film transistors or other suitable electronic elements may be used in other embodiments, and in the flexible display device 100, the structures of the thin film transistors are not limited to only one type. The driving element 132 may include a semiconductor layer 132C, a dielectric layer 1321, a gate 132G, a dielectric layer 136, a drain 132D and a source 132S. The semiconductor layer 132C may be formed of semiconductor materials, such as silicon or metal oxide, but not limited herein. For example, the semiconductor layer 132C may be an amorphous silicon layer, a polysilicon layer or an indium gallium zinc oxide (IGZO) layer. Furthermore, in a driving element 132, the semiconductor layer 132C includes a source contact, a drain contact and a channel disposed between the source contact and the drain contact. The source 132S is electrically connected to the corresponding source contact through an interlayer hole of the dielectric layer 136 and the dielectric layer 1321.
The drain 132D is electrically connected to the corresponding drain contact through another interlayer hole of the dielectric layer 136 and the dielectric layer 1321. The gate 132G is isolated from the semiconductor layer 132C by the dielectric layer 1321, which serves as a gate insulating layer in the driving element 132. The gate 132G, the source 132S and the drain 132D may be formed of conductive materials (e.g., metals), but not limited herein. For the materials suitable for forming the gate 132G, the source 132S and the drain 132D, reference may be made to the materials for forming the first electrode 106a and the second electrode 106c described above. In the present disclosure, a driving element 132 may be electrically connected to the corresponding light emitting unit 106 through the drain 132D to drive the light emitting unit 106. More precisely, the drain 132D may be directly connected to the first electrode 106a of the light emitting unit 106. In addition, the dielectric layer 138 may be disposed between the first electrode 106a of the light emitting unit 106 and the conductive layer forming the source 132S and the drain 132D. Furthermore, a buffer layer 148 may be disposed between the flexible substrate 102 and the display layer 114. The buffer layer 148 may include, for example, an oxide layer, a nitride layer or another suitable insulating layer, but not limited herein. Moreover, an encapsulating layer 142 may be disposed on the display layer 114. The encapsulating layer 142 may provide protection, encapsulation and/or planarization functions for the display layer 114, and the encapsulating layer 142 may include organic materials, inorganic materials, or stacked combinations or mixtures of the above, but not limited herein. For example, the encapsulating layer 142 may be a multi-layer structure including an inorganic layer, an organic layer and an inorganic layer. In some embodiments, the encapsulating layer 142 may be replaced by another flexible substrate (not shown in the drawings), and a color filter layer and/or a black matrix may be disposed on this flexible substrate, but not limited herein. In another aspect, the flexible display device 100 may further provide a touch function, for example, by optionally including a touch layer 120. The conductive layer 116 in the touch layer 120 may be used to form touch elements 116a and/or touch signal lines, and the insulating layer 118 may cover the conductive layer 116. In the direction Dz, the touch elements 116a and the touch signal lines may be arranged so as not to cover the light emitting regions of the light emitting units 106, or at least a part of the touch elements 116a and the touch signal lines may not overlap with the light emitting units 106, but not limited herein. In addition, a polarizing layer 126 may optionally be disposed on the touch layer 120, wherein the polarizing layer 126 for example includes an organic material, and a transparent covering layer 128 may further optionally be disposed on the polarizing layer 126, wherein the transparent covering layer 128 for example includes glass or an organic material, but the present disclosure is not limited to the above. In FIG. 6, the assisting layer BSL is disposed on the display layer 114 and the touch layer 120 and below the polarizing layer 126; that is, the assisting layer BSL is disposed between the touch layer 120 and the polarizing layer 126. In addition, an insulating layer 130 functioning as a planarization layer may be disposed on the assisting layer BSL.
In the direction Dz, each assisting pattern BSP may correspondingly overlap with a touch element 116a, and each assisting pattern BSP may correspondingly overlap with the insulating layer 134 used as the PDL so as to expose the light emitting units 106, thereby increasing the aperture ratio. The distribution density of the assisting patterns BSP may be less than the distribution density of the touch elements 116a, but the present disclosure is not limited herein. For example, in some embodiments, each touch element 116a may respectively correspond to an assisting pattern BSP. In addition, in the structure shown in FIG. 6, the conductive layer used to form the gate 132G may be regarded as the material layer CLL mentioned in FIG. 5, and the wire patterns 108 (shown in FIG. 5) included in the material layer CLL may serve, for example, as scan lines of the display layer 114, but not limited herein. In other embodiments, the wire patterns 108 may also be one of power supply lines, common voltage lines, data lines, signal reference lines and touch signal lines. In this embodiment, the assisting layer BSL may correspond to the first layer LR1 in FIG. 1, the assisting patterns BSP may correspond to the first patterns LP1 in FIG. 1, the pitch Ps may correspond to the first pitch Pt1 in FIG. 1, the material layer CLL may correspond to the second layer LR2 in FIG. 1, the wire patterns 108 may correspond to the second patterns LP2 in FIG. 1, and the pitch Pc may correspond to the second pitch Pt2 in FIG. 1. In addition, the ratio of the first pitch Pt1 to the second pitch Pt2 is greater than or equal to 2 and less than or equal to 200. This design may provide both flexibility and support and may reduce optical ripple interference, so as to provide better display effects. Furthermore, the line width Ws of one of the first patterns is defined as a first width, and the line width Wc of one of the second patterns is defined as a second width. The ratio of the first width Ws to the first pitch Ps is less than the ratio of the second width Wc to the second pitch Pc. In some embodiments, the ratio of the second width to the second pitch is greater than or equal to 0.02 and less than or equal to 0.2, but not limited herein. From the above description, it should be understood that the first layer is disposed on the second layer in this embodiment; that is, the second layer is disposed between the flexible substrate and the first layer. For example, the first pitch Ps may range from 40 μm to 4000 μm, and the second pitch Pc may range from 1 μm to 20 μm.
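To make the pitch and line-width relationships above easier to check at design time, they can be encoded as a simple rule checker. The following is a minimal sketch assuming exactly the constraints stated in this disclosure (first pitch to second pitch between 2 and 200, second width-to-pitch ratio between 0.02 and 0.2, first duty cycle smaller than the second, and the stated pitch ranges); the function name, variable names, and example values are ours, not from the disclosure.

```python
# Minimal design-rule checker for the pitch/width relationships above.
# Units are micrometers; the example values are hypothetical.

def check_design_rules(pt1, ws, pt2, wc):
    """pt1/ws: pitch and line width of the first patterns (assisting layer);
    pt2/wc: pitch and line width of the second patterns (wire layer)."""
    return {
        "2 <= Pt1/Pt2 <= 200":      2 <= pt1 / pt2 <= 200,
        "Ws/Pt1 < Wc/Pt2":          ws / pt1 < wc / pt2,
        "0.02 <= Wc/Pt2 <= 0.2":    0.02 <= wc / pt2 <= 0.2,
        "40 <= Pt1 <= 4000 (um)":   40 <= pt1 <= 4000,
        "1 <= Pt2 <= 20 (um)":      1 <= pt2 <= 20,
    }

# Example: assisting patterns with a 400 um pitch and 10 um line width over
# scan lines with a 10 um pitch and 1 um line width (all hypothetical).
for rule, ok in check_design_rules(pt1=400, ws=10, pt2=10, wc=1).items():
    print(f"{rule}: {'pass' if ok else 'fail'}")
```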
It should be noted that the structure of the flexible display device 100 of the present disclosure is not limited to the above description, and the assisting layer BSL may be disposed at other suitable positions in the sectional structure. In the following description, other embodiments or variant embodiments of the present disclosure will be described. To simplify the illustration, the same films or elements in the following embodiments are represented by the same symbols, and their features will not be described redundantly. The differences between the various embodiments are described in detail below. Please refer to FIG. 7. FIG. 7 is a partial sectional-view schematic diagram of a second embodiment of an electronic device according to the present disclosure. In some embodiments, the assisting patterns BSP of the assisting layer BSL are disposed on the polarizing layer 126 and the display layer 114. In addition, in the structure shown in FIG. 7, the touch layer 120 may not include the insulating layer 118, and the conductive layer 116 forming the touch elements 116a is directly disposed on the encapsulating layer 142. The flexible display device 100 further includes another encapsulating layer 144 directly covering and contacting the touch elements 116a, and the encapsulating layer 144 is disposed between the assisting layer BSL and the touch elements 116a. The encapsulating layer 144 is, for example (but not limited to), an organic material layer with a greater thickness, so that the distance between the assisting layer BSL and the touch layer 120 may be greater in the direction Dz. When the assisting layer BSL includes metal materials, this design may reduce the effect of the assisting layer BSL on the sensing of signals by the touch elements 116a or the transmission of signals by the touch signal lines. The distribution density of the assisting patterns BSP in FIG. 7 is greater than the distribution density of the assisting patterns BSP in FIG. 6, and the pitch Ps in FIG. 7 is less than the pitch Ps in FIG. 6. For example, an assisting pattern BSP may correspond to a touch element 116a or a touch wire, but the present disclosure is not limited herein. For example, the assisting layer BSL, the assisting patterns BSP and the pitch Ps may correspond to the first layer, the first patterns and the first pitch in FIG. 1 or FIG. 2. Furthermore, any kind of wires in the flexible display device 100 that are substantially parallel to the assisting patterns BSP may be regarded as the second patterns in FIG. 1 or FIG. 2, and the material layer forming these wires and the pitch of these wires may correspond to the second layer and the second pitch in FIG. 1 or FIG. 2, wherein the first pitch is different from the second pitch, and the ratio of the first pitch to the second pitch is greater than or equal to 2 and less than or equal to 200. In another example, the assisting layer BSL, the assisting patterns BSP and the pitch Ps may correspond to the second layer, the second patterns and the second pitch in FIG. 1 or FIG. 2. Furthermore, any kind of wires in the flexible display device 100 that are substantially parallel to the assisting patterns BSP may be regarded as the first patterns in FIG. 1 or FIG. 2, and the material layer forming these wires and the pitch of these wires may correspond to the first layer and the first pitch in FIG. 1 or FIG. 2, wherein the first pitch is different from the second pitch, and the ratio of the first pitch to the second pitch is greater than or equal to 2 and less than or equal to 200. For example, when the ratio of the pitch Ps of the assisting patterns to the pitch of the wires formed by the material layer is greater than or equal to 2 and less than or equal to 200, the assisting layer BSL may be regarded as the first layer in FIG. 1 or FIG. 2, and the material layer may be regarded as the second layer in FIG. 1 or FIG. 2. On the other hand, when the ratio of the pitch of the wires formed by the material layer to the pitch Ps is greater than or equal to 2 and less than or equal to 200, the assisting layer BSL may be regarded as the second layer in FIG. 1 or FIG. 2, and the material layer may be regarded as the first layer in FIG. 1 or FIG. 2. Under the conditions described above, the assisting layer BSL may provide the support function mentioned above, which helps the flexible display device 100 be flexed toward a predetermined direction and reduces the stress effects generated during bending in other directions.
If the assisting patterns BSP have a smaller pitch Ps and a larger line width Ws, good support effects may be provided. However, if the pitch Ps is too small and the line width Ws is too large, the overall flexibility of the flexible display device 100 may be reduced. Therefore, the design of the present disclosure makes the pitch of the assisting patterns BSP satisfy the above constraints, so as to provide the desired flexibility and support effects. Please refer to FIG. 8. FIG. 8 is a partial sectional-view schematic diagram of a third embodiment of an electronic device according to the present disclosure. In some embodiments, the assisting patterns BSP of the assisting layer BSL are disposed on the upper surface 1021 of the flexible substrate 102; that is, the assisting layer BSL is disposed between the flexible substrate 102 and the display layer 114. Furthermore, the polarizing layer 126 may directly contact the touch layer 120, for example, being disposed on the upper surface of the insulating layer 118 in the touch layer 120. The assisting layer BSL, the assisting patterns BSP and the pitch Ps may respectively correspond to the first layer, the first patterns and the first pitch in FIG. 1 or FIG. 2, or may respectively correspond to the second layer, the second patterns and the second pitch in FIG. 1 or FIG. 2. Furthermore, a certain material layer CLL in the flexible display device 100 used to form wires may be regarded as the other one in FIG. 1 or FIG. 2 (regarded as the second layer or the first layer), such that the wires formed by the material layer CLL and the pitch of those wires may be regarded as the counterpart in FIG. 1 or FIG. 2 (regarded as the second patterns and the second pitch, or as the first patterns and the first pitch) relative to the assisting patterns BSP and the pitch Ps. In FIG. 8, the conductive layer forming the data lines is regarded as the material layer CLL, wherein the source 132S and the drain 132D are both formed of this conductive layer, but the present disclosure is not limited herein. In the embodiment shown in FIG. 8, the ratio of the first pitch to the second pitch is likewise designed to be greater than or equal to 2 and less than or equal to 200. For example, in this embodiment, when the ratio of the pitch of the wires formed by the material layer CLL to the pitch Ps is greater than or equal to 2 and less than or equal to 200, the assisting layer BSL may be regarded as the second layer in FIG. 1 or FIG. 2, and the material layer may be regarded as the first layer in FIG. 1 or FIG. 2. In other embodiments, the pitch Ps of the assisting patterns BSP may be greater than the pitch of the wires formed by the material layer CLL. That is to say, when the ratio of the pitch Ps of the assisting patterns BSP to the pitch of the wires formed by the material layer CLL is greater than or equal to 2 and less than or equal to 200, the assisting layer BSL may be regarded as the first layer in FIG. 1 or FIG. 2, and the material layer CLL may be regarded as the second layer in FIG. 1 or FIG. 2. Please refer to FIG. 9. FIG. 9 is a partial sectional-view schematic diagram of a fourth embodiment of an electronic device according to the present disclosure. The main difference between the structures shown in FIG. 9 and FIG. 8 is that the assisting layer BSL and the assisting patterns BSP are disposed on the lower side of the flexible substrate 102 and formed on the lower surface 1022 of the flexible substrate 102, and the adhesive layer 122 may directly cover the assisting patterns BSP.
In other words, the assisting layer BSL is disposed between the flexible substrate 102 and the adhesive layer 122. In this design, the assisting layer BSL, the assisting patterns BSP and the pitch of the assisting patterns may respectively correspond to the first layer, the first patterns and the first pitch in FIG. 1 or FIG. 2, or may respectively correspond to the second layer, the second patterns and the second pitch in FIG. 1 or FIG. 2. In addition, for the selection of the other material layer, reference may be made to the descriptions of FIG. 7 and FIG. 8, which will not be repeated. Please refer to FIG. 10. FIG. 10 is a partial sectional-view schematic diagram of a fifth embodiment of an electronic device according to the present disclosure. In the structure shown in FIG. 10, the assisting layer BSL may be used as a grating element. In FIG. 10, the assisting layer BSL is used as a wire grid polarizer (WGP) as an example, which may replace the polarizing layer 126 in the previous embodiment. The assisting layer BSL is disposed on the display layer 114 and the touch layer 120, and the material of the assisting layer BSL may, for example, include molybdenum (Mo), titanium (Ti), tantalum (Ta), niobium (Nb), hafnium (Hf), nickel (Ni), chromium (Cr), cobalt (Co), zirconium (Zr), tungsten (W), aluminum (Al), copper (Cu) and so on, or the alloys or combinations of the materials described above, but not limited herein. An insulating layer 146 may be disposed between the assisting layer BSL and the encapsulating layer 142 and the encapsulating layer 144, wherein the insulating layer 146 may also be a quarter-wavelength phase retarder, but not limited herein. In this embodiment, the assisting patterns BSP have a smaller pitch Ps, for example, smaller than the pitch of the data lines. Furthermore, the assisting layer BSL, the assisting patterns BSP and the pitch Ps may be regarded as the second layer, the second patterns and the second pitch in FIG. 2, and another material layer CLL in the flexible display device 100 may be regarded as the first layer in FIG. 2. For example, the conductive layer used to form the data lines (not labeled) may be regarded as the first layer, the data lines may be regarded as the first patterns, and the pitch of the data lines may be regarded as the first pitch. In this embodiment, the first pitch is greater than the second pitch, and the ratio of the first pitch to the second pitch is greater than or equal to 2 and less than or equal to 200. In this embodiment, the second layer is disposed on the first layer, the line width of the first pattern is regarded as the first width (e.g., the line width of the data line), and the ratio of the first width to the first pitch may be less than the ratio of the second width (the line width Ws) to the second pitch (the pitch Ps). For example, the ratio of the first width to the first pitch is greater than or equal to 0.02 and less than or equal to 0.2, and the ratio of the second width to the second pitch is greater than or equal to 0.3 and less than or equal to 3, for example, ranging from 0.3 to 0.8, but the present disclosure is not limited herein. In an example of the present disclosure, the line width Ws may be 150 nanometers (nm) and the pitch Ps may be 300 nm, so that the ratio of the second width to the second pitch is 0.5. In another example of the present disclosure, the line width Ws may be 80 nm and the pitch Ps may be 200 nm, so that the ratio of the second width to the second pitch is 0.4.
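The two numeric examples above are simple enough to verify directly; the short sketch below computes the width-to-pitch ratio (often called the duty cycle of a grating) for both, and also checks the corresponding pitch ratio against a hypothetical data-line pitch. The data-line pitch is an illustrative assumption, not a value from the disclosure.

```python
# Worked check of the wire grid polarizer examples above (units: nm).
def duty_cycle(line_width, pitch):
    return line_width / pitch

examples = [(150, 300), (80, 200)]  # (Ws, Ps) pairs from the disclosure
for ws, ps in examples:
    print(f"Ws = {ws} nm, Ps = {ps} nm -> Ws/Ps = {duty_cycle(ws, ps):.1f}")

# Hypothetical first patterns: data lines with a 20 um pitch. The ratio of
# the first (data line) pitch to the second (WGP) pitch should fall in [2, 200].
data_line_pitch_nm = 20_000
for _, ps in examples:
    ratio = data_line_pitch_nm / ps
    print(f"pitch ratio = {ratio:.1f} -> {'ok' if 2 <= ratio <= 200 else 'out of range'}")
```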
Please refer to FIG. 11. FIG. 11 is a partial top-view schematic diagram of examples of assisting patterns of an electronic device according to the present disclosure. According to the present disclosure, although the assisting patterns BSP extend along a direction (e.g., the direction Dy in FIG. 1), the practical assisting patterns BSP may have different designs, for example, having unsmooth sides or concave-convex edges. In the example (I), the assisting pattern BSP may have a linear pattern with two substantially smooth sides. In the example (II), the assisting pattern BSP may be formed of a plurality of portions, such as a portion P1 with a shape similar to a rectangle or a square, a portion P2 with a twisted shape and a portion P3 with a long strip shape or a long rectangular shape, and these three kinds of portions are disposed alternately. For example, one portion P2 is disposed between two portions P1, and two portions P1 are disposed between two portions P3. In the example (III), the assisting pattern BSP may be formed of a plurality of portions with different shapes, such as portions P4 and portions P6 each with a triangle-like shape and a portion P5 with a shape similar to a rhombus or an inclined rectangle, wherein the sharp corner of each portion P4 faces to the right and is disposed on the right side of the portion P5, the sharp corner of each portion P6 faces to the left and is disposed on the left side of the portion P5, and the portion P5 is disposed between the portions P4 and the portions P6. In the example (IV), the sharp corner of each portion P4 faces to the right and is disposed on the left side of the portion P5, the sharp corner of each portion P6 faces to the left and is disposed on the right side of the portion P5, and the portion P5 is disposed between the portions P4 and the portions P6. In the example (V), the assisting pattern BSP may be formed of a plurality of portions with different shapes, such as a portion P7 with a circular pattern, a portion P8 with an elliptical shape and a portion P9 with a long elliptical shape, wherein the portion P7 may be disposed between two or more adjacent portions P8, and the portions P8 may be disposed between two or more adjacent portions P9. The assisting patterns BSP of the present disclosure are not limited to those shown in FIG. 11, and any suitable pattern designs may be applied to the assisting patterns BSP of the present disclosure. It should be noted that the pitch, the line width and the line spacing of the assisting patterns BSP of the present disclosure may be designed according to the requirements. For example, in an electronic device, the assisting patterns BSP may have the same pitch, while the line width and/or the line spacing of each assisting pattern BSP are not completely the same. In another embodiment, the pitch, the line width and/or the line spacing of each assisting pattern BSP may not be completely the same.
Please refer to FIG. 12. FIG. 12 is a partial sectional-view schematic diagram of a first variant embodiment of an assisting layer of an electronic device according to the present disclosure. FIG. 12 illustrates a cross-sectional shape of an assisting pattern BSP of the assisting layer BSL, wherein the assisting pattern BSP is disposed on the surface of the substrate 102′, and the substrate 102′ in FIG. 12 may be understood to include the flexible substrate 102 in FIG. 1 to FIG. 10 and any other films on the surface of the flexible substrate. For example, the substrate 102′ may optionally include the flexible substrate 102, the circuit layer 112, the light emitting layer 104, the encapsulating layer 142, the encapsulating layer 144 and the insulating layer 146, but not limited herein. According to the present disclosure, the assisting layer BSL may be a composite structure. For example, the assisting layer BSL (or the assisting patterns BSP) may be a double-layer structure or a multi-layer structure including a first sub-assisting layer BSP1 and a second sub-assisting layer BSP2 sequentially disposed on the surface of the substrate 102′, wherein the first sub-assisting layer BSP1 has a thickness TH1, and the second sub-assisting layer BSP2 has a thickness TH2. The thickness TH2 may be different from the thickness TH1; for example, the thickness TH2 is greater than the thickness TH1. In some embodiments, the ratio of the total thickness Hp of the assisting layer BSL to the maximum line width Wp of the composite structure may be greater than or equal to 0.2 and less than or equal to 2. For example, the material of the first sub-assisting layer BSP1 may include a material with better adhesion, such as titanium or molybdenum, and the material of the second sub-assisting layer BSP2 may include a material with better extensibility, such as aluminum or copper. Please refer to FIG. 13. FIG. 13 is a partial sectional-view schematic diagram of a second variant embodiment of an assisting layer of an electronic device according to the present disclosure. In FIG. 13, the assisting layer BSL (or the assisting pattern BSP) may include a three-layer structure. For example, the assisting layer BSL (or the assisting pattern BSP) may include a first sub-assisting layer BSP1, a second sub-assisting layer BSP2 and a third sub-assisting layer BSP3 sequentially disposed on the substrate 102′. A thickness TH2 of the second sub-assisting layer BSP2 may be greater than a thickness TH1 of the first sub-assisting layer BSP1, and/or the thickness TH2 of the second sub-assisting layer BSP2 may be greater than a thickness TH3 of the third sub-assisting layer BSP3. For example, the materials of the first sub-assisting layer BSP1 and the third sub-assisting layer BSP3 may include materials with better adhesion, such as titanium or molybdenum, and the material of the second sub-assisting layer BSP2 may include a material with better extensibility, such as aluminum or copper, but not limited herein.
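The thickness-to-width constraint above (total thickness Hp over maximum line width Wp between 0.2 and 2) can be checked directly from a stacked pattern definition. The following is a minimal sketch assuming a Ti/Al/Ti stack; the layer names, thicknesses, and line width are illustrative assumptions, not values from the disclosure.

```python
# Minimal sketch: aspect-ratio check for a composite assisting pattern.
# Each entry is (layer name, thickness in micrometers); values are hypothetical.
stack = [
    ("BSP1 (Ti, adhesion)",    0.05),
    ("BSP2 (Al, extensible)",  0.40),
    ("BSP3 (Ti, adhesion)",    0.05),
]
max_line_width_um = 1.0  # Wp

hp = sum(t for _, t in stack)  # total thickness Hp of the composite structure
ratio = hp / max_line_width_um
print(f"Hp = {hp:.2f} um, Wp = {max_line_width_um} um, Hp/Wp = {ratio:.2f}")
assert 0.2 <= ratio <= 2, "Hp/Wp should stay within [0.2, 2]"
```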
Please refer to FIG. 14. FIG. 14 is a partial sectional-view schematic diagram of a third variant embodiment of an assisting layer of an electronic device according to the present disclosure. The structure of this variant embodiment is similar to the structure of FIG. 13. The assisting layer BSL (or the assisting pattern BSP) may include a first sub-assisting layer BSP1, a second sub-assisting layer BSP2 and a third sub-assisting layer BSP3 sequentially disposed on the substrate 102′, but the difference is that the assisting layer BSL may include an undulating surface BSa with higher and lower portions. For example, the surface of the second sub-assisting layer BSP2 may have cavities BSb, so the third sub-assisting layer BSP3 disposed on the second sub-assisting layer BSP2 forms an uneven surface BSa. In other words, the three-layer composite structure of the assisting layer BSL includes at least two thicknesses, such as a thickness Hp1 and a thickness Hp2, wherein the ratio of the maximum thickness Hp1 to the maximum line width Wp of the assisting pattern BSP may be greater than or equal to 0.2 and less than or equal to 2, but the present disclosure is not limited herein. In some embodiments, the assisting layer BSL may also be, for example, a multilayer structure including two or more thicknesses, but not limited herein. From the above description, the flexible display device of the present disclosure includes at least two material layers: one of the material layers may be designed as an assisting layer including a plurality of assisting patterns extending along a direction (e.g., the direction Dy), and the other material layer may be any wire material layer in the display layer or the touch layer of the flexible display device, wherein the wire material layer includes a plurality of repeated wires generally parallel to the assisting patterns, and the pitch and the line width of the assisting patterns have a specific relationship with the pitch and the line width of the wires of the wire material layer. The two material layers described above may be regarded as the first layer or the second layer in FIG. 1 or FIG. 2 according to the design requirements, and the ratio of the first pitch to the second pitch is greater than or equal to 2 and less than or equal to 200. This design may reduce optical ripple interference, enable the assisting patterns to achieve the function of assisting the flexure of the display device, and provide better display effects. Please refer to FIG. 15. FIG. 15 is a top-view schematic diagram of an assisting layer of another embodiment of an electronic device according to the present disclosure. In the embodiment shown in FIG. 15, the assisting layer BSL may be integrated into the touch layer 120; that is, the touch elements 116a (represented by thin lines), the touch elements 116a′ (represented by thick lines) or wires in the touch layer 120 may be used as the assisting patterns BSP. For example, the touch layer 120 may include the touch elements 116a and the touch elements 116a′ extending along the direction Dy and the touch elements 116b extending along the direction Dx, wherein the touch elements 116a and the touch elements 116a′ have a pitch Ptx in the direction Dx, and the touch elements 116b have a pitch Pty in the direction Dy. According to the design of the present disclosure, the pitch Ptx is different from the pitch Pty. For example, the pitch Pty is greater than the pitch Ptx, or, for example (but not limited to), the pitch Pty is twice the pitch Ptx. That is to say, the distribution density of the touch elements 116a and the touch elements 116a′ is greater than the distribution density of the touch elements 116b.
Since the pitch Ptx is different from the pitch Pty, when the flexible display device 100 is flexed, the flexure effect of the touch elements 116a and the touch elements 116a′ on the flexible display device 100 in the direction Dy is different from the flexure effect of the touch elements 116b on the flexible display device 100 in the direction Dx. For example, since the pitch Ptx is smaller, it is helpful to make the flexible display device 100 bend in a direction perpendicular to the extending direction of the touch elements 116a′ (e.g., the direction Dy), for example, being bent in the direction Dx perpendicular to the direction Dy, and the bent portion may define a flexing axis FX, which is generally parallel to the extending direction of the touch elements 116a′ (e.g., the direction Dy). The flexible display device 100 may be flexed or bent by taking the flexing axis FX as the axis center. More precisely, the number of the touch elements 116a having the same distance and the number of the touch elements 116b having the same distance are generally the same, so the effect of the touch elements 116a on flexure stress may offset the effect of the touch elements 116b on flexure stress, and neither the touch elements 116a nor the touch elements 116b may provide the function of assisting flexure; however, the effect of the touch elements 116b on flexure stress may not offset the effect of the touch elements 116a′ on flexure stress. Therefore, the touch elements 116a′ may be regarded as the assisting patterns BSP having a pitch Ps, and the touch layer 120 may be regarded as the assisting layer BSL of the present disclosure. In brief, when a material layer manufactured by the same process has patterns extending in different directions, the patterns in the two directions may be designed to have different pitches. For example, the patterns parallel to the flexing axis FX have a smaller pitch or a larger average distribution density, so that these patterns may provide the function of the assisting patterns BSP of the present disclosure. In other words, the principle described above may also be applied to two material layers manufactured by different processes, so as to make the pattern density of the material layer parallel to the flexing axis FX larger, and this material layer may be used as the assisting layer BSL. It should be noted that, although the linear patterns of the touch layer 120 are referred to as the touch elements 116a, the touch elements 116a′ and the touch elements 116b, the linear patterns may also be wires in the touch layer 120 for electrically connecting the touch elements in different embodiments. According to the present disclosure, an electronic device including a flexible display device may have an assisting layer, and the assisting layer may include a plurality of assisting patterns, which helps the flexible display device be flexed toward a predetermined direction and reduces the adverse stress effects in other directions. The pitch of the assisting patterns and the pitch of other patterns (e.g., the scan lines or the data lines) in the display device that are generally parallel to the assisting patterns differ by at least a factor of two; that is, the ratio of the pitch of the assisting patterns to the pitch of the other patterns described above is greater than or equal to 2 and less than or equal to 200. This design may reduce optical ripple interference and provide better display effects.
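As a rough illustration of how the pitch asymmetry selects the flexing axis, the sketch below compares the pattern densities in the two in-plane directions and reports which axis the denser (smaller-pitch) patterns run parallel to. This is a simplified model of the qualitative rule stated above, with hypothetical pitches; it ignores the stress-offset bookkeeping between the equal-density groups of patterns.

```python
# Simplified sketch: pick the preferred flexing axis from pattern pitches.
# Patterns parallel to the flexing axis should have the smaller pitch,
# i.e., the larger distribution density. Pitches are hypothetical (um).
ptx = 100  # pitch, measured along Dx, of patterns extending along Dy
pty = 200  # pitch, measured along Dy, of patterns extending along Dx

density_dy_patterns = 1 / ptx  # density of patterns extending along Dy
density_dx_patterns = 1 / pty  # density of patterns extending along Dx

if density_dy_patterns > density_dx_patterns:
    print("Denser patterns extend along Dy -> flexing axis FX parallel to Dy")
else:
    print("Denser patterns extend along Dx -> flexing axis FX parallel to Dx")
```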
Those skilled in the art will readily observe that numerous modifications and alterations of the device and method may be made while retaining the teachings of the disclosure. Accordingly, the above disclosure should be construed as limited only by the metes and bounds of the appended claims.
11862050 | DESCRIPTION OF THE EMBODIMENTS Multiple implementations of the disclosure are disclosed below with figures. Many practical details are explained in the following statements for clarification. However, it should be understood that these practical details should not be used to limit the disclosure; that is, the practical details are unnecessary in some implementations of the disclosure. In addition, in order to simplify the figures, some conventional structures and elements are omitted or simply illustrated in the figures. Identical reference numerals represent identical or similar elements throughout the specification. In the accompanying drawings, the thicknesses of layers, films, panels, regions, and so on are enlarged for clarity. It should be understood that when an element such as a layer, a film, a region, or a substrate is referred to as being “on” or “connected to” another element, it may be directly on or connected to the other element, or there may be other elements between the two. In contrast, when an element is referred to as being “directly on” or “directly connected to” another element, there are no other elements between the two. As used herein, “connected” may mean being physically and/or electrically connected. Furthermore, two elements being “electrically connected” or “coupled” may mean that there are other elements between the two elements. The terms used herein are only intended to describe specific embodiments of the disclosure and are not intended to limit the disclosure. For example, “a/an,” “one,” and “the” used herein are not intended to limit an element to a singular or plural form. The term “or” used herein indicates “and/or.” As used herein, the term “and/or” includes any and all combinations of one or more of the associated items listed. It should also be understood that, when used in the specification, the term “include” or “comprise” specifies the existence of the feature, region, whole, step, operation, element and/or member, but does not exclude the existence or addition of one or more other features, regions, wholes, steps, operations, elements, members and/or combinations thereof. In addition, relative terms such as “lower” or “bottom” and “upper” or “top” may be used herein to describe the relationship between one element and another, as shown in the figures. It should be understood that the relative terms are intended to include different orientations of a device in addition to the orientations shown in the figures. For example, if a device in a figure is flipped, an element described as being on the “lower” side of another element is then oriented to the “upper” side of that element. Therefore, the exemplary term “lower” may include both “lower” and “upper” orientations, depending on the specific orientation of the figure. Similarly, if a device in a figure is flipped, an element described as being “below” another element is then oriented “above” it. Therefore, the exemplary term “above” or “below” may include both up and down orientations. The term “about” or “substantially” used herein includes the stated value and an average value within an acceptable deviation range of the specific value as determined by a person of ordinary skill in the art, taking into account the measurement in question and a specific number of measurement-related errors (i.e., the limitations of the measuring system).
For example, the term “about” may mean being within one or more standard deviations of the stated value, or within, for example, ±30%, ±20%, ±10%, or ±5% of it. Moreover, the term “about” or “substantially” used herein may mean selecting a more acceptable deviation range or standard deviation according to measurement properties, cutting properties or other properties, without applying a single standard deviation to all properties. Unless otherwise defined, all the terms used herein (including technical and scientific terms) have the same meaning as is commonly understood by a person of ordinary skill in the art. It should further be understood that terms such as those defined in commonly used dictionaries shall be interpreted as having meanings consistent with their meanings in the related art and the context of the disclosure and shall not be interpreted as having an idealized or overly formal meaning, unless explicitly so defined herein. FIG. 1A to FIG. 1D are partial cross-sectional views of a flexible display device at different stages of a manufacturing process according to an embodiment of the disclosure. In the present embodiment, a manufacturing method of a flexible display device 100 may include the following steps. Referring to FIG. 1A, a display panel 120 is disposed on a base film 110. It should be noted that an area of the base film 110 in FIG. 1A may be substantially identical to an area of the display panel 120. Nevertheless, the disclosure is not limited thereto. Before a subsequent laser cutting process is performed, the area of the base film 110 may be different from that of the display panel 120. In the present embodiment, the base film 110 and the display panel 120 may be flexible. For example, a material of the base film 110 may be a plastic film, a metal film, or a combination thereof. The plastic film may have a support function, and the metal film may improve the flatness of the base film 110. The plastic film is, for example, a plastic material such as polyethylene terephthalate (PET), polyimide (PI), or polyethylene naphthalate (PEN), or another flexible polymer, but the disclosure is not limited thereto. The metal film is, for example, a stainless steel foil, a copper foil, or an aluminum foil, but the disclosure is not limited thereto. The display panel 120 may be composed of a flexible substrate, a plurality of elements formed on the flexible substrate, a color filter, and a polarizer, but the disclosure is not limited thereto. The flexible substrate is, for example, a glass substrate, a plastic substrate, a metallic soft substrate, or a multi-layer composite substrate composed of the above materials. The plurality of elements may include a driving element that may be, for example, a plurality of thin film transistors (TFTs), a passive element, a touch element, or a corresponding wire (e.g., a scan line, a data line, or another similar signal line), and a light-emitting element such as an organic light emitting diode (OLED) with a thin film encapsulation (TFE), a micro LED, or a mini LED. The color filter includes a flexible transparent substrate, a black matrix, an RGB color layer, etc. The polarizer is, for example, a circular polarizer. Referring to FIG. 1B, a protective film 130 is disposed on a surface 120a of the display panel 120 away from the base film 110, where an area of the protective film 130 is less than that of the display panel 120.
In other words, the display panel 120 is located between the base film 110 and the protective film 130, and the protective film 130 is indented towards the display panel 120 and exposes part of the display panel 120 to form a region G not covered by the protective film 130 between a side edge 120s of the display panel 120 and a side edge 130s of the protective film 130. In this way, the protective film 130 is ensured not to be cut during subsequent cutting of the display panel 120. Further, the formation of micro cracks at the side edge 130s of the protective film 130 due to the cutting, and the propagation of such micro cracks under repeated bending, are prevented. Moreover, in the case that the protective film 130 is expensive, the region G not covered by the protective film 130 serves as a cutting tolerance range, so that manufacturing costs arising from losses of the protective film 130, the display panel 120, and the base film 110 caused by the cutting may be further reduced. In the present embodiment, in order to effectively protect the display panel 120, a thickness d1 of the protective film 130 may be greater than a thickness d2 of the display panel 120, and the surface hardness of the protective film 130 may be greater than the surface hardness of the display panel 120 and the surface hardness of the base film 110, but the disclosure is not limited thereto. Herein, the thickness direction is a stack direction R in which the base film 110, the display panel 120, and the protective film 130 are sequentially stacked. A method for disposing the display panel 120 and the protective film 130 is not limited in the disclosure and may be determined according to actual design needs. Referring to FIG. 1C, an adhesive layer 140 is formed on the surface 120a of the display panel 120 away from the base film 110, and the adhesive layer 140 and the protective film 130 include an overlapping portion P. For example, the adhesive layer 140 and the protective film 130 include the overlapping portion P in the stack direction R. In other words, the overlapping portion P between the adhesive layer 140 and the protective film 130 covers part of a top surface 130a of the protective film 130, and the other part of the top surface 130a of the protective film 130 is exposed. The adhesive layer 140 may extend from the side edge 130s of the protective film 130 to the top surface 130a of the protective film 130. In this way, the side edge 130s of the protective film 130 may be well protected, and the edge may not be damaged by a collision. The adhesive layer 140 is formed by, for example, coating. In the present embodiment, a thickness d3 of the adhesive layer 140 is between 10 μm and 500 μm, and a Young's modulus of the adhesive layer 140 is between 0.1 GPa and 10 GPa. Therefore, through the parameter setting of the thickness and the Young's modulus of the adhesive layer 140, the protection of the wiring lines in the region G not covered by the protective film 130 of the flexible display device 100 is enhanced, and the position of the neutral axis in the region G not covered by the protective film 130 of the flexible display device 100 may also be dynamically adjusted. In this way, a wire line on an edge of the display panel 120 may not be fractured under repeated bending, adverse effects of a bending stress on the flexible display device 100 may be further reduced, and the service life of the flexible display device 100 is thereby prolonged. For example, the parameter setting of the thickness and the Young's modulus of the adhesive layer 140 may be adjusted according to the overall stiffness of the display panel 120 and the base film 110.
The neutral axis of the flexible display device 100 may thereby be located in a region of the display panel 120 closer to the driving element and the light-emitting element, so as to prevent the driving element and the light-emitting element from being damaged by an excessive bending stress. As such, adverse effects of the bending stress on the flexible display device 100 are reduced, and the service life of the flexible display device 100 is prolonged. Herein, the stiffness is the product of the Young's modulus and the thickness. Further, if the display panel 120 is composed of a flexible lower substrate, a driving element, a light-emitting element, a touch element, and a color filter (including a flexible transparent upper substrate), the stiffness of the flexible lower substrate in the display panel 120 plus the stiffness of the base film 110 may be approximately equal to the stiffness of the flexible transparent upper substrate in the display panel 120 plus the stiffness of the adhesive layer 140. The stiffness of the flexible lower substrate in the display panel 120 may be designed to be the same as that of the flexible transparent upper substrate, so that the driving element and the light-emitting element are closer to the neutral axis of the display panel 120. Moreover, the thickness d3 of the adhesive layer 140 may be approximately greater than or equal to a thickness d4 of the base film 110. Therefore, the Young's modulus of the adhesive layer 140 may be less than or equal to that of the base film 110, so that the neutral axis of the flexible display device 100 may fall within a region close to the region provided with the plurality of elements (such as the driving element and the light-emitting element) in the display panel 120. Adverse effects (e.g., a fracture of a wire line in the region due to bending) of the bending stress on the elements are reduced, and the service life of the flexible display device 100 is prolonged. In an embodiment, if the base film is a composite material, for example, PET and a stainless steel foil bonded by an adhesive, the position of the neutral axis may not be calculated by linear addition and subtraction of stiffness; the neutral axis then needs to be calculated by simulation, so the design is still required to be made according to actual needs.
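To illustrate the stiffness bookkeeping above, the following sketch estimates the neutral-axis position of a layered stack using the standard composite-beam approximation: the neutral axis sits at the stiffness-weighted centroid of the layers, with each layer's stiffness taken as Young's modulus times thickness. The layer list and all numeric values are hypothetical examples, not values from the disclosure, and, as noted above, a bonded composite base film would instead call for simulation.

```python
# Minimal sketch: neutral-axis estimate for a layered flexible stack.
# Standard composite-beam approximation for pure bending:
#   z_na = sum(E_i * t_i * z_i) / sum(E_i * t_i),
# with z_i the mid-plane height of layer i. Layers are listed bottom-up;
# all names and values are hypothetical (thickness in um, modulus in GPa).
layers = [
    ("base film 110 (PET)",       50.0,  4.0),
    ("lower substrate (PI)",      10.0,  8.0),
    ("element layer (TFT/OLED)",   5.0, 80.0),
    ("upper substrate (PI)",      10.0,  8.0),
    ("adhesive layer 140",       100.0,  0.5),
]

z = 0.0
weighted_sum = 0.0
total_stiffness = 0.0
for name, t, e in layers:
    mid = z + t / 2            # mid-plane height of this layer
    weighted_sum += e * t * mid
    total_stiffness += e * t   # per-layer stiffness = modulus * thickness
    z += t

z_na = weighted_sum / total_stiffness
print(f"estimated neutral axis at {z_na:.1f} um above the stack bottom")

# Sanity check against the adhesive-layer parameter ranges stated above.
adhesive_t_um, adhesive_e_gpa = 100.0, 0.5
assert 10 <= adhesive_t_um <= 500 and 0.1 <= adhesive_e_gpa <= 10
```

With these particular values the estimate lands near the element layer, which is the design intent described above: the neutral axis should fall close to the driving and light-emitting elements so that they see little bending stress.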
In the present embodiment, the overlapping portion P may include the side edge 130s of the protective film 130. In other words, the overlapping portion P may extend from the side edge 130s of the protective film 130 towards the middle. A width W of the overlapping portion P may be between 10 μm and 1,000 μm, and a distance L between the side edge 130s of the protective film 130 and the side edge 140s of the adhesive layer 140 may be between 100 μm and 1,000 μm. With the above parameter settings, the protective film 130 is not cut when the display panel 120 and the base film 110 are cut. The side edge 130s of the protective film 130 is thereby effectively protected, and the formation of micro cracks on the protective film 130 due to lateral impact, and the propagation of such micro cracks under repeated bending, are prevented. Besides, in addition to effectively adjusting the position of the neutral axis of the flexible display device 100 and further reducing adverse effects of the bending stress on the flexible display device 100, the adhesive layer 140 may also protect the wire lines in the region G not covered by the protective film 130 of the flexible display device 100. In particular, the flexible display device 100 may also be designed to have a narrow border, in line with the future trend towards a high screen-to-body ratio with a narrow border and a large screen. The border may be the region defined by the distance L. Referring to FIG. 1D, after the adhesive layer 140 is formed, a laser cutting process is performed to make a side edge 110s of the base film 110, a side edge 120s of the display panel 120, and a side edge 140s of the adhesive layer 140 substantially aligned. In this way, dust is prevented from accumulating on the region G shown in FIG. 1B and damaging the display panel 120. In other words, after the laser cutting process is performed, the area of the base film 110 may be substantially the same as that of the display panel 120. In the present embodiment, the top surface 130a of the protective film 130 acts as a boundary: after the laser cutting process is performed, the adhesive layer 140 may be divided into a vertical portion 1401 and a horizontal portion 1402. The vertical portion 1401 is close to the display panel 120, and the vertical portion 1401 is connected to the display panel 120 and the horizontal portion 1402. The vertical portion 1401 and the horizontal portion 1402 have different thicknesses and widths. As shown in FIG. 1D, the width of the vertical portion 1401 may be equal to the distance L, and the thickness of the vertical portion 1401 may be equal to the thickness d1 of the protective film 130. The width of the horizontal portion 1402 may be equal to the sum of the distance L and the width W of the overlapping portion P, and the thickness of the horizontal portion 1402 may be equal to the difference between the thickness d3 of the adhesive layer 140 and the thickness d1 of the protective film 130. In an embodiment, the adhesive layer 140 may be a black adhesive layer. In this way, a user is prevented from tearing off the protective film 130 by mistake, the protective film 130 is not stripped from the display panel 120, and the service life of the flexible display device 100 is thereby prolonged. On the other hand, the protective film 130 may include an inorganic material layer. The inorganic material layer may include thin glass. For example, the thin glass may be reinforced thin glass, but the disclosure is not limited thereto. In other embodiments, the protective film 130 may be implemented differently. It should be noted herein that the following embodiment follows the element reference numerals and some content of the embodiment of FIG. 1A to FIG. 1D, where identical or similar reference numerals are used to represent identical or similar elements, and the description of identical technical content is omitted. Reference can be made to the above embodiment for the omitted content, which is not repeated in the following embodiment. FIG. 2A is a partial top view of a flexible display device according to another embodiment of the disclosure. FIG. 2B is a partial cross-sectional view along a line A-A′ in FIG. 2A. FIG. 2C is a partial cross-sectional view along a line B-B′ in FIG. 2A. Referring to FIG. 2A to FIG. 2C together, a flexible display device 200 of the present embodiment includes a bendable region 10 and a non-bending region 20 adjacent to the bendable region 10, and the thickness d3 of the adhesive layer 140 in the bendable region 10 is different from a thickness d5 of the adhesive layer 142 in the non-bending region 20. The thickness d3 of the adhesive layer 140 in the bendable region 10 may be less than the thickness d5 of the adhesive layer 142 in the non-bending region 20.
Since the thickness d3 of the adhesive layer 140 in the bendable region 10 is smaller, a larger bending space may be provided for the flexible display device 200. Since the thickness d5 of the adhesive layer 142 in the non-bending region 20 is larger, the flexible display device 200 may be well protected. FIG. 3A to FIG. 3D are partial cross-sectional views of a flexible display device at different stages of a manufacturing process according to yet another embodiment of the disclosure. Referring to FIG. 3A to FIG. 3D together, the difference between a flexible display device 300 of the present embodiment and the flexible display device 100 in the embodiment of FIG. 1A to FIG. 1D lies in that the protective film 330 may be a combination of a plastic material layer 332 and a hard coating layer 334. The plastic material layer 332 may be a transparent plastic material with light penetrability. For example, a material of the plastic material layer 332 may be transparent polyimide (CPI), PET, PEN, or polymethyl methacrylate (PMMA), but the disclosure is not limited thereto. A material of the hard coating layer 334 is, for example, polymerized siloxanes or polysiloxanes, which may include organic and inorganic polymers. In an embodiment, the chemical formula is [—R2SiO—]n, where R is an organic functional group such as methyl or phenyl. These materials are composed of inorganic siloxane bond skeletons ( . . . —Si—O—Si—O—Si—O— . . . ) and branched organic groups bound to the silicon atoms by covalent bonds. For example, by controlling the lengths of the skeletons, the types of the organic groups, and the crosslinking of the skeletons, polymerized siloxanes or polysiloxanes with different properties of organic and inorganic mixtures may be obtained. The higher the proportion of inorganic materials and the more the crosslinking of the skeletons, the higher the hardness. It should be noted that, in embodiments that are not shown, the protective film may be a combination of an inorganic material layer (e.g., thin glass) with a combination of a plastic material layer (e.g., PI) and a hard coating (e.g., polymerized siloxanes or polysiloxanes). In other words, the protective film may include an inorganic material layer (e.g., thin glass), a plastic material layer (e.g., PI), and a hard coating (e.g., polymerized siloxanes or polysiloxanes) sequentially stacked on the display panel 120. FIG. 4A to FIG. 4C are partial cross-sectional views of a flexible display device at different stages of a manufacturing process according to still another embodiment of the disclosure. Referring to FIG. 4A to FIG. 4C together, the difference between a flexible display device 400 of the present embodiment and the flexible display device 300 in the embodiment of FIG. 3A to FIG. 3D lies in that an optical structure layer 450 is formed on the surface 120a of the display panel 120 away from the base film 110 before the protective film 330 is formed. In other words, the optical structure layer 450 is located between the display panel 120 and the protective film 330. An area of the protective film 330 may be less than an area of the optical structure layer 450, and the area of the optical structure layer 450 may be less than the area of the display panel 120. In other words, the optical structure layer 450 is indented towards the display panel 120, and the protective film 330 is indented towards the optical structure layer 450.
On the other hand, an adhesive layer 440 of the flexible display device 400 may cover a side edge 450s of the optical structure layer 450 and a side edge 330s of the protective film 330 (i.e., a side edge 332s of the plastic material layer 332 and a side edge 334s of the hard coating layer 334). Based on the above, in the disclosure, thanks to the overlapping portion provided between the adhesive layer and the protective film and the parameter setting of the thickness and the Young's modulus of the adhesive layer, the position of the neutral axis of the flexible display device may be dynamically adjusted. Therefore, the formation and lengthening of cracks at the edge due to repeated bending are prevented, adverse effects of the bending stress on the flexible display device are reduced, and the service life of the flexible display device is prolonged.
11862051 | DETAILED DESCRIPTION The technical solutions in the embodiments of the present application are clarified and completely described below with reference to the accompanying drawings. Obviously, the described embodiments are only some of the embodiments of the present application rather than all of them. Based on the embodiments of the present application, all other embodiments obtained by a person skilled in the art without creative efforts fall within the protected scope of the present application. In the description of the present application, it should be explained that the orientations or positional relationships indicated by the terms “center”, “portrait”, “transverse”, “length”, “width”, “thickness”, “upper”, “lower”, “front”, “back”, “left”, “right”, “vertical”, “horizontal”, “top”, “bottom”, “inside”, “outside”, etc. are based on the drawings. The orientation or positional relationship is only for the convenience of describing the present application and simplifying the description, and does not indicate or imply that the device or element referred to must have a specific orientation or be constructed and operated in a specific orientation, and should not be viewed as limiting the present application. In addition, the terms “first” and “second” are used for descriptive purposes only and cannot be understood as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Therefore, the features defined as “first” and “second” may explicitly or implicitly include one or more of the features. In the description of the present application, the meaning of “multiple” is two or more, unless specifically defined otherwise. In the prior art, a light-emitting diode (LED) display screen or an LED box body is formed by a plurality of LED unit boards that are spliced together. Generally, LED unit boards of rectangular shape are adopted, and, for the convenience of mounting, a certain splicing space is usually reserved on the edges of the mounting portions of the display surfaces of the LED unit boards. Therefore, when the LED display screen or the LED box body formed through splicing displays video images, the splicing lines are very obvious, which easily affects the display effect. Therefore, in order to solve the above-mentioned problems, the present application proposes a display panel and a light board. The following describes the present application in detail with reference to the accompanying drawings and implementations. Referring to FIGS. 1-2, FIG. 1 is a first structural schematic view of a display panel provided by an embodiment of the present application, and FIG. 2 is a structural schematic view of a light board in the display panel provided in FIG. 1. An embodiment of the present application provides a display panel 100. The display panel 100 includes a back board 10 and a plurality of light boards 20. Each light board 20 is disposed on the back board 10, and each light board 20 includes a plurality of light beads 230 arranged in an array, and a first substrate 210 and a second substrate 220 disposed in a stack. The plurality of light beads 230 are disposed on a side of the first substrate 210, and the second substrate 220 is disposed on a side of the first substrate 210 away from the plurality of light beads 230.
The first substrate 210 includes a first portion 211 that extends beyond the second substrate 220, the second substrate 220 includes a second portion 221 that extends beyond the first substrate 210, and the first portion 211 of one of the plurality of light boards 20 splices with the second portion 221 of an adjacent one of the plurality of light boards 20. The distance between the two adjacent, outermost columns of light beads 230 of the two light boards 20 is within a preset range, and the two columns of light beads 230 are flush with each other in the splicing direction. Through overlapping two adjacent light boards 20 by means of the first portion 211 and the second portion 221, the plurality of light boards 20 can be spliced together; and since the distance between the two adjacent outermost columns of light beads 230 after splicing is within the preset range, the splicing seams between adjacent spliced light boards 20 can be blocked by the light beads 230, so that the splicing seams do not appear after the plurality of light boards 20 are spliced, which can reduce the effect of the splicing seams on the display panel 100, prevent the appearance of shadows at the splicing seams, and improve the display effect. It should be noted that the thickness of the first substrate 210 and the thickness of the second substrate 220 are the same, so that multiple groups of light boards 20 lie in the same horizontal plane after splicing, thereby preventing the problem of discontinuous layers of the plurality of light boards 20 after splicing. The back board 10 includes a bottom wall 110 and a side wall arranged around the bottom wall 110. The side wall includes a first side wall 130 and a second side wall 120 that are oppositely arranged. A first protrusion 140 is disposed on an end of the bottom wall 110 adjacent to the first side wall 130, a second protrusion 150 is disposed on the second side wall 120, and the second protrusion 150 is spaced from the bottom wall 110 to form a slot. A light-emitting region 212 of the light board 20 includes a first column of light beads 2121, a first row of light beads 2122, a second column of light beads 2123, and a second row of light beads 2124 arranged in sequence. The first substrate 210 includes a fifth side, a sixth side, a seventh side, and an eighth side arranged in sequence. The first column of light beads 2121 is disposed corresponding to the fifth side, and a first edge region 2111 is formed between the first column of light beads 2121 and the fifth side. The first row of light beads 2122 is disposed corresponding to the sixth side, and a second edge region 2112 is formed between the first row of light beads 2122 and the sixth side. That is to say, the first portion 211 includes the first edge region 2111 adjacent to the first column of light beads 2121 and the second edge region 2112 adjacent to the first row of light beads 2122. The second portion 221 includes a third edge region 2211 adjacent to the second column of light beads 2123 and a fourth edge region 2212 adjacent to the second row of light beads 2124. In the first direction or the second direction of any light board 20, a first distance between the first column of light beads 2121 and the fifth side is less than or equal to the distance between two adjacent light beads 230. That is to say, the first distance between the outer edge of the first edge region 2111 corresponding to the first column of light beads 2121 and the first column of light beads 2121 is less than or equal to the distance between two adjacent light beads 230.
A second distance between the first row of light beads 2122 and the sixth side is less than or equal to the distance between two adjacent light beads 230. That is to say, the second distance between the first row of light beads 2122 and the outer edge of the second edge region 2112 corresponding to the first row of light beads 2122 is less than or equal to the distance between two adjacent light beads 230. A third distance between the second column of light beads 2123 and the outer edge of the third edge region 2211 corresponding to the second column of light beads 2123 is less than or equal to the distance between two adjacent light beads 230. A fourth distance between the second row of light beads 2124 and the outer edge of the fourth edge region 2212 corresponding to the second row of light beads 2124 is less than or equal to the distance between two adjacent light beads 230.

It should be noted that, by setting the first distance, the second distance, the third distance, and the fourth distance to be less than or equal to the distance between two adjacent light beads 230, after adjacent light boards 20 are spliced in the splicing regions, the distance between the two adjacent columns of light beads 230 on the two sides of a seam is similar to the distance between two adjacent columns of light beads 230 on a single light board 20. This prevents the distance between the splicing seams and the light beads 230 from being too large, so that the spliced light boards 20 do not visually show the splicing seams.

In some other embodiments, the second column of light beads 2123 is arranged to overlap with the seventh side, and the second row of light beads 2124 is arranged to overlap with the eighth side. That is to say, the distance between the light beads 230 in the second column of light beads 2123 and the outer edge of the first substrate 210 corresponding to the second column of light beads 2123 is zero, and the distance between the light beads 230 in the second row of light beads 2124 and the outer edge of the first substrate 210 corresponding to the second row of light beads 2124 is zero. By arranging the second column of light beads 2123 to overlap with the seventh side and the second row of light beads 2124 to overlap with the eighth side, the splicing seams can be blocked by the light beads 230, thereby solving the problem of the shadow of the splicing seams. It can be understood that the second column of light beads 2123 and the seventh side are not necessarily fully overlapped, and the distance between the second column of light beads 2123 and the seventh side can also be within a certain preset range; likewise, the second row of light beads 2124 and the eighth side are not necessarily fully overlapped, and the distance between the second row of light beads 2124 and the eighth side can also be within a certain preset range. Specific settings can be arranged according to actual situations, which are not specifically limited herein.

In some embodiments, the first distance and the third distance are both equal to the distance between two adjacent light beads 230, and the second distance and the fourth distance are zero. With such an arrangement, the plurality of light boards 20 can be spliced along the first direction perpendicular to the first column of light beads 2121. In some embodiments, the first distance and the third distance are zero, and the second distance and the fourth distance are both equal to the distance between two adjacent light beads 230. With such an arrangement, the plurality of light boards 20 can be spliced along the second direction perpendicular to the first row of light beads 2122.

In some other embodiments, the first distance, the second distance, the third distance, and the fourth distance are all equal to the distance between two adjacent light beads 230. With such an arrangement, the plurality of light boards 20 can be spliced both along the first direction perpendicular to the first column of light beads 2121 and along the second direction perpendicular to the first row of light beads 2122, so that each splicing seam is arranged at the edges of the light beads 230, and the light beads 230 can block the splicing seams to solve the problem of the shadow of the splicing seams.
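As a rough plausibility check of the seam-hiding condition above, the following sketch computes the bead-to-bead gap across a splice under a simplifying assumption that is not stated explicitly in the application: after overlapping, the fifth side of one light board lands flush against the seventh side of its neighbor, so the cross-seam gap is the sum of the two boards' edge offsets. All names and numbers are illustrative only.

    # Minimal sketch: check that a spliced seam stays visually hidden,
    # assuming the fifth side of one board lands flush against the
    # seventh side of its neighbor after the first/second portions overlap.

    def cross_seam_gap(second_column_offset: float, first_distance: float) -> float:
        """Gap between the last bead column of board A and the first bead
        column of board B across the splice (same length unit as the pitch)."""
        return second_column_offset + first_distance

    def seam_hidden(gap: float, pitch: float, tolerance: float = 0.0) -> bool:
        """The seam is considered hidden when the cross-seam gap does not
        exceed the on-board bead pitch by more than a preset tolerance."""
        return gap <= pitch + tolerance

    if __name__ == "__main__":
        pitch = 2.0  # illustrative bead pitch in millimetres
        # Embodiment where the second column overlaps the seventh side
        # (offset 0) and the first distance equals the bead pitch:
        gap = cross_seam_gap(second_column_offset=0.0, first_distance=pitch)
        print(gap, seam_hidden(gap, pitch))  # 2.0 True -> seam blocked by beads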
For example, reference is further made to FIG. 3 to FIG. 5. FIG. 3 is a structural schematic view of a first type of splicing of the plurality of light boards in the display panel provided in FIG. 1. FIG. 4 is a structural schematic view of a second type of splicing of the plurality of light boards in the display panel provided in FIG. 1. FIG. 5 is a structural schematic view of a third type of splicing of the plurality of light boards in the display panel provided in FIG. 1.

As shown in FIG. 3, the plurality of light boards 20 include a first light board 201 and a second light board 202 arranged in the first direction. The first edge region 2111 of the second light board 202 splices onto the third edge region 2211 of the first light board 201. The distance between the second column of light beads 2123 in the first light board 201 and the first column of light beads 2121 in the second light board 202 is within a preset range. The second row of light beads 2124 in the first light board 201 is flush with the second row of light beads 2124 in the second light board 202 in the first direction. That is to say, the distance between the light beads 230 in the first light board 201 adjacent to the third edge region 2211 and the light beads 230 in the second light board 202 adjacent to the first edge region 2111 is within a preset range, and the light beads 230 in the first light board 201 adjacent to the fourth edge region 2212 are flush with the light beads 230 in the second light board 202 adjacent to the fourth edge region 2212 in the first direction.

When the light board 20 is mounted on the back board 10, the first edge region 2111 is disposed on a side of the first protrusion 140 away from the bottom wall 110, the third edge region 2211 fits into the slot, a side of the second substrate 220 overlapping with the first column of light beads 2121 abuts against a side of the first protrusion 140 away from the first side wall 130, and the second column of light beads 2123 abuts against a side of the second protrusion 150 away from the second side wall 120, so that the light board 20 is engaged with the back board 10. Through the protruding first edge region 2111, second edge region 2112, third edge region 2211, and fourth edge region 2212, together with the engagement of the slot in the back board 10 and the first protrusion 140, the light board 20 can be engaged with the back board 10, which prevents the light board 20 from shaking after being mounted on the back board 10.

As shown in FIG. 4, the plurality of light boards 20 include a third light board 203 and a fourth light board 204 arranged in a second direction, and the first direction is perpendicular to the second direction.
The second edge region 2112 in the third light board 203 splices onto the fourth edge region 2212 in the fourth light board 204, the distance between the first row of light beads 2122 in the third light board 203 and the second row of light beads 2124 in the fourth light board 204 is within a preset range, and the first column of light beads 2121 in the third light board 203 is flush with the first column of light beads 2121 in the fourth light board 204 in the second direction. That is to say, the distance between the light beads 230 in the third light board 203 adjacent to the second edge region 2112 and the light beads 230 in the fourth light board 204 adjacent to the fourth edge region 2212 is within a preset range, and the light beads 230 in the third light board 203 adjacent to the first edge region 2111 are flush with the light beads 230 in the fourth light board 204 adjacent to the first edge region 2111 in the second direction.

When the light board 20 is mounted on the back board 10, the first edge region 2111 of the first light board 201 and the first edge region 2111 of the second light board 202 are both disposed on the side of the first protrusion 140 away from the bottom wall 110, and the third edge region 2211 of the second light board 202 and the third edge region 2211 of the first light board 201 are at least partially disposed in the slot. In some other embodiments, a side of a third portion of the first light board 201 overlapping with the first column of light beads 2121 abuts the side of the first protrusion 140 away from the first side wall 130, and the second column of light beads 2123 of the second light board 202 abuts the side of the second protrusion 150 away from the second side wall 120.

As shown in FIG. 5, the plurality of light boards 20 include the first light board 201, the second light board 202, the third light board 203, and the fourth light board 204. The first light board 201 and the second light board 202 are arranged in the first direction, and the third light board 203 and the fourth light board 204 are arranged in the first direction. The first light board 201 and the third light board 203 are arranged in the second direction, and the second light board 202 and the fourth light board 204 are arranged in the second direction. The first edge region 2111 in the second light board 202 splices with the third edge region 2211 in the first light board 201, the first edge region 2111 in the fourth light board 204 splices with the third edge region 2211 in the third light board 203, the second edge region 2112 in the third light board 203 splices with the fourth edge region 2212 in the first light board 201, and the second edge region 2112 in the fourth light board 204 splices with the fourth edge region 2212 in the second light board 202.

It can be understood that the display panel 100 can be formed by splicing any number of light boards 20, and the number and directions of splicing can be arranged according to requirements, which are not limited to the above-mentioned examples and are not specifically limited herein.

As shown in FIG. 2, an embodiment of the present application further provides a light board 20. The light board 20 includes a plurality of light beads 230 arranged in an array, as well as a first substrate 210 and a second substrate 220 disposed in a stack. The plurality of light beads 230 are disposed on a side of the first substrate 210, and the second substrate 220 is disposed on a side of the first substrate 210 away from the plurality of light beads 230.
The first substrate 210 includes a first portion 211 that extends beyond the second substrate 220, and the second substrate 220 includes a second portion 221 that extends beyond the first substrate 210. A light-emitting region 212 in the light board 20 includes a first column of light beads 2121, a first row of light beads 2122, a second column of light beads 2123, and a second row of light beads 2124. The distance between the first column of light beads 2121 and the outer edge of the first substrate 210 corresponding to the first column of light beads 2121 is within a preset range, the distance between the first row of light beads 2122 and the outer edge of the first substrate 210 corresponding to the first row of light beads 2122 is within a preset range, the distance between the second column of light beads 2123 and the outer edge of the first substrate 210 corresponding to the second column of light beads 2123 is zero, and the distance between the light beads 230 in the second row of light beads 2124 and the outer edge of the first substrate 210 corresponding to the second row of light beads 2124 is zero. It can be understood that the details of the light board 20 can be found in the above description, which will not be reiterated herein.

Reference is further made to FIG. 6 to FIG. 8. FIG. 6 is a second structural schematic view of a display panel provided by an embodiment of the present application. FIG. 7 is a structural schematic view of a light board in the display panel provided in FIG. 6. FIG. 8 is a top view of a back board in the display panel provided in FIG. 6.

Embodiments of the present application further provide a display panel 100. The display panel 100 includes a back board 10 and a plurality of light boards 20. The light boards 20 are disposed on the back board 10, and each light board 20 includes a plurality of light beads 230 arranged in an array, a first substrate 210, and a second substrate 220. The second substrate 220 is arranged on a side of the first substrate 210 and overlaps the first substrate 210; that is to say, a projection of the second substrate 220 on the first substrate 210 falls within the first substrate 210. The first substrate 210 includes an overlapping region 214 that overlaps the second substrate 220 and an edge region 215 that protrudes beyond the second substrate 220, and the edge regions 215 of two adjacent light boards 20 are joined to form a tiling region. The distance between the two adjacent columns of light beads 230 on the two sides of the tiling region is within a preset range, and the two adjacent columns of light beads 230 on the two sides of the tiling region are flush with each other in a tiling direction. By connecting the portions of the first substrates 210 protruding beyond the second substrates 220 in the light boards 20 to the back board 10, the tiling seams of the light boards 20 can be arranged on the back board 10, i.e., in a non-display region. In this way, the tiling seams can be blocked, thereby reducing the effect of the tiling seams on the display panel 100.

The back board 10 includes a plurality of first reinforcing ribs 111 arranged at intervals in a vertical direction and a plurality of second reinforcing ribs 112 arranged at intervals in a horizontal direction. On a back side of the back board 10, a hollow structure 113 is formed between the intersecting first reinforcing ribs 111 and second reinforcing ribs 112, and the edge regions 215 of the plurality of light boards 20 are connected with the first reinforcing ribs 111 or the second reinforcing ribs 112.
The second substrate 220 of each light board 20 is disposed in the hollow structure 113, so that the distance between two adjacent columns of light beads 230 spliced at the first reinforcing rib 111 or the second reinforcing rib 112 is within a preset range, and the two adjacent columns of light beads 230 on the two sides of the splicing region are flush with each other in the splicing direction. Through this design of the back board 10, the plurality of light boards 20 can be positioned and fixed, which reduces the difficulty of splicing the plurality of light boards 20 and achieves a thinner display panel 100. It can be understood that the width of the edge region 215 of the first substrate 210 in each light board 20 is smaller than the width of the first reinforcing ribs 111 and the second reinforcing ribs 112, such that when two adjacent light boards 20 are spliced together, the splicing seams fall exactly on the first reinforcing ribs 111 or the second reinforcing ribs 112, thereby eliminating the effect of the splicing seams.

It should be noted that, in some embodiments, graphene is attached to the back sides of the plurality of light boards 20 for heat dissipation, so as to solve the problem of heat dissipation and increase the service life of the plurality of light boards 20.

The display panel 100 further includes a reflection sheet 60, support posts 30, a fully-fitted diffuser board 40, and an over coat (OC) layer 50. The reflection sheet 60 is disposed on the light board 20, and the support posts 30 are disposed at intervals among the light beads 230. The fully-fitted diffuser board 40 is disposed on a side of the light board 20 away from the back board 10, and the OC layer 50 is disposed on a side of the fully-fitted diffuser board 40 away from the back board 10.

The display panel and the light board provided by the present application are described in detail above. Specific examples are used herein to explain the principles and embodiments of the present application, and the above description of the embodiments is only intended to help understand the present application. Meanwhile, those skilled in the art may make changes to the specific embodiments and application scope according to the idea of the present application. In summary, the content of this specification should not be construed as limiting the present application. | 23,082
11862052 | DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS Please refer to FIG. 1, showing a three-dimensional view of the present invention. As the figure shows, the present invention comprises a box body 1 in which are disposed a rotating water spray unit 2, a swinging water spray unit 3, a multicolored water spray unit 4, an intermittent water spray unit 5, a fogging device 6, and a laser projector 8. The rotating water spray unit 2, the swinging water spray unit 3, the multicolored water spray unit 4, the intermittent water spray unit 5, and the fogging device 6 are respectively connected to a tube body 7 of the box body 1 for connection to an external water supply, and these units and the laser projector 8 are respectively connected to a control circuit 9 disposed in the box body 1.

Please refer to FIG. 2, showing a structural drawing of the rotating water spray unit according to the present invention; please also refer to FIG. 1. As the figures show, the rotating water spray unit 2 according to the present invention includes a base body 21 on which are disposed a plurality of radially arranged nozzles 22 with eccentric water outlets 221. The base body 21 is connected to a water outlet valve 23 and a water volume adjusting member 24, and is further connected to the tube body 7 via the water volume adjusting member 24, which is connected to the control circuit 9 for controlling the water discharge volume. When a flow of water enters the plurality of nozzles 22 via the tube body 7, the nozzles 22 are rotated by means of the eccentric water outlets 221 to produce rotating water jets.

Please refer to FIG. 3, showing a structural drawing of the swinging water spray unit according to the present invention; please also refer to FIG. 1. As the figures show, the swinging water spray unit 3 includes a plurality of base bodies 31 on which nozzles 32 are respectively disposed. The plurality of base bodies 31 are connected to a water outlet valve 33 and a water volume adjusting member 34, and are further connected to the tube body 7 via the water volume adjusting member 34. The nozzles 32 are connected to a motor 35, while the water volume adjusting member 34 and the motor 35 are connected to the control circuit 9 for controlling the water discharge volume and the swinging of the nozzles 32. When a flow of water enters the nozzles 32 via the tube body 7, swinging water jets are produced by means of the swinging of the nozzles 32.

Please refer to FIG. 4, showing a structural drawing of the multicolored water spray unit according to the present invention; please also refer to FIG. 1. As the figures show, the multicolored water spray unit 4 includes a base body 41 on which are disposed a nozzle 43 and a projecting lamp 42 with a color change effect. The base body 41 is connected to a water outlet valve 44 and a water volume adjusting member 45, and is further connected to the tube body 7 via the water volume adjusting member 45. The projecting lamp 42 and the water volume adjusting member 45 are connected to the control circuit 9 for controlling the water discharge volume and the color change of the projecting lamp 42. When a flow of water enters the nozzle 43 via the tube body 7, the nozzle 43 produces multicolored water jets by means of the projecting lamp 42.
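Since the control circuit 9 drives all of the valves, the motor, the projecting lamp, and the laser projector according to preset programs, its role can be illustrated with a short, hedged sketch. The device interface below (names such as set_valve and set_lamp_color) is entirely hypothetical and merely stands in for whatever driver hardware the control circuit actually uses.

    # Minimal sketch of a preset water-dance program run by the control
    # circuit; every hardware call here is a hypothetical stand-in.
    import time

    class ControlCircuit:
        def set_valve(self, valve_id: int, open_: bool) -> None:
            print(f"valve {valve_id} -> {'open' if open_ else 'closed'}")

        def set_water_volume(self, member_id: int, fraction: float) -> None:
            print(f"adjusting member {member_id} -> {fraction:.0%} flow")

        def set_lamp_color(self, rgb: tuple) -> None:
            print(f"projecting lamp 42 -> color {rgb}")

    def run_preset_program(circuit: ControlCircuit) -> None:
        # Rotating unit 2 at full flow, multicolored unit 4 cycling colors.
        circuit.set_valve(23, True)
        circuit.set_water_volume(24, 1.0)
        circuit.set_valve(44, True)
        for rgb in [(255, 0, 0), (0, 255, 0), (0, 0, 255)]:
            circuit.set_lamp_color(rgb)
            time.sleep(0.1)  # dwell on each color (shortened for the sketch)
        circuit.set_valve(23, False)
        circuit.set_valve(44, False)

    if __name__ == "__main__":
        run_preset_program(ControlCircuit())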
Please refer to FIG. 5, showing a structural drawing of the intermittent water spray unit according to the present invention; please also refer to FIG. 1. As the figures show, the intermittent water spray unit 5 includes a base body 51 on which a water outlet 511 is disposed. In an interior of the base body 51 are disposed a rectifying space 512 and a holding area 513, with the base body 51 connected to the tube body 7 via the rectifying space 512. In the holding area 513 is disposed a blocking piece 52 connected to an electromagnetic valve 53, which is connected to the control circuit 9 for controlling the blocking piece 52 to block or unblock the water outlet 511. When a flow of water enters the rectifying space 512 and the holding area 513 via the tube body 7, single projectile water jets are sprayed through the water outlet 511.

Please refer to FIG. 6, showing a drawing of the present invention in operation. As the figure shows, when the present invention is in operation, the rotating water spray unit 2 and the swinging water spray unit 3 provide rotating and swinging water jets, the multicolored water spray unit 4 provides water jets of various colors, and the intermittent water spray unit 5 provides single projectile water jets. Providing various forms of water dance performance within its box body, the present invention may be placed in businesses, stores, and shops for the purpose of attracting potential customers.

Please refer to FIG. 7, showing another drawing of the present invention in operation. As the figure shows, besides producing various forms of water dance performance in its box body 1, the present invention may also be used for advertising, with its fogging device 6 producing a foggy veil onto which the laser projector 8 projects texts or images under the control of the control circuit 9 according to preset programs. It may be placed in businesses, stores, and shops for the purpose of attracting potential customers with various forms of water dance performance and advertising effect.

The foregoing preferred embodiments are illustrative of the present invention rather than limiting of the present invention. They are intended to cover various modifications and similar structures included within the spirit and scope of the appended claims, the scope of which should be accorded the broadest interpretation so as to encompass all such modifications and similar structures. In view of the foregoing considerations, the present invention relates to a water dance device with a display screen effect which comprises various water spray units operating in coordination with a fogging device and a laser projector. It may be placed in businesses, stores, and shops for the purpose of attracting potential customers with various forms of water dance performance and advertising effect. | 5,999
11862053 | DETAILED DESCRIPTION OF THE EMBODIMENTS Various example embodiments of the present disclosure will now be described in detail with reference to the accompanying drawings. It should be noted that the relative arrangement of the components and steps, the numerical expressions, and the values set forth in these embodiments do not limit the scope of the present disclosure unless specifically stated otherwise. Meanwhile, it should be understood that, for the convenience of description, the dimensions of the various parts shown in the accompanying drawings are not drawn in an actual proportional relationship.

The following description of at least one example embodiment is merely illustrative in nature and is not intended to limit the present disclosure, its implementation, or its use in any way. Techniques, methods, and apparatus known to those of ordinary skill in the art may not be discussed in detail, but such techniques, methods, and apparatus should be considered, where appropriate, as part of the specification. It should be noted that same numerals and letters refer to same items in the following figures; thus, once an item is defined in one figure, it does not require further discussion in subsequent figures.

In addition, the technical solutions of the various embodiments of the present disclosure can be combined with each other, but only on the basis that they can be realized by those of ordinary skill in the art. When the combination of technical solutions is contradictory or cannot be realized, it should be considered that the combination does not exist and is not within the scope of protection claimed in this disclosure.

It should be noted that all directional indications (such as up, down, left, right, front, back) in the embodiments of the present disclosure are only used to explain the relationship between the various components under a certain posture (as shown in the accompanying drawings). If the specific posture changes, the directional indication changes accordingly.

The following describes a display method based on pulse signals according to an example embodiment of the present disclosure with reference to FIGS. 1-16. It should be noted that the following application scenarios are only shown to facilitate understanding of the spirit and principles of the present disclosure, and the embodiments of the present disclosure are not limited in this regard. Rather, the embodiments of the present disclosure can be applied to any applicable scenario.

The present disclosure also proposes a display method based on pulse signals, an apparatus, an electronic device (e.g., a target terminal), and a medium. The method may be used for directly or indirectly displaying input signals in the form of pulse sequences via a display device, a projector, a virtual reality device, and the like.

FIG. 1A schematically shows a schematic diagram of generating a pulse signal sequence by a signal collector according to an embodiment of the present disclosure. By means of the high-speed responsiveness of the signal collector (e.g., a photoelectric sensor), a cluster of photons may be transformed into a digital bit "1" (i.e., a pulse). The time interval between two neighboring pulses represents the intensity of the light. In the example shown in FIG. 1A, a signal collector 110 responds to the light by generating a pulse signal sequence in the form of "1 0 0 0 1 1 0 0 0 0 0 1 . . . ". Such a pulse signal sequence records the continuous variation of the intensity of the light with a high temporal resolution.
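Because the interval between two neighboring pulses encodes the light intensity (a shorter interval means brighter light), a per-position intensity estimate can be sketched directly from a pulse train. The following is a minimal illustration only; the proportionality constant C and the use of the FIG. 1A bit pattern as sample data are assumptions, not values from the disclosure.

    # Minimal sketch: estimate relative light intensity from the interval
    # between neighboring pulses in one signal collector's pulse train.

    def intensity_between_pulses(bits: list, C: float = 1.0) -> list:
        """For each pair of neighboring pulses, intensity ~ C / interval,
        where the interval is counted in sampling periods."""
        pulse_times = [t for t, b in enumerate(bits) if b == 1]
        return [C / (t2 - t1) for t1, t2 in zip(pulse_times, pulse_times[1:])]

    if __name__ == "__main__":
        bits = [1, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 1]  # the FIG. 1A example
        print(intensity_between_pulses(bits))
        # [0.25, 1.0, 0.166...] -> brighter where pulses crowd together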
FIG. 1B is a schematic diagram of implementing high-speed imaging by an array of signal collectors according to an embodiment of the present disclosure. As shown in FIG. 1B, a plurality of pulse signal sequences may be generated by a plurality of signal collectors 110 capturing a scene, which signal collectors 110 are arranged in an array in the x-y plane and may be called a Spike Camera. The plurality of pulse signal sequences are spatially arranged into an array of bit streams, which accurately depicts the process of light variation captured by the Spike Camera within a time period. The light intensity at a designated time (e.g., when t=t1) within the time period may be calculated from the plurality of pulse signal sequences, thereby achieving high-speed imaging of the scene. In an appropriate way, the light intensity information 120 at a time recorded by the plurality of pulse signal sequences can be displayed, for example, on a display device, obtaining a visual image 130. The visualization of such pulse signal sequences will be discussed in detail in the following.

FIG. 2 schematically shows a schematic flowchart of a display method based on pulse signals according to an embodiment of the present disclosure. As shown in FIG. 2, the method includes:

Step 201: obtaining information of a target display array on a display device, where the target display array includes a first number of display units arranged in an array. The display units in the target display array will be used to display the pulse signals. The information of the target display array includes a display resolution and/or a display rate, etc. In one implementation, the display unit may be a pixel unit. The present disclosure does not specifically limit the first number.

Step 202: obtaining target pulse sequences that characterize dynamic spatiotemporal information. The target pulse sequences may include multiple pulse signal sequences. In one implementation, a pulse signal sequence can be represented by 0s and 1s. In another implementation, a pulse signal sequence can be represented by peaks and troughs. For example, a "0" or "trough" indicates the absence of a pulse, and a "1" or "peak" indicates the presence of a pulse. In addition, the target pulse sequences in this disclosure are generated based on the acquisition of dynamic spatiotemporal information, for example, by a plurality of signal collectors (e.g., photosensitive devices) arranged in an array. The dynamic spatiotemporal information may be spatiotemporal signals of spatial positions collected by the plurality of signal collectors, and the spatiotemporal signal may be an optical signal. More information related to the pulse sequence may be found in the inventor's U.S. Pat. No. 10,523,972 B2, entitled "METHOD AND DEVICE FOR ENCODING SPACE-TIME SIGNALS", the content of which is incorporated herein in its entirety by reference. Further, information such as a generation rate and/or a generation resolution of the target pulse sequences may also be obtained.

Step 203: determining display state information of each display unit in the first number of display units from a spatiotemporal relationship between the target pulse sequences and the target display array.
In some embodiments, the spatiotemporal relationship between the target pulse sequences and the target display array is determined based on the generation resolution of the target pulse sequences and the display resolution of the target display array, and/or based on the generation rate of the target pulse sequences and the display rate of the target display array. For example, the spatiotemporal relationship may be determined based on the display resolution and the generation resolution of the target pulse sequences; it may be determined based on the display rate and the generation rate of the target pulse sequences; or it may be determined based on both the display resolution and the generation resolution, and the display rate and the generation rate.

The generation resolution of the target pulse sequences can be expressed as W1*H1. That is, the width and height of a pulse plane corresponding to the target pulse sequences are W1 pulse positions and H1 pulse positions, respectively, with each pulse position corresponding to the information of the spatial position of one pulse signal in the target pulse sequences. The pulse plane referred to herein may be a plane formed by the respective pulse signals generated by the plurality of signal collectors arranged in an array, with each of the pulse signals occupying a respective position (i.e., the so-called pulse position) in the pulse plane. In this regard, the generation resolution of the target pulse sequences may also be referred to as a spatial resolution of the plurality of signal collectors. The generation rate represents the number of pulse planes per second. For example, if the generation rate is 40,000 frames per second, there are 40,000 pulse planes per second, with each pulse plane expressing the information of the optical signals in 1/40,000 second.

The display resolution of the target display array may be expressed as W2*H2. That is, the width of the target display array is W2 display units, and the height is H2 display units. The display rate of the target display array indicates the number of pictures displayed per second. For example, if the display rate is 1,000 frames per second, then 1,000 pictures are displayed per second. The display rate may also be referred to as the refresh rate. The display state information of each display unit includes at least one of lighting-up, lighting-off, a voltage value, a luminance value, a duration of lighting-up, and the like.

Step 204: causing visualization of pulse signals in the target pulse sequences on the display device based on the display state information of each display unit in the first number of display units. In some embodiments, the display state of a display unit can be controlled by sending a signal representing the display state to the drive circuit of the display unit according to the display state information, thereby realizing the visualization of the pulse signals.
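Putting steps 201 to 204 together, the overall flow can be sketched as a short driver. This is only a structural outline under assumed callback names (get_display_info, get_pulse_sequences, and so on are placeholders, not APIs from the disclosure); the concrete logic of the state determination is elaborated in the embodiments below.

    # Structural sketch of the four-step method (steps 201-204); all
    # callables are hypothetical placeholders for the real subsystems.

    def display_pulse_sequences(get_display_info, get_pulse_sequences,
                                determine_states, send_to_drive_circuit):
        display_info = get_display_info()            # step 201: resolution, rate
        sequences, gen_info = get_pulse_sequences()  # step 202: pulses + generation info
        # step 203: per-unit display state from the spatiotemporal relationship
        states = determine_states(sequences, gen_info, display_info)
        for unit_position, state in states.items():  # step 204: visualize
            send_to_drive_circuit(unit_position, state)

    if __name__ == "__main__":
        display_pulse_sequences(
            get_display_info=lambda: {"resolution": (2, 2), "rate": 1000},
            get_pulse_sequences=lambda: ([[1, 0], [0, 1]], {"rate": 40000}),
            determine_states=lambda seqs, g, d: {(0, 0): "On", (1, 1): "On"},
            send_to_drive_circuit=lambda pos, s: print(pos, s),
        )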
In this disclosure, the information of the target display array, which includes a first number of display units, can be obtained from the display device. The target pulse sequences that characterize the dynamic spatiotemporal information are obtained. The display state information of each display unit is determined from the spatiotemporal relationship between the target pulse sequences and the target display array, and the visualization of the pulse signals on the display device is realized based on the display state information of each display unit. The technical solution of the present disclosure can determine the display state information of each display unit on the display device from the spatiotemporal relationship between the target pulse sequences and the target display array, so as to realize a complete display of the optical signal information recorded in the target pulse sequences, thereby facilitating accurate reproduction of the changing process of the optical signals of an original scene. Since the process does not involve traditional image reconstruction, the disadvantage in the prior art of losing the information carried by the original pulse signals is also avoided.

As shown in FIG. 3, in some embodiments based on the above method of the present disclosure, the step 203 of determining the display state information of each display unit in the first number of display units from the spatiotemporal relationship between the target pulse sequences and the target display array includes:

Step 3031: determining, from the target pulse sequences, the respective pulse signals corresponding to each display unit in the first number of display units according to the spatiotemporal relationship;

Step 3032: accumulating the pulse signals corresponding to the display unit to obtain an accumulated pulse signal value; and

Step 3033: generating the display state information based on the accumulated pulse signal value.

The display state of each display unit can be controlled according to the display state information. This embodiment may apply to a synchronous display mode, in which the display control is performed synchronously on each display unit at a fixed refresh frequency. The spatiotemporal relationship may be expressed as a spatial and/or temporal relationship between the pulse positions and the display units.

In some embodiments, a spatial relationship between the target pulse sequences and the target display array may be determined based on the generation resolution of the target pulse sequences and the display resolution of the target display array, and a temporal relationship between the target pulse sequences and the target display array may be determined based on the generation rate of the target pulse sequences and the display rate of the target display array. For example, a first proportional relationship between the generation resolution of the target pulse sequences and the display resolution of the target display array is determined, and the spatial relationship is determined based on the first proportional relationship, which spatial relationship indicates which pulse positions on the pulse plane each display unit of the target display array corresponds to. A second proportional relationship between the generation rate of the target pulse sequences and the display rate of the target display array is determined, and the temporal relationship is determined based on the second proportional relationship, which temporal relationship indicates how many pulse planes each display plane corresponds to. From the spatiotemporal relationship, it can be determined which pulse signals each display unit corresponds to. In such an embodiment of the present disclosure, the spatiotemporal relationship between the target pulse sequences and the target display array may be determined from the first proportional relationship and/or the second proportional relationship.
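As an illustration of the two proportional relationships, the sketch below maps one display unit to its pulse positions and pulse planes, assuming the generation resolution and rate are integer multiples of the display resolution and rate (the disclosure's examples, e.g., 6*6 versus 2*2 and 40,000 versus 8,000 frames per second, satisfy this).

    # Minimal sketch: which pulse positions and pulse planes a display unit
    # corresponds to, assuming integer ratios between generation and display
    # resolution/rate (as in the disclosure's examples).

    def pulse_positions_for_unit(ux: int, uy: int, gen_res: tuple, disp_res: tuple) -> list:
        """Spatial relationship: block of pulse positions for display unit (ux, uy)."""
        sx = gen_res[0] // disp_res[0]   # first proportional relationship (width)
        sy = gen_res[1] // disp_res[1]   # first proportional relationship (height)
        return [(ux * sx + dx, uy * sy + dy) for dy in range(sy) for dx in range(sx)]

    def pulse_planes_for_frame(frame_idx: int, gen_rate: int, disp_rate: int) -> range:
        """Temporal relationship: pulse-plane indices feeding one displayed frame."""
        n = gen_rate // disp_rate        # second proportional relationship
        return range(frame_idx * n, (frame_idx + 1) * n)

    if __name__ == "__main__":
        print(pulse_positions_for_unit(1, 0, gen_res=(6, 6), disp_res=(2, 2)))
        # nine positions: (3,0) .. (5,2)
        print(list(pulse_planes_for_frame(0, gen_rate=40_000, disp_rate=8_000)))
        # planes [0, 1, 2, 3, 4]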
The accumulating of the pulse signals may be a direct summation of the pulse signals, or it may be a weighted accumulation.

In some embodiments based on the above method of the present disclosure, generating the display state information based on the accumulated pulse signal value includes: comparing the accumulated pulse signal value with a first preset threshold to obtain a comparison result; and generating the display state information based on the comparison result. Here the display state information includes whether to light up or not. In some embodiments, if the accumulated pulse signal value is greater than or equal to the first preset threshold, the generated display state information is "lighting-up"; otherwise it is "not lighting-up" (i.e., "lighting-off"). For example, in the case where each display unit corresponds to one pulse position, the first preset threshold may be 1. In that case, when a pulse is present at that pulse position, the accumulated pulse signal value is 1, which is equal to the first preset threshold, and thus the display state information is lighting-up; when no pulse is present at that pulse position, the accumulated pulse signal value is 0, which is less than the first preset threshold, and thus the display state information is "not lighting-up".

In some embodiments, the display state information may also include at least one of a voltage value, a luminance value, and a duration of lighting-up. For example, if the accumulated pulse signal value is greater than or equal to a first threshold, the display state information is a first voltage value, a first luminance value and/or a first duration of lighting-up; and if the accumulated pulse signal value is greater than or equal to a second threshold, the display state information is a second voltage value, a second luminance value and/or a second duration of lighting-up.

In some embodiments based on the above method of the present disclosure, generating the display state information based on the accumulated pulse signal value includes: obtaining the display state information from a preset function of the accumulated pulse signal value. The display state information includes a lighting-up state, a lighting-off state, a voltage value, a luminance value and/or a duration of lighting-up. In some embodiments, the preset function may be a directly proportional function, in which case the value obtained by multiplying the accumulated value by a preset value can be used as a trigger value for the lighting-up state or the lighting-off state, or the accumulated value can be used directly as the voltage value, the luminance value and/or the duration of lighting-up. Alternatively, the preset function may be some other, more complex function, in which case the accumulated value is input to the preset function to obtain, as the display state information, the trigger value for the lighting-up state or the lighting-off state, the voltage value, the luminance value and/or the duration of lighting-up.
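A compact sketch of the synchronous mode (steps 3031 to 3033) follows: accumulate the pulse signals mapped to a display unit, then derive the display state either by threshold comparison or by a preset function. The threshold value and the proportional constant below are illustrative assumptions.

    # Minimal sketch of steps 3031-3033 (synchronous mode): accumulate the
    # pulses for one display unit, then derive its display state.

    def accumulate(pulses, weights=None) -> float:
        """Step 3032: plain or weighted accumulation of the unit's pulses."""
        if weights is None:
            return float(sum(pulses))
        return float(sum(w * p for w, p in zip(weights, pulses)))

    def state_by_threshold(acc: float, first_preset_threshold: float) -> str:
        """Lighting-up when the accumulated value reaches the threshold."""
        return "On" if acc >= first_preset_threshold else "Off"

    def state_by_preset_function(acc: float, preset_value: float) -> float:
        """Directly proportional preset function: luminance/voltage = acc * preset."""
        return acc * preset_value

    if __name__ == "__main__":
        pulses = [0, 1, 0, 0, 0]                # five pulse planes, one position
        acc = accumulate(pulses)
        print(state_by_threshold(acc, first_preset_threshold=4))  # Off (acc = 1 < 4)
        print(state_by_preset_function(acc, preset_value=0.2))    # 0.2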
As shown in FIG. 4, in some embodiments based on the above method of the present disclosure, the step 203 of determining the display state information of each display unit in the first number of display units from the spatiotemporal relationship between the target pulse sequences and the target display array includes:

Step 4031: determining, from the target pulse sequences, the respective pulse signals corresponding to each display unit in the first number of display units according to the spatiotemporal relationship; and

Step 4032: determining the display state information from a change in the pulse signals corresponding to the display unit.

This embodiment may apply to an asynchronous display mode, in which the display control is performed on different display units independently of each other. Based on the generation resolution of the target pulse sequences and the display resolution of the target display array, and/or based on the generation rate of the target pulse sequences and the display rate of the target display array, the spatiotemporal relationship between the target pulse sequences and the target display array is determined.

In some embodiments, a first proportional relationship between the generation resolution of the target pulse sequences and the display resolution of the target display array is determined, and the spatial relationship between the target pulse sequences and the target display array is determined based on the first proportional relationship. The spatial relationship may, for example, indicate which pulse positions on the pulse plane each display unit of the target display array corresponds to.

In this embodiment, the display rate of the target display array refers to a display rate upper limit of each display unit. As the display rate upper limit of the display unit may be lower than the generation rate of the target pulse sequences, it may not be possible to control the display state upon every change in the pulse signals. Therefore, a second proportional relationship between the generation rate of the target pulse sequences and the display rate upper limit of the display unit is determined, and the temporal relationship between the target pulse sequences and the target display array is determined based on the second proportional relationship. That is, a target pulse sequence is temporally divided into pulse signal groups, and the number of pulse signals in each pulse signal group is determined based on the second proportional relationship. For example, if the generation rate is N times the display rate, then every N (or more) pulse signals fall into one pulse signal group. The N pulse signals may be located in N pulse planes, respectively. It should be noted that the display units in the target display array may have different display rate upper limits, and that, for each display unit, the pulse signal group may have a different number of pulse signals.
In some embodiments based on the above method of the present disclosure, determining the display state information from the change in the pulse signals corresponding to the display unit includes: every time a first preset condition is met, calculating a current value corresponding to the display unit from the pulse signals corresponding to the display unit; determining a numerical relationship between the current value corresponding to the display unit and a historical value, the historical value being the value corresponding to the display unit when the first preset condition was met the last time; and determining the display state information when the numerical relationship meets a second preset condition.

In some embodiments, the first preset condition is the elapse of a set duration, and calculating the current value corresponding to the display unit from the pulse signals corresponding to the display unit includes: accumulating the pulse signals corresponding to the display unit within the set duration to obtain an accumulated pulse signal value as the current value.

Alternatively, the first preset condition is that a cyclically accumulated value of the pulse signals received by the display unit reaches a second preset threshold, the second preset threshold being the maximum value of the cyclic accumulation. In this case, calculating the current value corresponding to the display unit from the pulse signals corresponding to the display unit includes: calculating the current value from the time interval between two neighboring time points at which the cyclically accumulated value of the pulse signals reaches the second preset threshold. For example, a time point is recorded when the cyclically accumulated value reaches the second preset threshold M1, and then the time interval ΔT′ from the last recorded time point is calculated every time the cyclically accumulated value reaches the second preset threshold M1. The current value Lv(T) corresponding to the display unit is calculated according to the following equation:

Lv(T) = C/ΔT′,

where C is a constant value.

In some embodiments, the numerical relationship between the current value Lv(T) corresponding to the display unit and a historical value Lv(T′) includes Q(F(Lv(T))−F(Lv(T′))), where Q( ) and F( ) are functions. That is, the numerical relationship is a function value of the difference between the respective function values of the two values. In some embodiments, Q( ) is a function that takes an absolute value, and F( ) is a function that takes the value itself; that is, the numerical relationship is the absolute value of the difference between the current value and the historical value, expressed as |Lv(T)−Lv(T′)|.

In some embodiments, the second preset condition includes being greater than or equal to a third preset threshold, a function value of the current value, and/or a function value of the historical value, such as one or more of the following:

Q(F(Lv(T))−F(Lv(T′))) ≥ M2;

Q(F(Lv(T))−F(Lv(T′))) > M3;

Q(F(Lv(T))−F(Lv(T′))) > Y(Lv(T));

Q(F(Lv(T))−F(Lv(T′))) > Y(Lv(T′)).

In the above, Q( ), F( ), and Y( ) are functions, and M2 and M3 are third preset thresholds. In an example, Y( ) is a second-order or higher-order continuous function, such as a polynomial function. In an example, if the pulse signal represents an accumulated light intensity over a period of time, the third preset threshold here may be a ratio of a previously recorded accumulated light intensity to a display luminance. This allows the display to reproduce the original light intensity over a certain period of time.
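The cyclic-accumulation variant above lends itself to an event-driven sketch: count pulses until the counter reaches M1, convert the elapsed interval into a value via Lv(T) = C/ΔT′, and emit an update only when |Lv(T)−Lv(T′)| crosses a threshold. The concrete numbers below (M1, C, and the change threshold) are illustrative assumptions.

    # Minimal sketch of the asynchronous, cyclic-accumulation variant:
    # a display update fires only when the interval-derived value changes
    # enough relative to the previously emitted value.

    class AsyncUnit:
        def __init__(self, M1: int = 1, C: float = 8.0, change_threshold: float = 0.5):
            self.M1 = M1                  # second preset threshold (cycle size)
            self.C = C                    # constant in Lv(T) = C / dT
            self.change_threshold = change_threshold
            self.count = 0                # cyclically accumulated pulse value
            self.last_fire_time = None    # last time the counter reached M1
            self.historical = None        # value when the condition was last met

        def feed(self, t: int, pulse: int):
            """Feed one pulse-plane sample; return a display value or None."""
            self.count += pulse
            if self.count < self.M1:
                return None               # first preset condition not met yet
            self.count = 0                # cyclic accumulation wraps around
            if self.last_fire_time is None:
                self.last_fire_time = t
                return None               # no interval to measure yet
            dT = t - self.last_fire_time
            self.last_fire_time = t
            current = self.C / dT         # Lv(T) = C / dT'
            if self.historical is None or abs(current - self.historical) > self.change_threshold:
                self.historical = current # second preset condition met: update
                return current
            return None

    if __name__ == "__main__":
        unit = AsyncUnit()
        for t, p in enumerate([1, 0, 1, 0, 0, 0, 1, 1]):
            out = unit.feed(t, p)
            if out is not None:
                print(f"t={t}: display value {out:.2f}")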
Once the numerical relationship meets the second preset condition, the display state information is generated, and the display state of the display unit is controlled according to the display state information. The display state information includes a lighting-up state, a lighting-off state, a voltage value, a luminance value and/or a duration of lighting-up. The display state information may be calculated from the current value corresponding to the display unit, from the numerical relationship between the current value and the historical value, and/or from the historical value for the display unit. For example, determining the display state information when the numerical relationship meets the second preset condition includes: determining the display state information based on the current value when the numerical relationship meets the second preset condition.

Further, causing the visualization of the pulse signals in the target pulse sequences on the display device based on the display state information of each display unit in the first number of display units includes: controlling the display state of the display unit according to the display state information, so as to realize the visualization of the pulse signals in the target pulse sequences on the display device.

Further, some examples are described below in order to facilitate the understanding of the technical solutions of the present disclosure. In the following, examples 1 to 3 relate to synchronous display modes, and examples 4 to 7 relate to asynchronous display modes. In the figures, "X" represents an unknown state.

EXAMPLE 1

In the example of FIG. 5, the generation rate V1 of the target pulse sequences is the same as the display rate V2 of the target display array, and the generation resolution R1 of the target pulse sequences is the same as the display resolution R2 of the target display array. The light-emitting state signal includes On and Off. The ratio of the generation rate V1 to the display rate V2 is 1, and the ratio of the generation resolution R1 to the display resolution R2 is 1. Thus, the spatiotemporal relationship is determined as follows: the target display array corresponds to one pulse plane of the input target pulse sequences, with each display unit corresponding to one pulse signal at one pulse position on the pulse plane. The display unit is, for example, a pixel unit. The first preset threshold is 1. If the pulse signal is 1, then the accumulated pulse signal value is 1, and the display state information is lighting-up, to control the corresponding display unit to light up. If the pulse signal is 0, then the accumulated pulse signal value is 0, and the display state information is lighting-off.
EXAMPLE 2

In the example of FIG. 6, the generation resolution R1 of the target pulse sequences is the same as the display resolution R2 of the target display array, both of which are 6*6 (i.e., W=H=6). The generation rate V1 of the target pulse sequences is equal to 40,000 frames/second, and the display rate V2 of the target display array is equal to 8,000 frames/second.

For the pulse signals during 5/40,000 seconds (the number 5/40,000 here is based on the fact that a pulse frame is generated every 1/40,000 seconds and that 5 pulse frames are illustrated in FIG. 6), according to their input order, the target pulse sequences as shown in FIG. 6 can be obtained (the pulse signals that are input first come first, and the pulse signals that are input later come later). According to the resolution of 6*6 and the input order, the target pulse sequences correspond to the 5 pulse planes (pulse frames) shown in FIG. 6, with the pulse planes input first on the left and the pulse planes input later on the right.

As shown in FIG. 7, with the resolutions being the same, each display unit spatially corresponds to one pulse position in a pulse plane. With V1/V2=5, each display unit temporally corresponds to the five pulse signals at that pulse position in five pulse planes. The five pulse signals are accumulated, and the display state information is generated from the accumulated value (i.e., the accumulated pulse signal value).

Taking the two positions (2, 2) and (6, 2) in the target pulse sequences as an example, the first preset threshold is set to 4. As shown in FIG. 7, the five pulse signals at the position (2, 2) are {0 1 0 0 0}, and the accumulated value is 1, which is less than the first preset threshold of 4. Then, the display state information is generated to be lighting-off; the display signal is in the form of (2, 2, Off), for example, and the corresponding display unit is not lit up. The five pulse signals at the position (6, 2) are {1 1 1 1 1}, and the accumulated value is 5, which is greater than the first preset threshold of 4. Then, the display state information is generated to be lighting-up; the display signal is in the form of (6, 2, On), for example, and the corresponding display unit is lit up.

EXAMPLE 3

In the example of FIG. 8, the generation resolution R1 of the input target pulse sequences is 6*6, and the display resolution R2 of the target display array is 2*2. The generation rate V1 of the target pulse sequences is equal to 40,000 frames/second, and the display rate V2 of the target display array is equal to 8,000 frames/second. The display state information includes a voltage value. The received target pulse sequences are the same as in Example 2.

With the resolution ratio R1/R2=9, each display unit corresponds to nine pulse positions in the pulse plane. With V1/V2=5, each display unit corresponds to the pulse signals at those same positions in five pulse planes. In this case, the pulse signals at the same nine pulse positions in the five pulse planes are accumulated. A weighted accumulation is used, in which the pulse signals at different pulse positions have different weights. The weight matrix is:

1 2 1
2 4 2
1 2 1

For the display units at positions (1, 1) and (2, 1), the accumulated pulse signal values are 36 and 40, respectively, and the display state information generated by multiplying by a preset voltage value is {1, 1, 36*V} and {2, 1, 40*V}, respectively, where V is the preset voltage value. Alternatively, the display state information generated by functional calculation is {1, 1, 0.45*V} and {2, 1, 0.5*V}, respectively, where 0.45=36/80, 0.5=40/80, and 80=(1*1+2*1+1*1+2*1+4*1+2*1+1*1+2*1+1*1)*5. The number 80 corresponds to the upper limit of the accumulated pulse signal value, reached when the pulse signals at the 9 pulse positions are all 1 in all five planes. V is the preset voltage value.

It should be noted that, in the embodiments of the present disclosure, the display state information may also include a light-emitting duration, the calculation of which is the same as that of the voltage value.
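Example 3's normalization can be reproduced with a few lines: weight each 3*3 block of pulse positions, sum over the five planes, and scale against the maximum accumulated value of 80. The pulse data below are made up for illustration (the actual planes are in FIG. 8); the weight matrix and the normalization are taken from the example.

    # Minimal sketch of Example 3: weighted block accumulation over five
    # pulse planes, normalized against the maximum accumulated value.

    WEIGHTS = [[1, 2, 1],
               [2, 4, 2],
               [1, 2, 1]]

    def weighted_voltage(block_planes, V: float) -> float:
        """block_planes: list of five 3x3 blocks of 0/1 pulses for one display
        unit. Returns the drive voltage acc/max_acc * V."""
        acc = sum(WEIGHTS[y][x] * plane[y][x]
                  for plane in block_planes
                  for y in range(3) for x in range(3))
        max_acc = sum(sum(row) for row in WEIGHTS) * len(block_planes)  # 16 * 5 = 80
        return acc / max_acc * V

    if __name__ == "__main__":
        all_on = [[[1] * 3 for _ in range(3)] for _ in range(5)]  # illustrative data
        print(weighted_voltage(all_on, V=1.0))  # 1.0 (accumulated value 80 of 80)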
EXAMPLE 4

In the example of FIG. 9, the generation resolution R1 of the input target pulse sequences is the same as the display resolution R2 of the target display array, both of which are 6*6 (i.e., W=H=6). The generation rate V1 of the target pulse sequences and the display rate upper limit V2 of the target display array are both 40,000 frames/second. For the pulse signals within 5/40,000 seconds (the number 5/40,000 here is based on the fact that a pulse frame is generated every 1/40,000 seconds and that 5 pulse frames are illustrated in FIG. 9), the target pulse sequences correspond to 5 pulse planes (pulse frames) according to the resolution 6*6 and the input order.

As shown in FIG. 10, with the resolutions being the same, each display unit spatially corresponds to one pulse position in a pulse plane. The set duration in the first preset condition is 1/40,000 seconds, and the pulse signals received within the set duration are accumulated as the current value for the display unit, giving two possible cases: either 0 or 1. In other words, the current value corresponding to the display unit can be determined from the current pulse signal for the display unit: if the pulse signal is 0, the current value is 0, and if the pulse signal is 1, the current value is 1. The numerical relationship between the current value and the historical value is calculated as the absolute value of their difference. The second preset condition is that the absolute value is equal to 1. When the second preset condition is met, the display state information is generated from the current value: lighting-up when the current value is 1, and lighting-off when the current value is 0. When the second preset condition is not met, no display state information is generated.

As shown in FIG. 10, when two consecutive pulse signals in the time series are 0 1, one pulse signal is currently received, the current value is 1, the historical value is 0, and the absolute value of the difference is 1. Thus, the second preset condition is met, the display state information is generated to be lighting-up, and a signal is sent to the drive circuit of the display unit to light up the display unit. When two consecutive pulse signals in the time series are 1 0, the current value is 0, the historical value is 1, and the absolute value of the difference is 1. Thus, the second preset condition is met, the display state information is generated to be lighting-off, and a signal is sent to the drive circuit of the display unit to turn off the display unit. When two consecutive signals in the time series are 0 0 or 1 1, the absolute value of the difference is 0, so the second preset condition is not met and no display state information is generated. Where there is no change in the display state information, no display signal is sent, as indicated by "X" in FIG. 10.
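Example 4 reduces to edge detection on a binary pulse train: send On on a 0-to-1 transition, Off on a 1-to-0 transition, and nothing otherwise. A short sketch of exactly that rule (the sample bits are illustrative):

    # Minimal sketch of Example 4: per-unit change detection on a binary
    # pulse train; a display signal is sent only on 0->1 or 1->0 edges.

    def edge_signals(bits):
        """Yield (time index, 'On'/'Off') for each change; silence otherwise."""
        previous = None
        for t, b in enumerate(bits):
            if previous is not None and b != previous:
                yield t, "On" if b == 1 else "Off"
            previous = b

    if __name__ == "__main__":
        for t, s in edge_signals([0, 1, 1, 0, 0, 1]):
            print(t, s)   # 1 On, 3 Off, 5 On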
EXAMPLE 5

In the example of FIG. 11, the generation resolution R1 of the input target pulse sequences is the same as the display resolution R2 of the target display array, both of which are 6*6 (i.e., W=H=6). The generation rate V1 of the target pulse sequences is equal to 40,000 frames/second, while the display rate upper limit V2 of the display unit is 10,000 frames/second.

For the pulse signals during 12/40,000 seconds (the number 12/40,000 here is based on the fact that a pulse frame is generated every 1/40,000 seconds and that 12 pulse frames are illustrated in FIG. 11), according to the input order, the target pulse sequences as shown in FIG. 11 can be obtained (the pulse signals that are input first come first, and the pulse signals that are input later come later). With the resolution of 6*6 and the input order, the target pulse sequences correspond to the 12 pulse planes (pulse frames) shown in FIG. 11, in which the pulse planes input first are on the left and the pulse planes input later are on the right.

As shown in FIG. 12, with the resolutions being the same, each display unit spatially corresponds to one pulse position in a pulse plane. Different from Example 4, each display unit here corresponds to the pulse signals at one pulse position in 4 pulse planes. With the set duration in the first preset condition being 4/40,000 seconds, four pulse signals are accumulated, and the obtained accumulated pulse signal value is used as the current value corresponding to the display unit. The numerical relationship is the absolute value of the difference, and the second preset condition is the numerical relationship being not less than 1. A luminance value is obtained as the display state information by multiplying the current value by a preset value L.

As shown in FIG. 12, taking the two positions (2, 2) and (5, 2) in the target pulse sequences as an example, the 12 pulse signals at the position (2, 2) are {0 1 0 0 0 0 1 0 0 0 0 0}, and the sequence of accumulated values generated on the basis of 4 pulse signals per group is {1 1 0}. When the current value for the display unit is 0, the historical value is 1, and the second preset condition is met; a luminance value is then calculated from the current value 0 as the display state information, and the display signal sent is in the form of (luminance, 0), for example. The 12 pulse signals at the position (5, 2) are {0 1 1 0 0 0 1 1 0 0 0 0}, and the sequence of accumulated values generated on the basis of 4 pulse signals per group is {2 2 0}. Similarly, when the current value for the display unit is 0, the historical value is 2, and a display signal of (luminance, 0) is to be sent.

In one embodiment, when the second preset condition is not met, the display state information is not generated, and the display signal may not be sent, as indicated by "X" in FIG. 13. In some cases, if the current value is 4 and the historical value is 2, a display signal of (luminance, 4L) is sent.
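Example 5 is the grouped version of the same change-detection rule: accumulate every four pulse planes, compare the group sum with the previous group sum, and send a luminance of current*L when the absolute difference is at least 1. The sketch below reproduces the {1 1 0} and {2 2 0} sequences from positions (2, 2) and (5, 2).

    # Minimal sketch of Example 5: group pulses four planes at a time and
    # send a luminance update when the group sum changes by at least 1.

    def luminance_updates(bits, group_size: int = 4, L: float = 1.0):
        """Yield (group index, luminance) whenever |current - historical| >= 1."""
        sums = [sum(bits[i:i + group_size]) for i in range(0, len(bits), group_size)]
        historical = None
        for g, current in enumerate(sums):
            if historical is not None and abs(current - historical) >= 1:
                yield g, current * L
            historical = current

    if __name__ == "__main__":
        pos_2_2 = [0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0]  # group sums {1 1 0}
        pos_5_2 = [0, 1, 1, 0, 0, 0, 1, 1, 0, 0, 0, 0]  # group sums {2 2 0}
        print(list(luminance_updates(pos_2_2)))  # [(2, 0.0)] -> (luminance, 0)
        print(list(luminance_updates(pos_5_2)))  # [(2, 0.0)]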
The first preset condition is that the cyclically accumulated value of the pulse signals reaches the second preset threshold. When the first preset condition is met, the current value for the display unit is calculated according to Lv(T) = C/ΔT′, where C = 8 and ΔT′ is the time interval between two neighboring time points at which the accumulated value of the pulse signals reaches the second preset threshold. As shown in FIG. 14, “e” is a display constant for fill light. Assuming that the second preset threshold in the first preset condition is 1, the current value is calculated every time a pulse is received. The ΔT's corresponding to the respective pulse signals at the position (2, 2) are {X, ⋅, 2, ⋅, ⋅, ⋅, 4, 1, ⋅, ⋅, ⋅, ⋅, 5, ⋅, ⋅}. The ΔT's corresponding to the respective pulse signals at the position (5, 2) are {X, 1, ⋅, 2, ⋅, 2, ⋅, ⋅, ⋅, ⋅, ⋅, ⋅, 7, ⋅, 2}. The sign “⋅” represents that the first preset condition is not met.

The numerical relationship between the current value and the historical value is the absolute value of the difference between their logarithmic function values. When the numerical relationship is greater than 0.3, the display state information is sent, and the display state information is the luminance value calculated according to the current value. That is, the second preset condition is that the numerical relationship is greater than 0.3. When the second preset condition is met, the luminance value is obtained as e + current value.

EXAMPLE 7

In the example of FIG. 15, when the resolution of the target pulse sequences is different from that of the target display array, an approach of block accumulation on the target pulse sequence plane is adopted. The specific implementation is as follows. The generation resolution R1 of the input target pulse sequences is 6*6, and the display resolution R2 of the target display array is 2*2. The generation rate V1 of the target pulse sequences is equal to 40,000 frames/second, and the received target pulse sequences are the same as those in the foregoing examples. With the resolution ratio R1/R2 = 9 (3×3), each display unit corresponds to 9 pulse positions in a pulse plane, in which case the 9 pulse positions of each of the 5 pulse planes are accumulated using a weighted accumulation method. The weight matrix is

1 2 1
2 4 2
1 2 1

As shown in FIG. 16, for a 2*2 target display array plane, the accumulated pulse signal values at the position (1, 1) are {8, 11, 6, 4, 7}, and the accumulated pulse signal values at the position (2, 1) are {5, 9, 12, 6, 8}. The absolute value of the difference between the current value and the historical value is compared with the third preset threshold value of 2.5. When it is not less than 2.5, a luminance value is calculated according to the current value as the display state information to control the luminance of lighting.

In this example, the display state information of the display unit is determined from a change in the pulse signals corresponding to the display unit. The elapse of a set duration may be used as the first preset condition, and the set duration may be a duration for displaying one frame of image. Within the set duration, the weighted and accumulated value of the pulse signals at the 9 pulse positions can be used as the current value for the display unit. The historical value is the value corresponding to the display unit when the set duration elapsed last time.
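A minimal sketch of the weighted block accumulation just described follows (the threshold comparison it feeds is completed in the next paragraph); numpy, the function names, and the per-plane framing are illustrative assumptions, not the disclosure's implementation.

```python
# Sketch of the Example 7 block accumulation: each display unit of the 2*2
# array pools a 3x3 block of pulse positions from a 6*6 pulse plane, with
# the center of the block weighted most heavily.
import numpy as np

WEIGHTS = np.array([[1, 2, 1],
                    [2, 4, 2],
                    [1, 2, 1]])

def accumulate(plane: np.ndarray) -> np.ndarray:
    """Weighted accumulation of one 6x6 pulse plane onto a 2x2 display array."""
    out = np.zeros((2, 2), dtype=int)
    for r in range(2):
        for c in range(2):
            block = plane[3 * r:3 * r + 3, 3 * c:3 * c + 3]
            out[r, c] = int(np.sum(block * WEIGHTS))
    return out

def needs_update(current: int, historical: int, threshold: float = 2.5) -> bool:
    """Third preset threshold test: update luminance only on a large change."""
    return abs(current - historical) >= threshold
```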
When the numerical relationship between the current value and the historical value (the absolute value of the difference) meets the second preset condition (greater than or equal to the third preset threshold), a luminance value can be obtained as e + current value.

In addition, it should be noted that the illustration of the pulse plane(s) is used in the above-mentioned embodiments to elucidate the spatiotemporal relationship; however, the pulse plane(s) need not actually be generated in the method. Instead, the pulse signals corresponding to the display unit may be directly determined according to the position information of the pulse signals in the pulse plane.

In another embodiment of the present disclosure, as shown in FIG. 17, a display apparatus 1700 based on pulse signals is further provided. The pulse signal-based display apparatus 1700 includes a first obtaining module 1701, a second obtaining module 1702, a determining module 1703 and a display module 1704. The first obtaining module 1701 is configured to obtain information of a target display array on a display device, with the target display array including a first number of display units arranged therein. The second obtaining module 1702 is configured to obtain target pulse sequences that characterize dynamic spatiotemporal information. The determining module 1703 is configured to determine display state information of each display unit in the first number of display units from the spatiotemporal relationship between the target pulse sequences and the target display array. The display module 1704 is configured to cause visualization of pulse signals in the target pulse sequences on the display device based on the display state information of each display unit in the first number of display units.

In this disclosure, the information of the target display array on the display device, which includes a first number of display units, can be obtained. The target pulse sequences used to characterize the dynamic spatiotemporal information are obtained. The display state information of each display unit is determined according to the spatiotemporal relationship between the target pulse sequences and the target display array. The visualization of the pulse signals on the display device is realized based on the display state information of each display unit. The technical solution of the present disclosure can determine the display state information of each display unit on the display device from the spatiotemporal relationship between the target pulse sequences and the target display array, so as to realize complete display of the optical signal information recorded in the target pulse sequences, thereby facilitating accurate reproduction of the change process of the optical signals of an original scene. Since the process does not involve traditional image reconstruction, the disadvantage of losing the information carried by the original pulse signals in the prior art is also avoided.

In another embodiment of the present disclosure, the information of the target display array includes the display resolution of the target display array. The determining module 1703 is further configured to determine the spatiotemporal relationship between the target pulse sequences and the target display array based on the display resolution and the generation resolution of the target pulse sequences.

In another embodiment of the present disclosure, the information of the target display array includes a display rate of the target display array.
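Before the remaining configurations of the determining module 1703 are enumerated, the four-module structure of the apparatus 1700 introduced above can be outlined schematically. The type signatures below are hypothetical and only indicate how the modules hand data to one another; this is not the disclosure's implementation.

```python
# Schematic outline of the pulse signal-based display apparatus 1700
# and its four modules (1701-1704).
from dataclasses import dataclass
from typing import Any, Dict, List, Tuple

@dataclass
class DisplayArrayInfo:              # obtained by the first obtaining module 1701
    resolution: Tuple[int, int]      # display resolution R2, e.g. (6, 6)
    rate: float                      # display rate upper limit V2, frames/second

class PulseDisplayApparatus:
    def obtain_array_info(self, device: Any) -> DisplayArrayInfo: ...      # module 1701
    def obtain_pulse_sequences(self, source: Any) -> List[Any]: ...        # module 1702
    def determine_display_state(self, pulses: List[Any],
                                info: DisplayArrayInfo) -> Dict: ...       # module 1703
    def display(self, device: Any, state: Dict) -> None: ...               # module 1704
```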
The determining module 1703 is further configured to determine the spatiotemporal relationship between the target pulse sequences and the target display array based on the display rate and the generation rate of the target pulse sequences.

In another embodiment of the present disclosure, the information of the target display array includes a display resolution and a display rate of the target display array. The determining module 1703 is further configured to determine the spatiotemporal relationship between the target pulse sequences and the target display array based on the display resolution and the generation resolution of the target pulse sequences, as well as the display rate and the generation rate of the target pulse sequences.

In another embodiment of the present disclosure, the determining module 1703 is configured to determine a first proportional relationship between the generation resolution and the display resolution, and to determine the spatiotemporal relationship between the target pulse sequences and the target display array based on the first proportional relationship.

In another embodiment of the present disclosure, the determining module 1703 is configured to determine a second proportional relationship between the generation rate and the display rate, and to determine the spatiotemporal relationship between the target pulse sequences and the target display array based on the second proportional relationship.

In another embodiment of the present disclosure, the determining module 1703 is configured to determine a first proportional relationship between the generation resolution and the display resolution, and a second proportional relationship between the generation rate and the display rate, and to determine the spatiotemporal relationship between the target pulse sequences and the target display array based on the first proportional relationship and the second proportional relationship.

In another embodiment of the present disclosure, the determining module 1703 is configured to determine from the target pulse sequences the respective pulse signals corresponding to each display unit in the first number of display units according to the spatiotemporal relationship, to accumulate the pulse signals corresponding to the display unit to obtain an accumulated pulse signal value, and to generate the display state information based on the accumulated pulse signal value.

In another embodiment of the present disclosure, the determining module 1703 is configured to compare the accumulated pulse signal value with a first preset threshold to obtain a comparison result, and to generate the display state information based on the comparison result.

In another embodiment of the present disclosure, the determining module 1703 is configured to obtain the display state information from a preset function of the accumulated pulse signal value.

In another embodiment of the present disclosure, the determining module 1703 is configured to determine from the target pulse sequences the respective pulse signals corresponding to each display unit in the first number of display units according to the spatiotemporal relationship, and to determine the display state information from a change in the pulse signals corresponding to the display unit.
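The two proportional relationships determine which pulse signals a display unit corresponds to. Under the assumption that both ratios are integers, the mapping can be sketched as follows before the remaining embodiments are described; the function name and parameters are illustrative.

```python
# Sketch: enumerate the (pulse_frame, pulse_row, pulse_col) triples that one
# display unit corresponds to, given the first proportional relationship
# (generation resolution : display resolution, per axis) and the second
# (generation rate : display rate).

def pulse_coordinates(row, col, frame, res_ratio, rate_ratio):
    for f in range(frame * rate_ratio, (frame + 1) * rate_ratio):
        for r in range(row * res_ratio, (row + 1) * res_ratio):
            for c in range(col * res_ratio, (col + 1) * res_ratio):
                yield (f, r, c)

# Example 5 geometry: equal resolutions (ratio 1), rate ratio 4: the display
# unit (1, 2) draws on one pulse position in 4 consecutive pulse planes.
print(list(pulse_coordinates(1, 2, 0, 1, 4)))
# [(0, 1, 2), (1, 1, 2), (2, 1, 2), (3, 1, 2)]
```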
In another embodiment of the present disclosure, the determining module 1703 is configured to calculate a current value corresponding to the display unit from the pulse signals corresponding to the display unit every time a first preset condition is met, to determine a numerical relationship between the current value and the historical value, the historical value being the value corresponding to the display unit when the first preset condition was met last time, and to determine the display state information when the numerical relationship meets a second preset condition.

In another embodiment of the present disclosure, the first preset condition is the elapse of a set duration, and the determining module 1703 is configured to accumulate the pulse signals corresponding to the display unit within the set duration to obtain an accumulated pulse signal value as the current value.

In another embodiment of the present disclosure, the first preset condition is that a cyclically accumulated value of the pulse signals received by the display unit reaches the second preset threshold, and the determining module 1703 is configured to calculate the current value from a time interval between two neighboring time points at which the cyclically accumulated value of the pulse signals reaches the second preset threshold, the second preset threshold being a maximum value of the cyclic accumulation.

In another embodiment of the present disclosure, the determining module 1703 is configured to determine the display state information based on the current value when the numerical relationship meets the second preset condition.

In another embodiment of the present disclosure, the display module 1704 is configured to control a display state of the display unit according to the display state information to realize the visualization of the pulse signals in the target pulse sequences on the display device.

In another embodiment of the present disclosure, the display state information includes at least one of a lighting-up state, a lighting-off state, a voltage value, a luminance value, and a duration of lighting-up.

FIG. 18 is a block diagram showing a logical structure of an electronic device according to an example embodiment. For example, the electronic device 1800 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, a fitness device, a personal digital assistant, a display device, and the like.

In an example embodiment, there is also provided a non-transitory computer-readable storage medium including instructions, such as a memory including instructions, and the instructions can be executed by a processor of an electronic device to complete the above-mentioned pulse signal-based display method. The method includes: obtaining information of a target display array on the display device, where the target display array includes a first number of display units; obtaining target pulse sequences that characterize dynamic spatiotemporal information; determining the display state information of each display unit in the first number of display units according to the spatiotemporal relationship between the target pulse sequences and the target display array; and causing the visualization of the pulse signals in the target pulse sequences on the display device based on the display state information of each display unit in the first number of display units.
In some embodiments, the above-mentioned instructions may also be executed by the processor of the electronic device to complete other steps involved in the above-mentioned example embodiments. For example, the non-transitory computer-readable storage medium may be a ROM, a random access memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.

In an example embodiment, an application program/computer program product is also provided, including one or more instructions, which can be executed by a processor of an electronic device to implement the above-mentioned pulse signal-based display method. The method includes: obtaining information of a target display array on the display device, where the target display array includes a first number of display units; obtaining target pulse sequences that characterize dynamic spatiotemporal information; determining display state information of each display unit in the first number of display units from a spatiotemporal relationship between the target pulse sequences and the target display array; and causing visualization of the pulse signals in the target pulse sequences on the display device based on the display state information of each display unit in the first number of display units. In some embodiments, the instructions may also be executed by the processor of the electronic device to implement other steps involved in the above-mentioned example embodiments.

Those skilled in the art can understand that FIG. 18 is only an example of the electronic device (or computer device) 1800 and does not constitute a limitation on the electronic device 1800. It may include more or fewer components than those shown, combine some components, or include different components. For example, the electronic device 1800 may further include an input and output device, a network access device, a bus, and the like.

The electronic device 1800 may include a processor 1801 and a memory 1802. The processor 1801 may be a Central Processing Unit (CPU), and may also be another general-purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or another programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, etc. The general-purpose processor can be a microprocessor, or the processor 1801 can be any conventional processor, etc. The processor 1801 is the control center of the electronic device 1800 and connects the various parts of the entire electronic device 1800 via various interfaces and circuits.

The memory 1802 can be used to store computer-readable instructions, and the processor 1801 implements the various functions of the electronic device 1800 by running or executing the computer-readable instructions or modules stored in the memory 1802 and calling the data stored in the memory 1802. The memory 1802 may mainly include a stored program area and a stored data area: the stored program area may store an operating system and an application program required for at least one function (such as a sound playback function or an image playback function), and the like, while the stored data area may store data created by the use of the electronic device 1800, and the like.
In addition, the memory 1802 may include a hard disk, a memory, a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, a Flash Card, at least one disk storage device, a flash memory device, a Read-Only Memory (ROM), a Random Access Memory (RAM), or other non-volatile/volatile storage devices.

If the modules integrated in the electronic device 1800 are implemented in the form of software functional modules and sold or used as independent products, they may be stored in a computer-readable storage medium. Based on this understanding, all or part of the processes in the methods of the above embodiments of the present disclosure can also be completed by instructing relevant hardware through computer-readable instructions, and the computer-readable instructions can be stored in a computer-readable storage medium. The computer-readable instructions, when executed by the processor, can implement the steps of the various method embodiments described above.

Other embodiments of the present disclosure will readily occur to those skilled in the art upon consideration of the specification and practice of the embodiments disclosed herein. This disclosure is intended to cover any variations, uses or adaptations of this disclosure that follow the general principles of this disclosure and include common knowledge or conventional techniques in the technical field not disclosed in this disclosure. The specification and embodiments are to be regarded as examples only, with the true scope and spirit of the disclosure being indicated by the claims of the disclosure.

It is to be understood that the present disclosure is not limited to the precise structures described above and shown in the accompanying drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the disclosure is limited only by the appended claims. | 56,054 |
11862054 | DETAILED DESCRIPTION Various changes may be made to the disclosure, and the disclosure may come with a diversity of embodiments. Some embodiments of the disclosure are shown and described in connection with the drawings. However, it should be appreciated that the disclosure is not limited to the embodiments, and all changes and/or equivalents or replacements thereto also belong to the scope of the disclosure. Similar reference denotations are used to refer to similar elements throughout the drawings.

The terms “first” and “second” may be used to describe various components, but the components should not be limited by the terms. The terms are used to distinguish one component from another. For example, a first component may be denoted a second component, and vice versa, without departing from the scope of the disclosure. The term “and/or” may denote a combination(s) of a plurality of related items as listed or any of the items.

It will be understood that when an element or layer is referred to as being “on,” “connected to,” “coupled to,” or “adjacent to” another element or layer, it can be directly on, connected, coupled, or adjacent to the other element or layer, or intervening elements or layers may be present. In contrast, when a component is “directly connected to” or “directly coupled to” another component, no other intervening components may intervene therebetween.

The terms as used herein are provided merely to describe some embodiments thereof, but not to limit the disclosure. As used herein, the singular forms “a,” “an,” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. As used herein, the terms “comprise,” “include,” and “have” should be appreciated as not precluding the presence or addition of features, numbers, steps, operations, components, parts, or combinations thereof as set forth herein. Unless otherwise defined, all terms including technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which the embodiments of the disclosure belong. It will be further understood that terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the relevant art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein. The components, processes, steps, or methods according to embodiments of the disclosure may be shared as long as they do not technically conflict with each other.

FIG. 1A is a view illustrating a configuration of an augmented reality optical device according to an embodiment. Referring to FIG. 1A, according to an embodiment, an augmented reality optical device 100a may include a light source unit 110, a reflector 120, a display element 130, a beam splitter 140, and a controller (not shown).

The light source unit 110 emits light that is to be output as an augmented reality image, i.e., light that is to be reflected from the display element 130 and output as the augmented reality image. The light source unit 110 may mainly emit white light but, without limitation thereto, may emit light of other wavelength bands. The light source unit 110 outputs a plurality of light beams having different (light) paths. When the light beams enter the user's eye, a difference between the (light) paths may be equal to or less than the width of the pupil of the eye.
As the difference between the paths is the width of the pupil or less, all or some of the plurality of light beams may enter the user's eye. Since the light beams are incident on the viewer's eye along different paths, it is possible to implement a multifocal augmented reality image. A detailed structure and operation of the light source unit 110 are described below with reference to FIGS. 2 to 5.

The reflector 120 reflects the light emitted from the light source unit 110 to the display element 130 and transmits the light reflected from the display element 130. However, without limitation thereto, the light irradiated from the light source unit 110 may pass through the reflector 120 and be incident on the display element 130, and the light reflected from the display element 130 may be reflected to the beam splitter 140.

The display element 130 (e.g., a display device) reflects incident light as an augmented reality image, and may be implemented as an LCoS (Liquid Crystal on Silicon) element. The beam splitter 140 reflects the augmented reality image reflected from the display element 130 to the user's eye while transmitting real-world light (e.g., external light) to the user's eye. The beam splitter 140 may be replaced with another optical element, e.g., a half mirror, that performs the same operation as the beam splitter 140.

A controller (not shown) controls the operations of the light source unit 110 and the display element 130. As described above, the light source unit 110 outputs a plurality of light beams having different (light) paths. In this case, the controller controls the light source unit 110 to simultaneously or sequentially output the light beams from light sources arranged at predetermined intervals. The controller also controls the operation of the display element 130, corresponding to the operation of the light source unit 110. The augmented reality image corresponding to the output light varies according to the path of the output light. When the light source unit 110 sequentially outputs light beams along specific paths, the controller controls the display element 130 to output (or reflect) an augmented reality image corresponding to the light beams of the specific paths. Accordingly, the user of the augmented reality optical device 100a may view the augmented reality image together with the real-world light and, by viewing the multifocal augmented reality image, the user may experience an enhanced sense of reality for the augmented reality image.

When the light source unit 110 simultaneously drives the light sources arranged at regular intervals to output light beams along a specific path, the controller (not shown) causes the display element 130 to output (or reflect) the augmented reality image corresponding to the light beams. Accordingly, the user of the augmented reality optical device 100a may view an augmented screen image at a certain distance, together with the real-world light, and may experience the augmented image according to the user's viewing environment. Therefore, when the light source unit 110 and the display element 130 sequentially interact with each other, each pixel may be reproduced at a different depth depending on the configuration of the image.
When the light sources of the light source unit 110, which are arranged at predetermined intervals, are simultaneously driven, and the display element 130 interworks with the light sources, all of the pixels of the image are reproduced at the same depth, and the depth of the plane of the image reproduced is determined depending on the interval between the point light sources.

FIG. 1B is a view illustrating a configuration of a virtual reality optical device according to an embodiment. Referring to FIG. 1B, a virtual reality optical device 100b according to an embodiment may include a light source unit 110, a reflector 120, and a display element 130. The virtual reality optical device 100b may include the remaining components 110 to 130, except for the beam splitter 140, among the components of the augmented reality optical device 100a. Each of the components 110 to 130 performs the same operation as in the augmented reality optical device 100a. However, since the virtual reality optical device 100b does not include the beam splitter 140, the virtual reality image reflected from the display element 130 passes through the reflector 120 and is directly incident on the user's eye. If the virtual reality image is output without being directly incident on the viewer's eye, the virtual reality optical device 100b may be implemented as a projector.

FIG. 2A is a view illustrating a configuration of a light source unit according to a first embodiment. FIG. 3A is a view illustrating an example in which light is output through a light source unit and a display element according to the first embodiment. Although the light emitted from the light source unit 110 passes through the reflector 120 and is reflected by the display element 130, FIG. 3 illustrates an example in which the light is transmitted through the display element 130 for convenience of description.

The light source unit 110 includes a plurality of point light sources 210a to 210f and outputs light beams with different (light) paths, rather than emitting surface light as in conventional devices. The point light sources 210 are arranged apart from each other at predetermined (same or different) intervals and output light beams along different paths. The number of the point light sources included in the light source unit 110 may be varied, but all or some of the point light sources output light beams along different paths. Each point light source 210a to 210f outputs a light beam along a different path, and the light emitted from each point light source is output as an augmented reality image having a different light path while passing through the reflector 120 and the display element 130.

FIG. 2B is a view illustrating a configuration of a light source unit according to a second embodiment. FIG. 3B is a view illustrating an example in which light is output through a light source unit and a display element according to the second embodiment. Referring to FIG. 3B, a light source unit 110 includes a plurality of point light sources 210 and a lens 220. The light source unit 110 may include a plurality of point light sources 210a to 210f, which are identical to those of the first embodiment, and the lens 220 is positioned a focal length away from the light source unit 110 in the direction along which the light from the light source unit 110 travels. As in the first embodiment, the point light sources 210 output light beams along different paths, and the output light beams are changed into parallel light beams while passing through the lens 220. Thus, the amount of light of the augmented reality image to be incident on the user's eye may increase.
The augmented reality image or virtual reality image output as in the first or second embodiment of the disclosure is illustrated in FIGS. 4A to 4C. FIGS. 4A, 4B, and 4C are views illustrating an augmented reality image or a virtual reality image output through a light source unit and a display element according to an embodiment.

FIG. 4B illustrates an augmented reality image or virtual reality image 400b into which the light emitted from the point light source 210b positioned in the center of the light source unit 110 of FIGS. 2A and 2B is converted by the display element 130. Referring to FIG. 4B, in the augmented reality image or virtual reality image 400b, a triangle 410b and a circle 420b overlap each other in a predetermined area.

FIG. 4A illustrates an augmented reality image or virtual reality image 400a into which the light emitted from the point light source 210a positioned on the left of the light source unit 110 is converted by the display element 130. In the augmented reality image or virtual reality image 400a, a circle 420a is output apart, relatively to the left, from a triangle 410a, without overlapping the triangle 410a.

FIG. 4C illustrates an augmented reality image or virtual reality image 400c into which the light emitted from the point light source 210c positioned on the right of the light source unit 110 is converted by the display element 130. In the augmented reality image or virtual reality image 400c, a circle 420c is output apart, relatively to the right, from a triangle 410c, without overlapping the triangle 410c.

The so-output augmented reality or virtual reality images 400a to 400c are introduced into the user's eye as illustrated in FIGS. 5 and 6. FIG. 5 is a view illustrating a light path along which an image output from an augmented reality optical device is incident on a user's eye according to an embodiment. FIGS. 6A and 6B are views illustrating a multifocal augmented reality image output to a user according to an embodiment.

Referring to FIG. 5, augmented reality images or virtual reality images having path differences, along with real-world light (not shown), are introduced into the user's eye. Since the path differences between the augmented reality images or virtual reality images obtained from the light output from the point light sources merely amount to the diameter of the user's eye, in particular, the pupil (about 2 to 6 mm), the augmented reality images or virtual reality images are fully incident on the user's eye. As such, when augmented reality images or virtual reality images having a path difference enter the user's eye, the user sees the image as illustrated in FIG. 6A or 6B.

When the user focuses on the triangle 410, the circle 420 is dispersed in the augmented reality image or virtual reality image 600 as illustrated in FIG. 6A. Accordingly, in the augmented reality image or virtual reality image 600, the triangle 410 looks clear, and the circle 420 looks blurry around the triangle 410. Therefore, when focusing on the triangle 410, the surroundings become blurred, and the user may have a real-world feel. Conversely, when the user focuses on the circle 420 in the augmented reality image or virtual reality image 600, the circle 420 is clearly viewed by the user, and the triangle 410 is blurred around the circle 420 as illustrated in FIG. 6B. According to these characteristics, the image may have multiple focuses and provide a more real-world feel to the user.
The above-described embodiments are merely examples, and it will be appreciated by one of ordinary skill in the art that various changes may be made thereto without departing from the scope of the disclosure. Accordingly, the embodiments set forth herein are provided for illustrative purposes, but not to limit the scope of the disclosure, and it should be appreciated that the scope of the disclosure is not limited by the embodiments. The scope of the disclosure should be construed by the following claims, and all technical spirits within equivalents thereof should be interpreted to belong to the scope of the disclosure. | 14,567 |
11862055 | DESCRIPTION OF AN EXEMPLARY EMBODIMENT FIG. 1 is a diagram showing a configuration of a position detection system 1A. The position detection system 1A is provided with a plurality of projectors 100 corresponding to a display device, and an image supply device 300. The position detection system 1A according to the present embodiment is provided with four projectors 100A, 100B, 100C, and 100D, but the number of projectors 100 provided to the position detection system 1A is not limited to four. In the following description, the projectors 100A, 100B, 100C, and 100D are described as projectors 100 when there is no need to distinguish them.

The image supply device 300 and the projectors 100A, 100B, 100C, and 100D are coupled to each other with cables for image transmission. As the cables for the image transmission, there are used cables compatible with a standard such as USB (Universal Serial Bus), HDMI (High-Definition Multimedia Interface), or DisplayPort. HDMI is a registered trademark. Further, the projector 100A and the projectors 100B, 100C, and 100D are coupled to each other with cables for data communication. As the cables for the data communication, there are used cables compatible with a standard such as Ethernet, IEEE 1394, or USB. Ethernet is a registered trademark.

The projector 100A operates as a master machine, and the projectors 100B, 100C, and 100D each operate as a slave machine. In other words, the projector 100A controls operations of the projectors 100B, 100C, and 100D. In the present embodiment, when generating the calibration data, the projectors 100B, 100C, and 100D display predetermined images on the projection surface 5, or image the projection surface 5 on which the images are displayed, in accordance with an instruction of the projector 100A. The projection surface 5 corresponds to a display surface.

The calibration data is data in which a coordinate system set in the taken image taken by the imaging section 120 provided to each of the projectors 100 and a coordinate system set in a liquid crystal panel 163 of an image projection section 160 provided to the corresponding one of the projectors 100 are made to correspond to each other. The coordinate system set in the taken image is hereinafter referred to as a taken image coordinate system, and the coordinate system set in the liquid crystal panel 163 is hereinafter referred to as a panel coordinate system.

The image supply device 300 supplies the projectors 100A, 100B, 100C, and 100D with image data via the cables for image transmission. Each of the projectors 100 generates image light based on the image data thus supplied, and then projects the image light thus generated on the projection surface 5. The image data to be supplied by the image supply device 300 can be data of a still image or data of a moving image. As the image supply device 300, it is possible to use, for example, a notebook PC (Personal Computer), a desktop PC, a tablet terminal, a smartphone, or a PDA (Personal Digital Assistant).

FIG. 1 illustrates a case in which the projectors 100A, 100B, 100C, and 100D are flatly installed in a line in a lateral direction of the projection surface 5, and the projectors 100 display images in a lateral arrangement. The installation method of the projectors 100A, 100B, 100C, and 100D is not limited to the flat installation; it is possible to adopt ceiling installation in which the projectors 100 are suspended from the ceiling, or wall installation in which the projectors 100 are hung on the wall.
Further, it is possible to install the projectors 100A, 100B, 100C, and 100D in a tandem arrangement; further, when coupling a larger number of projectors 100 to each other, it is possible to arrange the projectors 100 in an N×M matrix (N and M are each an arbitrary natural number).

Areas of the projection surface 5 on which the projectors 100A, 100B, 100C, and 100D respectively project the image light are referred to as projection areas 10. The projector 100A projects the image light on the projection area 10A as a left end area of the projection surface 5. The projector 100B projects the image light on the projection area 10B as a right neighboring area of the projection area 10A. The projector 100C projects the image light on the projection area 10C as a right neighboring area of the projection area 10B. The projector 100D projects the image light on the projection area 10D as a right neighboring area of the projection area 10C.

The projectors 100A, 100B, 100C, and 100D perform tiling projection. The tiling projection is a projection method in which the plurality of projectors 100 is made to project the image light, and the images displayed by the plurality of projectors 100 are combined with each other on the projection surface 5 to thereby display a single large screen image. In the tiling projection, the projectors 100 adjacent to each other project the image light so that the edges of the images to be displayed overlap each other. This is for making the boundaries of the images to be displayed inconspicuous. For example, the image to be displayed by the projector 100A and the image to be displayed by the projector 100B located at the right side thereof overlap each other at the edges thereof to form a superimposition area 11. Similarly, the image to be displayed by the projector 100B and the image to be displayed by the projector 100C located at the right side thereof overlap each other at the edges thereof to form a superimposition area 12. Similarly, the image to be displayed by the projector 100C and the image to be displayed by the projector 100D located at the right side thereof overlap each other at the edges thereof to form a superimposition area 13.

FIG. 2 is a diagram showing a configuration of the projector 100A. The projectors 100A, 100B, 100C, and 100D are respectively provided with substantially the same configurations. Therefore, the configuration of the projector 100A will representatively be described, and the description of the configurations of the other projectors 100B, 100C, and 100D will be omitted. Further, in the following description, in order to distinguish the configurations of the projectors 100 from each other, the constituents of the projector 100A are each attached with a symbol “A,” and the constituents of the projector 100B are each attached with a symbol “B.” Similarly, a symbol “C” is attached to the constituents of the projector 100C, and a symbol “D” to the constituents of the projector 100D. For example, a control section of the projector 100A is described as a control section 170A, and a control section of the projector 100B is described as a control section 170B.

The projector 100A is provided with a communication I/F 110A, an imaging section 120A, an operation reception section 130A, an image input I/F 140A, an image processing section 150A, a frame memory 155A, an image projection section 160A, and the control section 170A.
The communication I/F 110A is an interface for data communication with the image supply device 300 and the projectors 100B, 100C, and 100D. To the communication I/F 110A, there are coupled the cables for the data communication which are respectively coupled to the image supply device 300 and the projectors 100B, 100C, and 100D.

The imaging section 120A is provided with an imaging element such as a CCD (Charge Coupled Device) or a CMOS (Complementary Metal Oxide Semiconductor) to generate the taken image. The imaging range of the imaging section 120A is the projection area 10A on which the projector 100A projects the image light, and the projection area adjacent to the projection area 10A. For example, the imaging range of the imaging section 120B of the projector 100B is a range in which the projection area 10B and a part or the whole of the projection areas 10A and 10C adjacent to the projection area 10B can be imaged.

The operation reception section 130A is provided with a plurality of operation keys for the user to provide a variety of instructions to the projector 100A. As the operation keys provided to the operation reception section 130A, there are cited a power key for switching between ON and OFF of the power, and a menu key for displaying a menu image for performing a variety of settings. When the user operates the variety of operation keys of the operation reception section 130A, the operation reception section 130A outputs an operation signal corresponding to the content of the operation thus received to the control section 170A. Further, the operation reception section 130A can be provided with a configuration of receiving an infrared signal transmitted from a remote controller not shown, and then outputting an operation signal corresponding to the operation content represented by the infrared signal thus received to the control section 170A.

The image input I/F 140A is an interface for receiving the image data. The image input I/F 140A is coupled to the cable for image transmission to receive the image data supplied from the image supply device 300. The image input I/F 140A outputs the image data thus received to the image processing section 150A.

The image processing section 150A develops the image data thus input in the frame memory 155A and then processes the image data. The processing to be performed by the image processing section 150A includes, for example, a resolution conversion process, a shape correction process such as a distortion correction, a digital zooming process, a color compensation process, and a luminance correction process. The image processing section 150A performs the processing designated by the control section 170A, and performs the processing using a parameter input from the control section 170A as needed. Further, it is obviously possible for the image processing section 150A to perform two or more of the processes described above in combination with each other. The image processing section 150A retrieves the processed image data from the frame memory 155A, and then outputs the image data thus retrieved to the image projection section 160A as image information.

FIG. 3 is a diagram showing a schematic configuration of the image projection section 160A. The image projection section 160A corresponds to a display section. Here, the configuration of the image projection section 160A will be described with reference to FIG. 3.
The image projection section 160A modulates the light emitted from a light source 161A to generate the image light, and then projects the image light thus generated in an enlarged manner with an optical unit 165A. The image projection section 160A is provided with the light source 161A, three liquid crystal panels 163A(r), 163A(g), and 163A(b) as a light modulation device, the optical unit 165A, and a panel drive section 167A. The liquid crystal panels 163A(r), 163A(g), and 163A(b) provided to the projector 100A are hereinafter described as liquid crystal panels 163A when collectively referring to them.

The light source 161A includes a discharge type light source lamp such as a super high-pressure mercury lamp or a metal halide lamp, or a solid-state light source such as a light emitting diode or a semiconductor laser. The light having been emitted from the light source 161A enters the liquid crystal panels 163A. The liquid crystal panels 163A(r), 163A(g), and 163A(b) are each formed of a transmissive liquid crystal panel having a liquid crystal material encapsulated between a pair of transparent substrates, and so on. The liquid crystal panel 163A(r) modulates a red light beam, the liquid crystal panel 163A(g) modulates a green light beam, and the liquid crystal panel 163A(b) modulates a blue light beam. The liquid crystal panels are each provided with a pixel area constituted by a plurality of pixels arranged in a matrix, and are each arranged so that a drive voltage can be applied to the liquid crystal material pixel by pixel.

The image information output by the image processing section 150A is input to the panel drive section 167A. The panel drive section 167A applies the drive voltages corresponding to the image information thus input to the respective pixels in the pixel area to thereby set the pixels to respective light transmittances corresponding to the image information. The light emitted from the light source 161A is transmitted through the pixel area of each of the liquid crystal panels 163A(r), 163A(g), and 163A(b) to thereby be modulated pixel by pixel, and thus the image light corresponding to the image information is formed for each of the colored light beams. The colored light beams as the image light of the respective colors thus formed are combined with each other pixel by pixel by a color combining optical system not shown to form the image light representing a color image, and the image light is then projected on the projection surface 5 by the optical unit 165A in an enlarged manner.

The control section 170A is a computer device provided with a storage section 171A and a processor 180A. The control section 170A performs overall control of an operation of the projector 100A by the processor 180A operating in accordance with a control program 173A stored in the storage section 171A. The storage section 171A is configured including memory devices such as a RAM (Random Access Memory) and a ROM (Read Only Memory). The RAM is used as a temporary storage of a variety of types of data, and the ROM stores the control program 173A for controlling the operation of the projector 100A, a variety of types of configuration information, and so on. The storage section 171A stores the control program 173A to be executed by the processor 180A, and the taken image taken by the imaging section 120A. Further, the storage section 171A stores a pattern information table 175A to be generated by a spatial code generation section 181A described later.
The processor 180A is an arithmetic processing device formed of a CPU (Central Processing Unit) or an MPU (Micro Processing Unit). The processor 180A executes the control program 173A to control each section of the projector 100A. The processor 180A can be formed of a single processor, or can also be constituted by a plurality of processors. Further, the processor 180A can also be formed of an SoC (System-on-a-Chip) integrated with a part or the whole of the storage section 171A and other circuits. Further, the processor 180A can also be formed of a combination of a CPU for executing a program and a DSP for executing predetermined arithmetic processing. Further, it is also possible to adopt a configuration in which all of the functions of the processor 180A are implemented in hardware, or to configure all of the functions of the processor 180A using a programmable device.

The control section 170A of the projector 100A is provided with the spatial code generation section 181A, a projection control section 183A, an imaging control section 185A, a color determination section 187A, a calibration data generation section 189A, and a calibration control section 191A as functional blocks. These functional blocks are functions realized by the processor 180A executing arithmetic processing in accordance with the control program 173A, and are described as blocks for the sake of convenience.

The spatial code generation section 181A generates a spatial code. The spatial code is identification information for identifying the plurality of projectors 100 constituting the position detection system 1A, and a plurality of feature regions 250 included in a pattern image 230. The pattern image is an image to be projected on the projection surface 5 for generating the calibration data. The feature regions 250, which correspond to the areas, will be described later.

FIG. 4 is a diagram showing an example of the pattern information table 175A to be generated by the spatial code generation section 181A. In the pattern information table 175A, a first serial number, a second serial number, an identification number, a Y-coordinate and an X-coordinate of each of the feature regions 250, the spatial code, a color code, and so on are recorded as a single record. The spatial code generation section 181A records the first serial number, the second serial number, the identification number, the Y-coordinate and the X-coordinate of each of the feature regions 250, and so on in the pattern information table 175A as a single record to generate the spatial code.

The first serial number is set by, for example, an operation of the operation reception section 130A by the user. In the present embodiment, “01” is set as the first serial number to the projector 100A, and “02” is set as the first serial number to the projector 100B. Further, “03” is set as the first serial number to the projector 100C, and “04” is set as the first serial number to the projector 100D.

First, the spatial code generation section 181A assigns the second serial number to each of the coordinates of the feature regions 250 set in advance. The feature regions 250 are each an area in which a color corresponding to a color code is formed in the pattern image 230. Each of the feature regions 250 can be formed of one pixel, or can also be formed of a plurality of pixels. The color code is a part of the code constituting the spatial code, and each of the color codes is associated with a color.
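Based on the description of FIG. 4 above, one record of the pattern information table 175A can be sketched as a simple data structure; the field names below are assumptions, not the disclosure's own schema.

```python
# Sketch of one record of the pattern information table 175A.
from dataclasses import dataclass

@dataclass
class PatternRecord:
    first_serial: str     # identifies the projector, e.g. "01" for 100A
    second_serial: str    # identifies the feature region within the image
    identification: str   # five-digit septenary identification number
    y: int                # Y-coordinate of the feature region (panel coordinate)
    x: int                # X-coordinate of the feature region (panel coordinate)
    spatial_code: str     # concatenated color codes, e.g. "001010001001001"
```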
For example, the coordinate of each feature region 250 is set based on a coordinate system taking the upper left of the pattern image 230 as an origin, a vertical axis as the Y axis, and a horizontal axis as the X axis. The coordinate of the feature region 250 is a coordinate corresponding to the panel coordinate. In other words, the color of the color code associated with the coordinate of the feature region 250 is formed in the liquid crystal panel 163.

The spatial code generation section 181A first selects the row with the smallest Y-coordinate value, and then assigns the second serial numbers to the feature regions 250 in the row thus selected in the ascending order of the X-coordinate value. Then, the spatial code generation section 181A selects the row with the second smallest Y-coordinate value, and then assigns the second serial numbers to the feature regions 250 in the row thus selected in the ascending order of the X-coordinate value. The spatial code generation section 181A repeats these operations to set the second serial numbers to all of the feature regions 250 to be formed in the pattern image 230.

Then, the spatial code generation section 181A generates numbers each obtained by arranging the first serial number as high digits and the second serial number as low digits, and then converts the numbers thus generated into septenary numbers to generate the identification numbers. For example, when the first serial number is “01” and the second serial number is “007,” the number becomes “01007.” Then, the spatial code generation section 181A converts the number thus generated into the septenary number to generate the identification number. For example, when the serial number in the decimal system is “01007,” the septenary number is “01010” (here, “007” in the decimal system corresponds to “010” in the septenary system, each serial number being converted separately).

Then, the spatial code generation section 181A converts each digit of the identification number into a color code. In the present embodiment, the color code “001” is set as “0” in the septenary system, the color code “010” is set as “1” in the septenary system, and the color code “100” is set as “2” in the septenary system. Further, the color code “011” is set as “3” in the septenary system, the color code “101” is set as “4” in the septenary system, the color code “110” is set as “5” in the septenary system, and the color code “111” is set as “6” in the septenary system. A code obtained by arranging the color codes in the arrangement sequence of the corresponding digits of the identification number forms the spatial code. For example, when the identification number is “01000,” the color codes are “001,” “010,” “001,” “001,” and “001,” and thus the spatial code is obtained as “001010001001001.” Further, when the identification number is “01331,” the color codes are “001,” “010,” “011,” “011,” and “010,” and thus the spatial code is obtained as “001010011011010.”

Further, since the identification number is a five-digit number, the identification number is converted into five color codes. The color code corresponding to the fifth digit of the identification number is referred to as a first color code, the color code corresponding to the fourth digit of the identification number is referred to as a second color code, and the color code corresponding to the third digit of the identification number is referred to as a third color code. Further, the color code corresponding to the second digit of the identification number is referred to as a fourth color code, and the color code corresponding to the first digit of the identification number is referred to as a fifth color code.
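The digit-to-color-code conversion can be sketched directly from the correspondence above; a minimal Python illustration (the function name is an assumption):

```python
# Each septenary digit (0-6) maps to a 3-bit color code; the five codes,
# concatenated in digit order, form the spatial code.
DIGIT_TO_COLOR_CODE = {
    '0': '001', '1': '010', '2': '100', '3': '011',
    '4': '101', '5': '110', '6': '111',
}

def spatial_code(identification: str) -> str:
    """Convert a five-digit septenary identification number, e.g. '01000'."""
    return ''.join(DIGIT_TO_COLOR_CODE[d] for d in identification)

print(spatial_code('01000'))  # '001010001001001', as in the example above
```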
At least one of the first color code, the second color code, the third color code, and the fourth color code corresponds to first partial information. Further, at least one of the second color code, the third color code, the fourth color code, and the fifth color code corresponds to second partial information.

As each of the color codes, there is set a corresponding color. The three digits of each code are associated with the color components of blue, green, and red, respectively. Specifically, when a digit in the color code is “0,” the corresponding color component is not included, and when the digit is “1,” the corresponding color component is included. The color code “001” represents that the red component is included, and the color represented by the color code “001” is red. The color code “010” represents that the green component is included, and the color represented by the color code “010” is green. The color code “100” represents that the blue component is included, and the color represented by the color code “100” is blue. The color code “011” represents that the red component and the green component are included, and the color represented by the color code “011” is yellow. The color code “101” represents that the red component and the blue component are included, and the color represented by the color code “101” is magenta. The color code “110” represents that the green component and the blue component are included, and the color represented by the color code “110” is cyan. The color code “111” represents that the red component, the green component, and the blue component are included, and the color represented by the color code “111” is white. The color code “000” represents that none of the red component, the green component, and the blue component is included, and the color represented by the color code “000” is black.

In the present embodiment, the color code is expressed by a combination of the color components of red, green, and blue. In other words, although it is possible to express 2³ color codes, black is used for the pixels other than the feature regions 250 in the pattern image 230. Therefore, it is possible to express 2³−1 color codes.

The projection control section 183A controls the image processing section 150A and the image projection section 160A to generate the image light based on the image data, and projects the image light thus generated on the projection surface 5 in an enlarged manner. For example, the projection control section 183A makes the image processing section 150A generate the pattern image data in which the colors corresponding to the color codes are formed in the feature regions 250, with reference to the pattern information table 175A generated by the spatial code generation section 181A. Specifically, the projection control section 183A first instructs the image processing section 150A to retrieve the first color code and the Y-coordinate and the X-coordinate of each feature region 250, and then form the color corresponding to the first color code at the Y-coordinate and the X-coordinate thus retrieved. Further, the projection control section 183A instructs the image processing section 150A to form black in the area other than the feature regions 250 of the pattern image data.
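Putting the color semantics together, the generation of one frame of pattern image data (feature regions in their code colors, black elsewhere) can be sketched as follows; numpy and all names are illustrative assumptions, not the section 150A's actual processing.

```python
# Sketch: render one pattern image. Per the examples above, the three bits
# of a color code correspond to the (blue, green, red) components.
import numpy as np

def color_of(code: str) -> tuple:
    """'001' -> red (255, 0, 0)."""
    b, g, r = (int(bit) for bit in code)
    return (255 * r, 255 * g, 255 * b)

def render_pattern(width: int, height: int, regions) -> np.ndarray:
    """regions: iterable of (x, y, color_code) for the feature regions 250."""
    frame = np.zeros((height, width, 3), dtype=np.uint8)  # black background
    for x, y, code in regions:
        frame[y, x] = color_of(code)                      # one-pixel regions
    return frame

frame = render_pattern(6, 6, [(2, 2, '001'), (5, 2, '010')])
print(frame[2, 2], frame[2, 5])  # [255 0 0] (red), [0 255 0] (green)
```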
When the image processing section 150A generates the pattern image data in the frame memory 155A, the projection control section 183A controls the image processing section 150A and the image projection section 160A to generate the image light based on the pattern image data, and projects the image light thus generated on the projection surface 5. Thus, the pattern image 230a in which the color corresponding to the first color code is formed in each of the feature regions 250 is displayed on the projection surface 5.

When imaging of the pattern image in which the color corresponding to the first color code is formed in each of the feature regions 250 is completed, the projection control section 183A then instructs the image processing section 150A to retrieve the second color code and the Y-coordinate and the X-coordinate of each of the feature regions 250, and then form the color corresponding to the second color code at the Y-coordinate and the X-coordinate thus retrieved. Subsequently, the projection control section 183A repeats substantially the same processing to make the image processing section 150A generate the pattern image data in which the colors corresponding respectively to the third color code, the fourth color code, and the fifth color code are formed in the feature regions 250. Thus, on the projection area 10A of the projection surface 5, there are sequentially displayed the pattern images in which the colors corresponding respectively to the first color code, the second color code, the third color code, the fourth color code, and the fifth color code are formed in each of the feature regions 250.

The pattern image 230 in which the color corresponding to the first color code is formed in each of the feature regions 250 is described as a pattern image 230a. The pattern image 230 in which the color corresponding to the second color code is formed in each of the feature regions 250 is described as a pattern image 230b. The pattern image 230 in which the color corresponding to the third color code is formed in each of the feature regions 250 is described as a pattern image 230c. The pattern image 230 in which the color corresponding to the fourth color code is formed in each of the feature regions 250 is described as a pattern image 230d. The pattern image 230 in which the color corresponding to the fifth color code is formed in each of the feature regions 250 is described as a pattern image 230e.

The pattern image 230a corresponds to a first pattern image. Further, when the pattern image 230a corresponds to the first pattern image, the pattern image 230b corresponds to a second pattern image. Further, the pattern image 230b also corresponds to the first pattern image. Further, when the pattern image 230b corresponds to the first pattern image, the pattern image 230c corresponds to the second pattern image. Hereinafter, the same applies to the pattern image 230c, the pattern image 230d, and the pattern image 230e.

Besides the above, the projection control section 183A makes the image processing section 150A generate the image data for displaying images of the three primary colors of red, green, and blue, and black, on the entire surface of the projection area 10A, respectively.

The imaging control section 185A controls the imaging section 120A to make the imaging section 120A generate the taken image. When the imaging control section 185A receives the notice from the projection control section 183A that the projection of the pattern image is completed, the imaging control section 185A makes the imaging section 120A perform imaging.
The imaging section 120A outputs the taken image thus generated to the control section 170A. The imaging control section 185A makes the storage section 171A store the taken image generated by the imaging section 120A.

The color determination section 187A retrieves the taken images in which the pattern images 230a, 230b, 230c, 230d, and 230e are respectively imaged from the storage section 171A. Hereinafter, the taken image in which the pattern image 230a is imaged is referred to as a first taken image, the taken image in which the pattern image 230b is imaged is referred to as a second taken image, and the taken image in which the pattern image 230c is imaged is referred to as a third taken image. Further, the taken image in which the pattern image 230d is imaged is referred to as a fourth taken image, and the taken image in which the pattern image 230e is imaged is referred to as a fifth taken image. The color determination section 187A performs a color determination process of determining the color of each of the feature regions 250 imaged in the first through fifth taken images.

The calibration data generation section 189A performs a spatial code detection process and a data generation process. The calibration data generation section 189A restores the spatial code based on the color of each of the feature regions 250 in the first through fifth taken images determined by the color determination section 187A. The calibration data generation section 189A converts the colors of the feature regions 250 at the same position in the first taken image through the fifth taken image into the color codes, and then arranges the color codes thus converted in the order of the first taken image through the fifth taken image to restore the spatial code. Then, the calibration data generation section 189A generates the calibration data with reference to the pattern information table 175A. The calibration data generation section 189A generates the calibration data that associates the imaging coordinates of the first taken image through the fifth taken image in which the spatial codes are detected with the panel coordinates registered in the pattern information table 175A. Using the calibration data, positions on the taken image generated by the imaging section 120A are converted into positions on the liquid crystal panel 163.

Further, when a spatial code having the same value as the spatial code restored at a certain imaging coordinate is restored at an imaging coordinate which is not adjacent to that imaging coordinate, the calibration data generation section 189A does not use these spatial codes in the generation of the calibration data. For example, when a highly reflective member such as a mirror is located near the projection surface 5, a spatial code with a false value is restored in some cases. Therefore, when spatial codes having the same value are detected at imaging coordinates that are not adjacent to each other, the calibration data generation section 189A does not use these spatial codes in the generation of the calibration data.

The calibration control section 191A is a function provided only to the projector 100A, which functions as the master machine. The calibration control section 191A instructs the projectors 100B, 100C, and 100D to display the pattern image, to image the projection surface 5, and so on.

FIG. 5 is a flowchart showing the operation of the projector 100A as the master machine. The operation of the projector 100A will be described with reference to the flowchart shown in FIG. 5.
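The rejection of duplicated spatial codes can be sketched as follows. This is a minimal illustration assuming the codes have already been restored into a dict keyed by imaging coordinate; the names and the 8-neighborhood adjacency test are our assumptions, not from the source.

```python
# Hypothetical sketch: discard spatial codes that appear at two or more
# imaging coordinates which are not adjacent to each other (e.g. caused
# by a mirror-like reflection near the projection surface).
from collections import defaultdict

def adjacent(p, q):
    """8-neighborhood adjacency between two (x, y) imaging coordinates."""
    return max(abs(p[0] - q[0]), abs(p[1] - q[1])) <= 1

def filter_duplicates(restored):
    """restored maps (x, y) imaging coordinate -> spatial code string."""
    by_code = defaultdict(list)
    for coord, code in restored.items():
        by_code[code].append(coord)
    kept = {}
    for code, coords in by_code.items():
        # Keep the code only if every pair of its occurrences is adjacent.
        ok = all(adjacent(p, q) for i, p in enumerate(coords)
                 for q in coords[i + 1:])
        if ok:
            for coord in coords:
                kept[coord] = code
    return kept
```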
In the position detection system 1A according to the present embodiment, the projector 100A and the projector 100C, the projection areas 10 of which are not adjacent to each other, project images such as the pattern images 230 on the projection surface 5 at the same time to generate the calibration data. Subsequently, the projector 100B and the projector 100D project images such as the pattern images 230 on the projection surface 5 at the same time to generate the calibration data in accordance with the control by the projector 100A. Thus, the time required for the generation of the calibration data is reduced. In the following description, only the case in which the projectors 100A and 100C project the images on the projection surface 5 at the same time to generate the calibration data will be described, and the description of the operations of the projector 100B and the projector 100D will be omitted.

First, the control section 170A determines (step S1) whether or not the operation of instructing a start of the calibration has been received by the operation reception section 130A. When the control section 170A has not received the operation (NO in the step S1), the control section 170A stands ready until it receives the operation. When the control section 170A has received the operation (YES in the step S1), the control section 170A instructs (step S2) the projectors 100B, 100C, and 100D to generate the spatial codes. Then, the control section 170A sets (step S3) the second serial numbers to the feature regions 250 set in advance. The control section 170A registers the first serial numbers set by the user and the second serial numbers thus set in the pattern information table 175A so as to be associated with the coordinates of the feature regions 250.

Then, the control section 170A generates (step S4) the spatial codes. The control section 170A generates numbers each obtained by arranging the first serial number as the high digits and the second serial number as the low digits, and then converts the numbers thus generated into septenary numbers to generate the identification numbers. Then, the control section 170A converts each of the digits of the identification number thus generated into a color code to generate the spatial code.

Then, the control section 170A outputs (step S5) the instruction signal instructing the display and imaging of a black image to the projectors 100B, 100C, and 100D. After the control section 170A outputs the instruction signal to the projectors 100B, 100C, and 100D, the control section 170A makes the image processing section 150A generate black image data for displaying a black image on the entire surface of the projection area 10A. The image processing section 150A generates the black image data in the frame memory 155A, and then outputs the black image data thus generated to the image projection section 160A as the image information. The image projection section 160A generates the image light based on the image information thus input, and then projects the image light thus generated on the projection surface 5 in an enlarged manner. Thus, the black image is displayed (step S6) in the entire area of the projection surface 5.

FIG. 6 is a diagram showing the state in which the black images 210A, 210C are displayed on the projection surface 5. It should be noted that FIG. 6 shows the state in which the display is performed only in the projection areas 10A, 10B, and 10C on the projection surface 5.
This is because the description of the operations of the projectors 100B and 100D is omitted. FIG. 6 shows the projection surface 5 on which the black image 210A is displayed in the projection area 10A, the black image 210B is displayed in the projection area 10B, and the black image 210C is displayed in the projection area 10C.

Then, the control section 170A makes the imaging section 120A perform imaging (step S7) to generate the taken image obtained by imaging a range including the projection area 10A. The control section 170A makes the storage section 171A store the taken image generated by the imaging section 120A. The taken image obtained by taking the black image is hereinafter referred to as a black taken image.

Then, the control section 170A determines (step S8) whether or not a notification signal has been received from the projectors 100B, 100C, and 100D. The notification signal is a signal for giving notice that the generation of the black taken image is completed. When there is a projector 100 from which the notification signal has not been received (NO in the step S8), the control section 170A waits until the notification signals are received from all of the projectors 100. When the control section 170A has received the notification signals from all of the projectors 100B, 100C, and 100D (YES in the step S8), the control section 170A outputs (step S9) an instruction signal instructing display and imaging of a monochrome image of a primary color to the projector 100C. For example, the control section 170A first outputs an instruction signal instructing the display and the imaging of the monochrome image of red to the projector 100C.

Then, the control section 170A makes the image processing section 150A generate red image data for displaying a red image on the entire surface in the projection area 10A. The image processing section 150A generates the red image data in the frame memory 155A, and then outputs the red image data thus generated to the image projection section 160A as the image information. Thus, the red image is displayed (step S10) on the entire surface in the projection areas 10A and 10C. FIG. 7 shows a state in which the red image 220A is displayed in the projection area 10A of the projection surface 5, and the red image 220C is displayed in the projection area 10C.

Then, the control section 170A makes the imaging section 120A perform imaging (step S11) to generate the taken image obtained by imaging a range including the projection area 10A. The control section 170A makes the storage section 171A store the taken image generated by the imaging section 120A. The taken image obtained by taking the red image is hereinafter referred to as a first primary-color taken image.

Then, the control section 170A determines (step S12) whether or not the monochrome images of all of the primary colors have been displayed on the projection surface 5 and taken. When the taken images of all of the primary colors have not been generated (NO in the step S12), the control section 170A returns to the step S9, and then repeats the steps S9 through S12 with respect to the green color and the blue color in substantially the same manner. The taken image obtained by taking the green image is hereinafter referred to as a second primary-color taken image, and the taken image obtained by taking the blue image is hereinafter referred to as a third primary-color taken image.
When the generation of the taken images of all of the primary colors is completed (YES in the step S12), the control section 170A determines (step S13) whether or not the notification signal has been received from the projector 100C. This notification signal is a signal giving notice that the projector 100C has generated the taken image of the monochrome image instructed in the step S9. When the control section 170A has not received the notification signal (NO in the step S13), the control section 170A waits until it receives the notification signal.

When the control section 170A has received the notification signal from the projector 100C (YES in the step S13), the control section 170A outputs (step S14) an instruction signal instructing the projector 100C to display and image the pattern image 230a in which the color corresponding to the first color code is formed. Then, the control section 170A instructs (step S15) the image processing section 150A to generate the pattern image data in which the color corresponding to the first color code is formed. Specifically, the control section 170A instructs the image processing section 150A to retrieve the first color code in the spatial code generated in the step S4 and the coordinate values of each feature region 250, and then form the color corresponding to the first color code at the Y-coordinate and the X-coordinate thus retrieved.

When the image processing section 150A generates the pattern image data in the frame memory 155A, the projection control section 183A controls the image processing section 150A and the image projection section 160A to generate the image light based on the pattern image data, and projects the image light thus generated on the projection surface 5. Thus, the pattern image 230a in which the color corresponding to the first color code is formed in each of the feature regions 250 is displayed (step S16) on the projection areas 10A and 10C. FIG. 8 shows a state in which the pattern image 230a is displayed in each of the projection areas 10A, 10C of the projection surface 5. As shown in FIG. 8, a black image is displayed in the area of the pattern image 230a where the feature regions 250 are not formed. Processing of displaying the pattern image 230a in each of the projection areas 10A and 10C is referred to as a first display process.

Then, the control section 170A makes the imaging section 120A perform imaging (step S17) to generate the taken image obtained by imaging a range including the projection area 10A. The control section 170A makes the storage section 171A store the taken image generated by the imaging section 120A. The taken image obtained by taking the pattern image in which the color corresponding to the first color code is formed is referred to as a first pattern taken image. Processing of taking the first pattern taken image is referred to as a first imaging process.

Then, the control section 170A determines (step S18) whether or not the pattern images 230 in which the colors corresponding to all of the color codes from the first color code through the fifth color code are formed have been displayed on the projection surface 5 and the corresponding taken images have been generated. When the control section 170A has not generated the taken images of all of the pattern images 230 (NO in the step S18), the control section 170A returns to the processing in the step S14.
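As an aid to following steps S5 through S18, the master projector's capture sequence can be sketched as a simple loop; `display` and `capture` are hypothetical stand-ins for the image processing/projection and imaging sections, not functions named in the source.

```python
# Hypothetical sketch of the master projector's capture sequence
# (steps S5-S18): a black frame, the three primaries, then the five
# pattern images corresponding to the five color codes.

def run_capture_sequence(display, capture):
    """display(frame_id) shows a frame; capture() returns a taken image."""
    taken = {}
    taken["black"] = _show_and_capture(display, capture, "black")   # S5-S7
    for primary in ("red", "green", "blue"):                        # S9-S12
        taken[primary] = _show_and_capture(display, capture, primary)
    for i in range(5):                                              # S14-S18
        taken[f"pattern_{i}"] = _show_and_capture(
            display, capture, f"pattern_color_code_{i + 1}")
    return taken

def _show_and_capture(display, capture, frame_id):
    display(frame_id)
    return capture()
```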
The control section 170A similarly repeats the processing of the steps S14 through S17 with respect also to the second color code, the third color code, the fourth color code, and the fifth color code. The taken image obtained by taking the pattern image 230b in which the color corresponding to the second color code is formed is hereinafter referred to as a second pattern taken image. Further, the taken image obtained by taking the pattern image 230c in which the color corresponding to the third color code is formed is referred to as a third pattern taken image. The taken image obtained by taking the pattern image 230d in which the color corresponding to the fourth color code is formed is referred to as a fourth pattern taken image. The taken image obtained by taking the pattern image 230e in which the color corresponding to the fifth color code is formed is referred to as a fifth pattern taken image. Processing of displaying the pattern image 230b in the projection areas 10A and 10C is referred to as a second display process, and processing of generating the second pattern taken image is referred to as a second imaging process.

When the control section 170A has generated all of the taken images from the first pattern taken image through the fifth pattern taken image (YES in the step S18), the control section 170A calculates (step S19) threshold values. The control section 170A calculates the threshold values for determining the colors of the feature regions 250 formed in the first pattern taken image through the fifth pattern taken image based on the black taken image generated in the step S7, and the first primary-color taken image, the second primary-color taken image, and the third primary-color taken image generated in the step S11. The control section 170A calculates the threshold value for each of the primary colors and for each of the feature regions 250.

Then, the control section 170A determines (step S20) the colors of the feature regions 250 imaged in the first pattern taken image through the fifth pattern taken image generated in the step S17, based on the threshold values calculated in the step S19. The control section 170A determines the colors of the feature regions 250 imaged in the first pattern taken image through the fifth pattern taken image, and then converts the colors thus determined into the respective color codes. Subsequently, the control section 170A arranges the color codes of the colors of the feature regions 250 imaged at the same position of the first pattern taken image through the fifth pattern taken image in sequence to restore the spatial code (step S21).

The control section 170A restores the spatial code for each of the feature regions 250, and then obtains the panel coordinate associated with the spatial code thus restored with reference to the pattern information table 175A. Then, the control section 170A generates (step S22) the calibration data in which the coordinate of the taken image in which the spatial code thus restored is detected and the panel coordinate thus obtained are associated with each other. In other words, the control section 170A generates the calibration data for converting the taken image coordinate into the panel coordinate. The control section 170A makes the storage section 171A store the calibration data thus generated.

Then, the control section 170A generates correction data. The correction data is data to be used for shape correction such as a keystone distortion correction, or for color correction of the image data.
The case of generating the correction data to be used for the color correction will hereinafter be described. The control section 170A first instructs the image processing section 150A to generate the image data of a measurement pattern set in advance. The image processing section 150A generates the image data of the measurement pattern in the frame memory 155A, and then outputs the image data of the measurement pattern thus generated to the image projection section 160A as the image information. Thus, the image of the measurement pattern is displayed (step S23) on the entire surface of the projection area 10A.

Then, the control section 170A makes the imaging section 120A perform imaging (step S24) to generate the taken image obtained by imaging a range including the projection area 10A. The control section 170A makes the storage section 171A store the taken image generated by the imaging section 120A. Then, the control section 170A retrieves the taken image from the storage section 171A, selects pixels included in the taken image thus retrieved, and then performs (step S25) the coordinate conversion of the coordinates of the pixels thus selected into the panel coordinates using the calibration data. Subsequently, the control section 170A generates the correction data to be used for the color correction based on the pixel values at the panel coordinates thus converted and the pixel values of the pixels of the taken image thus selected. The control section 170A repeatedly performs the processing described above with respect to all of the pixels of the taken image, or to the pixels at representative points set therein, to thereby generate (step S26) the correction data.

Next, a calculation method of the threshold values for determining the colors of the feature regions 250 of the pattern image will be described. First, the control section 170A subtracts the pixel values of the corresponding pixels of the black taken image generated in the step S7 from the pixel values of the pixels of the first primary-color taken image, the second primary-color taken image, and the third primary-color taken image generated in the step S11. This is the processing of removing an influence of the background, including environmental light and so on, from the first primary-color taken image, the second primary-color taken image, and the third primary-color taken image.

Then, the control section 170A obtains the pixel value of red at a reference position from the first primary-color taken image from which the influence of the background is removed. Similarly, the control section 170A obtains the pixel value of green at the reference position from the second primary-color taken image from which the influence of the background is removed, and obtains the pixel value of blue at the reference position from the third primary-color taken image from which the influence of the background is removed. As the reference position, it is possible to select, for example, the pixel located at the center of the taken image, but the reference position is not limited to the center of the taken image. For example, in the following description, it is assumed that the pixel value of red at the reference position obtained from the first primary-color taken image is 87, the pixel value of green at the reference position obtained from the second primary-color taken image is 144, and the pixel value of blue at the reference position obtained from the third primary-color taken image is 71.
Then, the control section 170A obtains the pixel value of red in each of the feature regions 250 in the first primary-color taken image, the pixel value of green in each of the feature regions 250 in the second primary-color taken image, and the pixel value of blue in each of the feature regions 250 in the third primary-color taken image. Since the threshold value is generated for each of the feature regions 250, the control section 170A obtains the pixel value for each of the feature regions 250. For example, in the following description, it is assumed that the pixel value of red in the feature region 250 thus obtained is 79, the pixel value of green in the feature region 250 thus obtained is 137, and the pixel value of blue in the feature region 250 thus obtained is 69.

The control section 170A sets, as the threshold value of each color, a value obtained by multiplying the ratio of the pixel value in the feature region 250 to the pixel value at the reference position by 0.5. For example, the threshold value Rt of red becomes 79/87 × 0.5 = 0.45, the threshold value Gt of green becomes 137/144 × 0.5 = 0.48, and the threshold value Bt of blue becomes 69/71 × 0.5 = 0.49. The control section 170A calculates these threshold values for each of the feature regions 250.

Then, the control section 170A determines the color of each of the feature regions 250 based on the threshold values thus calculated. First, the control section 170A obtains the pixel values of red, green, and blue of the pixels at the reference position from each of the first primary-color taken image, the second primary-color taken image, and the third primary-color taken image from which the pixel values of the black taken image have been subtracted to remove the influence of the background. The pixel values of red, green, and blue at the reference position obtained from the first primary-color taken image are described as (Rr, Gr, Br). The pixel values of red, green, and blue at the reference position obtained from the second primary-color taken image are described as (Rg, Gg, Bg). The pixel values of red, green, and blue at the reference position obtained from the third primary-color taken image are described as (Rb, Gb, Bb).

Then, the control section 170A obtains a conversion formula for converting the pixel values of red, green, and blue of each of the first through fifth pattern taken images into the pixel values of red, green, and blue of an image displayed by the projector 100A. The case in which the target is the first pattern taken image will hereinafter be described. The pixel values of red, green, and blue of the image displayed by the projector 100A are considered to be normalized into the range of 0 through 1. The control section 170A first creates a matrix M obtained by arranging (Rr, Gr, Br), (Rg, Gg, Bg), and (Rb, Gb, Bb). The matrix M is described as follows.

$$M = \begin{pmatrix} R_r & R_g & R_b \\ G_r & G_g & G_b \\ B_r & B_g & B_b \end{pmatrix}$$

Then, the control section 170A obtains an inverse matrix of the matrix M to convert the pixel values of red, green, and blue of the first pattern taken image into the pixel values of red, green, and blue of the image displayed by the projector 100A using Formula (1) described below.

$$\begin{pmatrix} R_{pj} \\ G_{pj} \\ B_{pj} \end{pmatrix} = \begin{pmatrix} R_r & R_g & R_b \\ G_r & G_g & G_b \\ B_r & B_g & B_b \end{pmatrix}^{-1} \begin{pmatrix} R_c \\ G_c \\ B_c \end{pmatrix} \quad (1)$$

A vector (Rc, Gc, Bc) shown in Formula (1) described above represents the pixel values of red, green, and blue of the first pattern taken image. Further, a vector (Rpj, Gpj, Bpj) represents the pixel values of red, green, and blue of the image displayed by the projector 100A.
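To make the threshold rule and Formula (1) concrete, here is a small NumPy sketch using the values assumed in the text; the function names are ours, and the matrix layout follows the definition of M above (the reference RGB triplets arranged as columns).

```python
# Sketch of the per-feature-region thresholds and of Formula (1).
# Values are the ones assumed in the text; names are illustrative.
import numpy as np

def thresholds(ref_rgb, feature_rgb):
    """Threshold per color = (feature-region value / reference value) * 0.5."""
    return tuple(f / r * 0.5 for f, r in zip(feature_rgb, ref_rgb))

Rt, Gt, Bt = thresholds((87, 144, 71), (79, 137, 69))  # ~0.45, ~0.48, ~0.49

def to_projector_rgb(ref_r, ref_g, ref_b, taken_rgb):
    """Formula (1): M's columns are (Rr,Gr,Br), (Rg,Gg,Gb... i.e. the
    background-subtracted reference triplets for red, green, and blue."""
    M = np.column_stack([ref_r, ref_g, ref_b]).astype(float)
    return np.linalg.inv(M) @ np.asarray(taken_rgb, dtype=float)
```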
For example, it is assumed that the pixel values of red, green, and blue at the reference position in the first primary-color taken image are (Rr, Gr, Br) = (87, 20, 6). It is assumed that the pixel values of red, green, and blue at the reference position in the second primary-color taken image are (Rg, Gg, Bg) = (38, 144, 23). Further, it is assumed that the pixel values of red, green, and blue at the reference position in the third primary-color taken image are (Rb, Gb, Bb) = (3, 16, 71). Further, it is assumed that the pixel values of red, green, and blue of the first pattern taken image are (57, 148, 90), respectively. In this case, the control section 170A substitutes the values in Formula (1) described above to convert the pixel values of red, green, and blue of the first pattern taken image into the pixel values of red, green, and blue of the image displayed by the projector 100A. The conversion equation is described as Formula (2) below.

$$\begin{pmatrix} R_{pj} \\ G_{pj} \\ B_{pj} \end{pmatrix} = \begin{pmatrix} 87 & 38 & 3 \\ 20 & 144 & 16 \\ 6 & 23 & 71 \end{pmatrix}^{-1} \begin{pmatrix} 57 \\ 148 \\ 90 \end{pmatrix} \quad (2)$$

By the calculation of Formula (2) described above, the pixel values of red, green, and blue of the image displayed by the projector 100A are obtained as (0.23, 0.89, 0.96). The control section 170A compares the pixel values of red, green, and blue with the respective threshold values to binarize the pixel values. The pixel value of red is 0.23, which is smaller than the threshold value Rt = 0.45. The pixel value of green is 0.89, which is larger than the threshold value Gt = 0.48. The pixel value of blue is 0.96, which is larger than the threshold value Bt = 0.49. Therefore, the control section 170A determines that the binarized values of red, green, and blue of the target feature region 250 are (0, 1, 1), which identify the first color code. The control section 170A determines the color code of each of the feature regions 250 with respect to the second pattern taken image through the fifth pattern taken image in substantially the same manner. The control section 170A arranges the values of the first color code, the second color code, the third color code, the fourth color code, and the fifth color code in this order to restore the spatial code, and then obtains the coordinate corresponding to the spatial code with reference to the pattern information table 175A. Thus, the calibration data for converting the taken image coordinate into the panel coordinate is generated.

FIG. 9 is a block diagram showing a configuration of a position detection system 1B according to a modified example. The position detection system 1B shown in FIG. 9 has a configuration provided with imaging devices 500A, 500B, 500C, and 500D and a control device 700. The control device 700 functions as the image supply device 300 in the embodiment described above, and supplies the projectors 100A, 100B, 100C, and 100D with the image data. Further, the control device 700 functions as the projector 100A in the embodiment described above, and controls operations of the projectors 100A, 100B, 100C, and 100D. When generating the calibration data, the control device 700 controls the projectors 100A, 100B, 100C, and 100D to display the monochrome images of the primary colors and the pattern image 230 in the projection areas 10A, 10B, 10C, and 10D, respectively. Further, the control device 700 controls the imaging devices 500A, 500B, 500C, and 500D to image the projection surface SC. The imaging range of the imaging device 500A is a range including the projection area 10A and the projection area 10B adjacent to the projection area 10A.
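The numbers in Formula (2) can be verified directly; below is a minimal check under the same assumed values (NumPy's solve is used in place of an explicit inverse).

```python
# Verifying Formula (2) and the subsequent binarization.
import numpy as np

M = np.array([[87, 38, 3],
              [20, 144, 16],
              [6, 23, 71]], dtype=float)
taken = np.array([57, 148, 90], dtype=float)

rgb_pj = np.linalg.solve(M, taken)          # same result as inv(M) @ taken
print(np.round(rgb_pj, 2))                  # -> [0.23 0.89 0.96]

thresholds = np.array([0.45, 0.48, 0.49])   # (Rt, Gt, Bt) from the example
print(tuple((rgb_pj > thresholds).astype(int)))  # -> (0, 1, 1)
```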
The imaging device 500A images the imaging range to generate the taken image in response to an instruction signal input from the control device 700. The imaging device 500A outputs the taken image thus generated to the control device 700. The imaging range of the imaging device 500B is a range including the projection area 10B, and the projection areas 10A and 10C adjacent to the projection area 10B. The imaging device 500B images the imaging range to generate the taken image in response to the instruction signal input from the control device 700. The imaging device 500B outputs the taken image thus generated to the control device 700. The imaging range of the imaging device 500C is a range including the projection area 10C, and the projection areas 10B and 10D adjacent to the projection area 10C. The imaging device 500C images the imaging range to generate the taken image in response to the instruction signal input from the control device 700. The imaging device 500C outputs the taken image thus generated to the control device 700. The imaging range of the imaging device 500D is a range including the projection area 10D and the projection area 10C adjacent to the projection area 10D. The imaging device 500D images the imaging range to generate the taken image in response to the instruction signal input from the control device 700. The imaging device 500D outputs the taken image thus generated to the control device 700.

The control device 700 generates the calibration data in which the taken image coordinate and the panel coordinate are associated with each other based on the taken images input from the imaging devices 500A, 500B, 500C, and 500D. Even in such a configuration, in which the projectors 100 are not provided with the imaging sections 120 and the projection surface SC is imaged by the externally coupled imaging devices 500, it is possible to obtain substantially the same advantages as those of the position detection system 1A described above.

As described hereinabove, the projector 100A according to the present embodiment executes the first display process, the first imaging process, the second display process, the second imaging process, the color determination process, the spatial code detection process, and the data generation process. The first display process is the processing of displaying the first pattern image on the projection surface 5, wherein the first pattern image has the plurality of feature regions 250, and the color associated with the first color code, obtained by dividing the spatial code set to each of the feature regions 250, is formed in the corresponding one of the feature regions 250. The first imaging process is the processing of obtaining the first pattern taken image obtained by imaging the projection surface 5 on which the first pattern image is projected. The second display process is the processing of displaying the second pattern image on the projection surface 5, wherein the second pattern image has the plurality of feature regions 250, and the color associated with the second color code, which is the information of the spatial code other than the first color code set to each of the feature regions 250, is formed in the corresponding one of the feature regions 250. The second imaging process is the processing of obtaining the second pattern taken image obtained by imaging the projection surface 5 on which the second pattern image is projected.
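A compact sketch of the data generation process: each imaging coordinate whose spatial code was restored is paired with the panel coordinate registered for that code. The table layout here is an assumption for illustration, not the disclosure's data format.

```python
# Hypothetical sketch of calibration-data generation: the pattern
# information table maps spatial code -> panel coordinate, and the
# restored codes map imaging coordinate -> spatial code.

def build_calibration_data(pattern_table, restored):
    """Return {imaging (x, y): panel (x, y)} for every detected code."""
    calibration = {}
    for imaging_xy, code in restored.items():
        panel_xy = pattern_table.get(code)
        if panel_xy is not None:
            calibration[imaging_xy] = panel_xy
    return calibration

# Usage: positions on a taken image can then be converted to positions
# on the liquid crystal panel, e.g. calibration[(412, 207)] -> (96, 54).
```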
The color determination process is the processing of respectively determining the colors of the plurality of feature regions 250 imaged in the first pattern taken image and the colors of the plurality of feature regions 250 imaged in the second pattern taken image. The spatial code detection process is the processing of obtaining the first partial information and the second partial information respectively set to the corresponding feature regions 250 of the first pattern taken image and the second pattern taken image based on the determination result of the colors in the plurality of feature regions 250, and then detecting the spatial code set to each of the feature regions 250 based on the first partial information and the second partial information thus obtained. The data generation process is the processing of generating the calibration data associating the position of the image displayed on the projection surface 5 and the position of the taken image obtained by imaging the range including the projection surface 5 with each other based on the spatial codes thus detected.

An arrangement sequence is defined for the plurality of spatial codes set to the plurality of feature regions 250, and the plurality of spatial codes are respectively set to the plurality of feature regions 250 in accordance with the order set in advance. Therefore, even when some of the spatial codes cannot be detected, it is possible to interpolate the spatial codes which failed to be detected. Therefore, even when not all of the spatial codes can be detected, it is possible to generate the calibration data associating the position of the image displayed on the projection surface 5 and the position of the taken image obtained by imaging the range including the projection surface 5 with each other.

The projector 100A executes a threshold value calculation process. In the threshold value calculation process, first, the primary-color images of the colors set to the plurality of feature regions 250 are displayed on the projection surface 5. Then, the plurality of primary-color taken images obtained by imaging the projection surface 5 on which the primary-color images are displayed is obtained. Then, the threshold value for determining the primary color formed in each of the feature regions 250 of the first pattern taken image and the second pattern taken image is calculated for each of the feature regions 250 based on the ratio between the pixel value at the reference point, set in advance, of the primary-color taken image and the pixel value in the corresponding one of the feature regions 250 imaged in the primary-color taken image. The projector 100A determines the color formed in each of the feature regions 250 in the first pattern taken image and the second pattern taken image using the threshold values thus calculated. Therefore, it is possible to accurately determine the color formed in each of the feature regions 250 of the first pattern taken image and the second pattern taken image.

The spatial code includes the first serial number set to the display device for displaying the first pattern image and the second pattern image on the projection surface 5. Therefore, it is possible to identify the images which are displayed on the projection surface 5 by the plurality of projectors 100.
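Because consecutive second serial numbers are assigned in a known raster order, a missed code can often be inferred from its detected neighbors. Below is a minimal sketch of that idea; the neighbor-based rule is our illustration of the interpolation the text mentions, not a procedure spelled out in the source.

```python
# Hypothetical sketch: infer an undetected code from its row neighbors,
# exploiting the predefined raster ordering of the second serial numbers
# (assigned left to right within a row, row by row).

def interpolate_missing(codes_by_region, region_xy):
    """codes_by_region maps (col, row) -> decoded serial number (int);
    returns the inferred serial number for the missing region, or None."""
    col, row = region_xy
    left = codes_by_region.get((col - 1, row))
    right = codes_by_region.get((col + 1, row))
    if left is not None and right is not None and right - left == 2:
        return left + 1  # the missing region sits between consecutive codes
    return None

# Usage: serial = interpolate_missing(decoded, (12, 5))
```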
The spatial code is an identification number obtained by converting a number, constituted by the first serial number set to the projector 100 for displaying the first pattern image and the second pattern image on the projection surface 5 and the second serial number associated with each of the coordinates of the plurality of feature regions 250 in the first pattern image and the second pattern image, into a septenary number. Each of the digits included in the spatial code is associated with a color expressed by a combination of the primary colors of red, green, and blue. Therefore, it is possible to display the color corresponding to the spatial code using the combination of the primary colors of red, green, and blue.

The embodiment described above is a preferred embodiment of the present disclosure. It should be noted that the present disclosure is not limited to the embodiment described above, but can be implemented with a variety of modifications within the scope or the spirit of the present disclosure. For example, although the embodiment described above is described citing, as an example, the case in which the identification information set to the feature region 250 is the spatial code of the serial numbers, the identification information can be characters or symbols the arrangement sequence of which is defined in advance. It is possible to set the characters or the symbols, the arrangement sequence of which is defined in advance, to the plurality of feature regions 250 in accordance with the sequence set in advance.

Further, for example, in the embodiment described above, the light modulation device provided with the liquid crystal panels 163 is illustrated, but the liquid crystal panels 163 can be transmissive liquid crystal panels or can also be reflective liquid crystal panels. Further, the light modulation device can be provided with a configuration using digital mirror devices instead of the liquid crystal panels 163. Further, it is also possible to adopt a configuration having the digital mirror devices and a color wheel combined with each other. Further, besides the liquid crystal panels or the digital mirror devices, configurations capable of modulating the light emitted by the light source can also be adopted as the light modulation device.

Further, each of the functional sections of the projector 100A shown in FIG. 2 is for showing the functional configuration, and the specific installation forms are not particularly limited. In other words, it is not necessarily required to install hardware individually corresponding to each of the functional sections, but it is obviously possible to adopt a configuration realizing the functions of the plurality of functional sections by a single processor executing a program. Further, a part of the function realized by software in the embodiment described above can also be realized by hardware, and a part of the function realized by hardware can also be realized by software. Besides the above, the specific detailed configuration of each of the sections other than the projector can arbitrarily be modified within the scope or the spirit of the present disclosure.

Further, the processing units of the flowchart shown in FIG. 5 are obtained by dividing the process of the projector 100A in accordance with the major processing contents in order to make the processing of the projector 100A easy to understand. The scope of the present disclosure is not limited by the way of the division or the names of the processing units shown in the flowchart of FIG. 5.
Further, the processing of the control section 170A can also be divided into a larger number of processing units, or can also be divided so that one processing unit includes a larger amount of processing, in accordance with the processing contents. Further, the processing sequence of the flowchart described above is not limited to the illustrated example.

Further, when realizing the position detection method using a computer provided to the projector 100, it is also possible to configure the program to be executed by the computer as an aspect of a recording medium, or as an aspect of a transmission medium for transmitting the program. As the recording medium, there can be used a magnetic or optical recording medium, or a semiconductor memory device. Specifically, there can be cited a portable or rigid recording medium such as a flexible disk, an HDD (Hard Disk Drive), a CD-ROM, a DVD, a Blu-ray disc, a magneto-optical disc, a flash memory, or a card-type recording medium. Further, the recording medium described above can also be a RAM, or a nonvolatile storage device such as a ROM or the HDD, as an internal storage device provided to the server device. Blu-ray is a registered trademark.
11862056

DETAILED DESCRIPTION

Hereinafter, some embodiments of the present disclosure will be described in detail with reference to the accompanying illustrative drawings. In designating elements of the drawings by reference numerals, the same elements will be designated by the same reference numerals although they are shown in different drawings. Furthermore, in the following description of the present disclosure, a detailed description of known functions and configurations incorporated herein will be omitted when it may make the subject matter of the present disclosure rather unclear.

In a case in which terms such as "include," "have," and "comprise" described in the present specification are used, another part may be added unless a more limiting term, such as "only," is used. Terms in a singular form may include plural forms unless stated to the contrary.

In addition, terms such as first, second, A, B, (a), and (b) may be used herein when describing components of the present disclosure. Each of these terms is not used to define an essence, order, or sequence of a corresponding component, but is used merely to distinguish the corresponding component from other component(s).

In the case that it is described that a certain structural element "is connected to," "is coupled to," or "is in contact with" another structural element, it should be interpreted that another structural element may "be connected to," "be coupled to," or "be in contact with" the structural elements, as well as that the certain structural element is directly connected to or is in direct contact with another structural element. Here, other components may be included in one or more of two or more components that are "connected," "coupled," or "in contact with" each other.

In the description of a time relationship related to components, an operation method, a production method, etc., for example, when a temporal order or a flow order is described as "after," "subsequent," "next," "before," etc., an instance which is not continuous may be included unless "immediate" or "direct" is used.

Meanwhile, when a numerical value or corresponding information (e.g., a level, etc.) for a component is mentioned, even though there is no explicit separate description, the numerical value or the corresponding information may be interpreted as including an error range that may be caused by various factors.

Hereinafter, various embodiments of the present disclosure will be described in detail with reference to the accompanying drawings.

FIG. 1 is a system configuration diagram of a display device 100 according to embodiments of the present disclosure.

Referring to FIG. 1, a display driving system of the display device 100 according to embodiments of the present disclosure may include a display panel 110 and a driving circuit (120, 130) for driving the display panel 110.

The display panel 110 may include a display area DA in which an image is displayed and a non-display area NDA in which an image is not displayed. The display panel 110 may include a plurality of sub-pixels SP disposed on a substrate SUB for image display. For example, the plurality of sub-pixels SP may be disposed in the display area DA. In some cases, at least one sub-pixel SP may be disposed in the non-display area NDA. At least one sub-pixel SP disposed in the non-display area NDA is also referred to as a dummy sub-pixel.

The display panel 110 may include a plurality of signal lines disposed on the substrate SUB for driving the plurality of sub-pixels SP.
For example, the plurality of signal lines may include a plurality of data lines DL and a plurality of gate lines GL. The signal lines may further include signal lines other than the plurality of data lines DL and the plurality of gate lines GL according to the structure of the sub-pixel SP. For example, the other signal lines may include driving voltage lines, reference voltage lines, and the like.

The plurality of data lines DL and the plurality of gate lines GL may cross each other. Each of the plurality of data lines DL may be disposed to extend in a first direction. Each of the plurality of gate lines GL may be disposed to extend in a second direction. Here, the first direction may be a column direction and the second direction may be a row direction. In this specification, the column direction and the row direction are relative. For example, the column direction may be a vertical direction and the row direction may be a horizontal direction. As another example, the column direction may be the horizontal direction and the row direction may be the vertical direction. Hereinafter, for convenience of description, it is assumed that each data line DL is disposed to extend in the vertical direction, and each gate line GL is disposed to extend in the horizontal direction.

The driving circuit may include a data driving circuit 120 for driving the plurality of data lines DL and a gate driving circuit 130 for driving the plurality of gate lines GL. The driving circuit may further include a controller 140 for controlling the data driving circuit 120 and the gate driving circuit 130.

The data driving circuit 120 may be a circuit for driving the plurality of data lines DL and may output data signals (also referred to as data voltages) corresponding to an image signal to the plurality of data lines DL. The gate driving circuit 130 may be a circuit for driving the plurality of gate lines GL and may generate gate signals to output the gate signals to the plurality of gate lines GL.

The controller 140 may start a scan according to a timing implemented in each frame and control data driving at an appropriate time according to the scan. The controller 140 may convert input image data input from the outside so as to match a data signal format used in the data driving circuit 120 and supply the converted image data to the data driving circuit 120.

The controller 140 may receive display driving control signals from an external host system 150 together with the input image data. For example, the display driving control signals may include a vertical synchronization signal (VSYNC), a horizontal synchronization signal (HSYNC), an input data enable signal (DE), a clock signal, and the like. The controller 140 may generate data driving control signals DCS and gate driving control signals GCS based on the display driving control signals input from the host system 150. The controller 140 may control a driving operation and a driving timing of the data driving circuit 120 by supplying the data driving control signals DCS to the data driving circuit 120. The controller 140 may control a driving operation and a driving timing of the gate driving circuit 130 by supplying the gate driving control signals GCS to the gate driving circuit 130.

The data driving circuit 120 may include one or more source driver integrated circuits (SDICs). Each source driver integrated circuit (SDIC) may include a shift register, a latch circuit, a digital to analog converter (DAC), an output buffer, and the like.
Each source driver integrated circuit (SDIC) may further include an analog to digital converter (ADC) in some cases. For example, each source driver integrated circuit (SDIC) may be connected to the display panel 110 by a tape automated bonding (TAB) method, may be connected to a bonding pad of the display panel 110 by a chip on glass (COG) or chip on panel (COP) method, or may be implemented in a chip on film (COF) method and connected to the display panel 110.

The gate driving circuit 130 may output a gate signal of a turn-on level voltage or a gate signal of a turn-off level voltage according to the control of the controller 140. The gate driving circuit 130 may sequentially drive the plurality of gate lines GL by sequentially supplying the gate signal of the turn-on level voltage to the plurality of gate lines GL. The gate driving circuit 130 may be connected to the display panel 110 by a tape automated bonding (TAB) method, may be connected to the bonding pad of the display panel 110 by a chip on glass (COG) method or a chip on panel (COP) method, or may be connected to the display panel 110 by a chip on film (COF) method. Alternatively, the gate driving circuit 130 may be formed in the non-display area NDA of the display panel 110 as a gate in panel (GIP) type. The gate driving circuit 130 may be disposed on or connected to the substrate. That is, the gate driving circuit 130 may be disposed in the non-display area NDA of the substrate in the case of the GIP type, and may be connected to the substrate in the cases of the chip on glass (COG) type, the chip on film (COF) type, or the like.

Meanwhile, at least one driving circuit of the data driving circuit 120 and the gate driving circuit 130 may be disposed in the display area DA. For example, at least one driving circuit of the data driving circuit 120 and the gate driving circuit 130 may be disposed so as not to overlap the sub-pixels SP, or some or all of the driving circuits may be disposed so as to overlap the sub-pixels SP.

The data driving circuit 120 may be connected to one side (e.g., an upper side or a lower side) of the display panel 110. Depending on a driving method, a panel design method, etc., the data driving circuit 120 may be connected to both sides (e.g., the upper and lower sides) of the display panel 110 or may be connected to two or more of the four sides of the display panel 110. The gate driving circuit 130 may be connected to one side (e.g., a left side or a right side) of the display panel 110. Depending on a driving method, a panel design method, etc., the gate driving circuit 130 may be connected to both sides (e.g., the left and right sides) of the display panel 110 or may be connected to two or more of the four sides of the display panel 110.

The controller 140 may be implemented as a separate component from the data driving circuit 120, or may be implemented as an integrated circuit by being integrated with the data driving circuit 120. The controller 140 may be a timing controller used in a conventional display technology, a control device capable of further performing other control functions including those of the timing controller, a control device different from the timing controller, or a circuit in the control device. The controller 140 may be implemented as various circuits or electronic components such as an integrated circuit (IC), a field programmable gate array (FPGA), an application specific integrated circuit (ASIC), or a processor.
The controller 140 may be mounted on a printed circuit board, a flexible printed circuit, or the like, and may be electrically connected to the data driving circuit 120 and the gate driving circuit 130 through the printed circuit board, the flexible printed circuit, or the like. The controller 140 may transmit and receive signals to and from the data driving circuit 120 according to one or more predetermined interfaces. Here, for example, the interfaces may include a low voltage differential signaling (LVDS) interface, an EPI interface, a serial peripheral interface (SPI), and the like.

The display device 100 according to embodiments of the present disclosure may be a self-luminous display device in which the display panel 110 emits light by itself. When the display device 100 according to embodiments of the present disclosure is the self-luminous display device, each of the plurality of sub-pixels SP may include a light-emitting element. For example, the display device 100 according to embodiments of the present disclosure may be an organic light-emitting display device in which the light-emitting element is implemented as an organic light-emitting diode (OLED). For another example, the display device 100 according to embodiments of the present disclosure may be an inorganic light-emitting display device in which the light-emitting element is implemented as an inorganic material-based light-emitting diode. For still another example, the display device 100 according to embodiments of the present disclosure may be a quantum dot display device in which the light-emitting element is implemented as a quantum dot, which is a semiconductor crystal that emits light by itself.

FIG. 2 shows an equivalent circuit of the sub-pixel SP in the display panel 110 according to embodiments of the present disclosure, and FIG. 3 shows another equivalent circuit of the sub-pixel SP in the display panel 110 according to embodiments of the present disclosure.

Referring to FIG. 2, in the display device 100 according to embodiments of the present disclosure, each sub-pixel SP may include a light-emitting element ED, a driving transistor DRT for driving the light-emitting element ED by controlling a current flowing to the light-emitting element ED, a scan transistor SCT for transmitting a data voltage Vdata to a first node N1 that is a gate node of the driving transistor DRT, and a storage capacitor Cst for maintaining a voltage for a certain period.

The light-emitting element ED may include a pixel electrode PE, a common electrode CE, and an emission layer EL positioned between the pixel electrode PE and the common electrode CE. The pixel electrode PE of the light-emitting element ED may be an anode or a cathode. The common electrode CE may be the cathode or the anode. The light-emitting element ED may be, for example, an organic light-emitting diode (OLED), an inorganic material-based light-emitting diode (LED), a quantum dot light-emitting element, or the like. A base voltage EVSS may be applied to the common electrode CE of the light-emitting element ED. Here, the base voltage EVSS may be, for example, a ground voltage or a voltage similar to the ground voltage.

The driving transistor DRT may be a transistor for driving the light-emitting element ED and may include the first node N1, a second node N2, and a third node N3. The first node N1 of the driving transistor DRT may be a node corresponding to the gate node and may be electrically connected to a source node or a drain node of the scan transistor SCT.
The second node N2 of the driving transistor DRT may be the source node or the drain node and may be electrically connected to the pixel electrode PE of the light-emitting element ED. The third node N3 of the driving transistor DRT may be the drain node or the source node and may be electrically connected to a driving voltage line DVL supplying a driving voltage EVDD. Hereinafter, for convenience of description, the case in which the second node N2 of the driving transistor DRT is the source node and the third node N3 is the drain node will be described as an example.

The scan transistor SCT may switch a connection between the data line DL and the first node N1 of the driving transistor DRT. In response to a scan signal SCAN supplied from the gate line GL, the scan transistor SCT may control the connection between the first node N1 of the driving transistor DRT and a corresponding data line DL of the plurality of data lines DL. The drain node or the source node of the scan transistor SCT may be electrically connected to the corresponding data line DL. The source node or the drain node of the scan transistor SCT may be electrically connected to the first node N1 of the driving transistor DRT. The gate node of the scan transistor SCT may be electrically connected to the gate line GL to receive the scan signal SCAN.

The scan transistor SCT may be turned on by the scan signal SCAN of a turn-on level voltage to transmit the data voltage Vdata supplied from the corresponding data line DL to the first node N1 of the driving transistor DRT. The scan transistor SCT is turned on by the scan signal SCAN of the turn-on level voltage and is turned off by the scan signal SCAN of a turn-off level voltage. Here, when the scan transistor SCT is an n-type transistor, the turn-on level voltage may be a high level voltage, and the turn-off level voltage may be a low level voltage. When the scan transistor SCT is a p-type transistor, the turn-on level voltage may be a low level voltage and the turn-off level voltage may be a high level voltage.

The storage capacitor Cst may be electrically connected between the first node N1 and the second node N2 of the driving transistor DRT to maintain the data voltage Vdata corresponding to an image signal voltage, or a voltage corresponding thereto, for one frame time. The storage capacitor Cst may not be a parasitic capacitor (e.g., Cgs or Cgd) that is an internal capacitor existing between the first node N1 and the second node N2 of the driving transistor DRT, but may be an external capacitor intentionally designed outside the driving transistor DRT.

Since the sub-pixel SP illustrated in FIG. 2 has two transistors DRT and SCT and one capacitor Cst in order to drive the light-emitting element ED, the sub-pixel SP is said to have a 2T (transistor) 1C (capacitor) structure.

Referring to FIG. 3, in the display device 100 according to embodiments of the present disclosure, each sub-pixel SP may further include a sensing transistor SENT for an initialization operation and a sensing operation. In this case, since the sub-pixel SP illustrated in FIG. 3 has three transistors DRT, SCT, and SENT and one capacitor Cst to drive the light-emitting element ED, the sub-pixel SP is said to have a 3T (transistor) 1C (capacitor) structure.

The sensing transistor SENT may switch a connection between the second node N2 of the driving transistor DRT and a reference voltage line RVL.
The sensing transistor SENT may control the connection between the second node N2 of the driving transistor DRT, which is electrically connected to the pixel electrode PE of the light-emitting element ED, and a corresponding reference voltage line RVL among the plurality of reference voltage lines RVL in response to a sensing signal SENSE. A drain node or a source node of the sensing transistor SENT may be electrically connected to the reference voltage line RVL, and the source node or the drain node of the sensing transistor SENT may be electrically connected to the second node N2 of the driving transistor DRT and to the pixel electrode PE of the light-emitting element ED. A gate node of the sensing transistor SENT may receive the sensing signal SENSE.

The sensing transistor SENT may be turned on to apply a reference voltage Vref supplied from the reference voltage line RVL to the second node N2 of the driving transistor DRT. The sensing transistor SENT is turned on by the sensing signal SENSE of a turn-on level voltage and is turned off by the sensing signal SENSE of a turn-off level voltage. Here, when the sensing transistor SENT is an n-type transistor, the turn-on level voltage may be a high level voltage, and the turn-off level voltage may be a low level voltage. When the sensing transistor SENT is a p-type transistor, the turn-on level voltage may be a low level voltage, and the turn-off level voltage may be a high level voltage.

Each of the driving transistor DRT, the scan transistor SCT, and the sensing transistor SENT may be an n-type transistor or a p-type transistor. All of them may be n-type transistors or p-type transistors, or at least one of them may be an n-type transistor (or a p-type transistor) while the others are p-type transistors (or n-type transistors).

The gate node of each of the scan transistor SCT and the sensing transistor SENT may be connected to the same single gate line GL. Alternatively, the gate nodes of the scan transistor SCT and the sensing transistor SENT may be connected to different gate lines GL.

The reference voltage line RVL may be disposed for one sub-pixel column. Alternatively, the reference voltage line RVL may be disposed for two or more sub-pixel columns. When the reference voltage line RVL is disposed for two or more sub-pixel columns, the plurality of sub-pixels SP may receive the reference voltage Vref from one reference voltage line RVL. For example, one reference voltage line RVL may be disposed for four sub-pixel columns; that is, one reference voltage line RVL may be shared by the sub-pixels SP included in the four sub-pixel columns.

The driving voltage line DVL may likewise be disposed for one sub-pixel column or for two or more sub-pixel columns. When the driving voltage line DVL is disposed for two or more sub-pixel columns, the plurality of sub-pixels SP may receive the driving voltage EVDD from one driving voltage line DVL. For example, one driving voltage line DVL may be disposed for four sub-pixel columns; that is, one driving voltage line DVL may be shared by the sub-pixels SP included in the four sub-pixel columns.

The 3T1C structure of the sub-pixel SP illustrated in FIG. 3 is merely an example for description; the sub-pixel SP may further include one or more transistors or, in some cases, one or more capacitors.
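Before turning to optical compensation, it may help to see numerically how the data voltage Vdata written through the scan transistor SCT translates into a drive current for the light-emitting element ED. The following is a minimal sketch using a textbook square-law transistor model; the model itself and all parameter values (K, Vth, Vref) are illustrative assumptions, not part of the disclosure.

```python
# Minimal sketch of how the 3T1C sub-pixel sets the drive current.
# The square-law saturation model and all parameter values (K, VTH,
# Vref) are illustrative assumptions, not from the disclosure.

def drive_current(vdata: float, v_n2: float, k: float = 2e-5, vth: float = 1.5) -> float:
    """Current through the driving transistor DRT for gate node N1 = Vdata.

    Square-law model: I = (K / 2) * (Vgs - Vth)^2 when Vgs > Vth, else 0.
    """
    vgs = vdata - v_n2          # storage capacitor Cst holds N1 - N2 = Vgs
    if vgs <= vth:
        return 0.0              # DRT off: light-emitting element ED stays dark
    return 0.5 * k * (vgs - vth) ** 2

# SCAN writes Vdata to node N1; SENSE places the reference voltage Vref on N2.
vref = 0.5                      # assumed reference voltage on node N2
for vdata in (2.0, 4.0, 6.0):
    print(f"Vdata={vdata:.1f} V -> I={drive_current(vdata, vref)*1e6:.1f} uA")
```

Under this model, a panel-to-panel drift in Vth changes the current, and hence the luminance, produced by the same Vdata, which is precisely the kind of deviation the optical compensation described below is meant to absorb.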
Alternatively, each of the plurality of sub-pixels SP may have the same structure, or some of the plurality of sub-pixels SP may have a different structure. Meanwhile, the display device 100 according to embodiments of the present disclosure may have a top emission structure or a bottom emission structure.

Meanwhile, when the display device 100 according to embodiments of the present disclosure is a self-luminous display device such as an organic light-emitting display device, the display device 100 may, due to various causes in a manufacturing process, have optical characteristics (e.g., luminance, color coordinates, and the like) different from the actually desired optical characteristics, and may thus display an image with color coordinates or luminance different from the desired color coordinates or luminance.

Accordingly, some embodiments of the present disclosure may provide an artificial intelligence-based optical compensation system and optical compensation method that can perform display driving by predicting a data voltage optimized for the optical characteristics (e.g., luminance and the like) of the display panel 110, and a display device to which the artificial intelligence-based optical compensation is applied. Hereinafter, an optical compensation system and an optical compensation method based on artificial intelligence, as a more accurate and faster optical compensation technology that takes the optical characteristics (e.g., luminance, color coordinates, and the like) of the display panel 110 into consideration, and a display device to which the artificial intelligence-based optical compensation is applied, will be described in more detail.

FIG. 4 shows an optical compensation system 400 based on artificial intelligence according to embodiments of the present disclosure.

Referring to FIG. 4, the optical compensation system 400 based on artificial intelligence according to embodiments of the present disclosure is a system that performs optical compensation using artificial intelligence, and the optical compensation system 400 may include a measuring device 410 and an artificial intelligence-based optical compensation controller 420.

The measuring device 410 may measure the optical characteristics of the display panel 110 and output measurement result data of the optical characteristics. For example, the measuring device 410 may include a luminance meter or the like.

The artificial intelligence-based optical compensation controller 420 may predict and generate optical compensation result data corresponding to the measurement result data of the optical characteristics based on an artificial intelligence neural network using previous optical compensation result data for at least one other display panel. That is, the artificial intelligence-based optical compensation controller 420 may predict current optical compensation result data by using the artificial intelligence neural network together with all of the previous optical compensation result data for at least one other display panel, which is a sample for which artificial intelligence-based optical compensation has already been completed.

The artificial intelligence-based optical compensation controller 420 may store the optical compensation result data generated by prediction using artificial intelligence in a memory 430 corresponding to the display panel 110.
For example, the optical compensation result data predicted and generated by the artificial intelligence-based optical compensation controller 420 may include information on the data voltage predicted for each desired target. For example, the desired target may include a desired band, luminance, or color coordinates. Here, the band is also referred to as a luminance mode (brightness mode), and the luminance (brightness) of the display panel 110 may be controlled in one of various bands.

For example, the previous optical compensation result data for at least one other display panel may be data obtained as a result of a previous optical compensation process completed for the at least one other display panel and may include information such as a data voltage, a gamma voltage, and the like.

The artificial intelligence-based optical compensation controller 420 may generate machine learning result data by performing machine learning (ML) using the previous optical compensation result data for at least one other display panel that is a sample for which artificial intelligence-based optical compensation has already been completed. The artificial intelligence-based optical compensation controller 420 may predict and generate the optical compensation result data corresponding to the measurement result data of the optical characteristics based on the artificial intelligence neural network using the machine learning result data and the measurement result data of the optical characteristics.

The artificial intelligence-based optical compensation controller 420 may execute a log file collecting process that collects log files, which constitute big data, using the previous optical compensation result data for the at least one other display panel, execute a data processing process that selects learning data for machine learning from the collected log files, and perform machine learning based on the selected learning data to generate the machine learning result data.

The artificial intelligence-based optical compensation controller 420 may perform a preprocessing that optimizes and sets a driving voltage by controlling the measuring device 410 so as to first measure the optical characteristics of the display panel 110 before the main measurement of the optical characteristics of the display panel 110 for obtaining the measurement result data of the optical characteristics. The driving voltage may be a voltage used when the display panel 110 is driven while the optical characteristics of the display panel 110 are measured (main measurement) through the measuring device 410. For example, the driving voltage may include the base voltage EVSS or a black data voltage supplied to the sub-pixels SP included in the display panel 110, or may include a luminance weight for each region in the display panel 110.

FIG. 5 is a flowchart of an optical compensation method based on artificial intelligence according to embodiments of the present disclosure.

Referring to FIG. 5, the optical compensation system 400 based on artificial intelligence according to embodiments of the present disclosure may perform an optical compensation method based on artificial intelligence. The optical compensation method based on artificial intelligence according to embodiments of the present disclosure may include a main measurement operation (S520), an artificial intelligence process execution operation (S560), a data voltage prediction operation (S570), a prediction information storage operation (S590), and the like.
In the main measurement operation (S520), the artificial intelligence-based optical compensation controller 420 of the optical compensation system 400 may measure the optical characteristics of the display panel 110 through the measuring device 410 to generate the measurement result data of the optical characteristics.

In the artificial intelligence process execution operation (S560), the artificial intelligence-based optical compensation controller 420 may execute an artificial intelligence process based on the artificial intelligence neural network using the previous optical compensation result data for at least one other display panel.

In the data voltage prediction operation (S570), the artificial intelligence-based optical compensation controller 420 may predict and generate a data voltage for each band or gradation as the optical compensation result data corresponding to the measurement result data of the optical characteristics according to the result of executing the artificial intelligence process.

In the prediction information storage operation (S590), the artificial intelligence-based optical compensation controller 420 may store information on the data voltage generated by prediction in the data voltage prediction operation (S570) in the memory 430 corresponding to the display panel 110.

Referring to FIG. 5, the optical compensation method based on artificial intelligence according to embodiments of the present disclosure may further include a machine learning progress operation (S550) in which the artificial intelligence-based optical compensation controller 420 generates the machine learning result data by performing machine learning using the previous optical compensation result data for at least one other display panel before the operation of executing the artificial intelligence process (S560). In the operation of executing the artificial intelligence process (S560), the artificial intelligence-based optical compensation controller 420 may predict and generate the optical compensation result data corresponding to the measurement result data of the optical characteristics based on the machine learning result data and the measurement result data of the optical characteristics.

Referring to FIG. 5, the optical compensation method based on artificial intelligence according to embodiments of the present disclosure may further include a log file collecting operation (S530) in which the artificial intelligence-based optical compensation controller 420 collects log files, which constitute big data, using the previous optical compensation result data for at least one other display panel before the machine learning progress operation (S550).

Referring to FIG. 5, the optical compensation method based on artificial intelligence according to embodiments of the present disclosure may further include a data processing operation (S540) in which the artificial intelligence-based optical compensation controller 420 selects learning data for machine learning from the collected log files after the log file collecting operation (S530).
Referring to FIG. 5, the optical compensation method based on artificial intelligence according to embodiments of the present disclosure may further include a preprocessing operation (S510) in which the artificial intelligence-based optical compensation controller 420 sets a driving voltage by controlling the measuring device 410 so as to first measure the optical characteristics of the display panel 110 before the operation of generating the measurement result data of the optical characteristics (S520).

Referring to FIG. 5, the optical compensation method based on artificial intelligence according to embodiments of the present disclosure may further include a loop control operation (S580) in which the artificial intelligence-based optical compensation controller 420 changes a band and a point after the operation of predicting and generating the data voltage (S570). After the loop control operation (S580), the artificial intelligence-based optical compensation controller 420 may repeatedly execute the operation of generating the measurement result data of the optical characteristics (S520), the operation of executing the artificial intelligence process (S560), and the operation of predicting and generating the data voltage (S570).

FIG. 6 shows an affine layer neural network 600 as an artificial intelligence neural network for artificial intelligence-based optical compensation according to embodiments of the present disclosure.

Referring to FIG. 6, for example, the artificial intelligence neural network for artificial intelligence-based optical compensation may be the affine layer neural network 600. The affine layer neural network 600 may include an input layer Lin including a plurality of input nodes R, G, and B corresponding to processing information of a first preprocessing, a first intermediate layer Lm1 including a plurality of first intermediate nodes R1, G1, and B1 corresponding to processing information of a second preprocessing, a second intermediate layer Lm2 including a plurality of second intermediate nodes R2, G2, and B2 corresponding to processing information of a third preprocessing, and an output layer Lout including a plurality of output nodes R3, G3, and B3 corresponding to processing information of a main processing.

The plurality of input nodes R, G, and B may be connected to all or some of the plurality of first intermediate nodes R1, G1, and B1; the plurality of first intermediate nodes R1, G1, and B1 may be connected to all or some of the plurality of second intermediate nodes R2, G2, and B2; and the plurality of second intermediate nodes R2, G2, and B2 may be connected to all or some of the plurality of output nodes R3, G3, and B3. For example, the nodes of each of the input layer Lin, the first intermediate layer Lm1, the second intermediate layer Lm2, and the output layer Lout may correspond to a red image signal (red data), a green image signal (green data), and a blue image signal (blue data), respectively.
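Taken together, operations S510 to S590 form a measure-predict-store loop driven by the artificial intelligence process. The following is a minimal control-flow sketch of that loop; every function is a hypothetical stub (the disclosure specifies the operations only at flowchart level), and the band and point values are invented for illustration.

```python
# Minimal control-flow sketch of the FIG. 5 method (S510-S590).
# All functions are hypothetical stubs; the disclosure defines these
# steps only as flowchart operations, not at code level.

def preliminary_measure(panel):             # S510 input: first optical measurement
    return {"luminance": 100.0}

def set_driving_voltage(panel, measured):   # S510: optimize e.g. EVSS / black level
    panel["evss"] = -2.0

def main_measure(panel, band, point):       # S520: main measurement
    return {"band": band, "point": point, "luminance": 95.0}

def execute_ai_process(model, measured):    # S560: neural-network-based process
    return measured["luminance"] / 100.0

def predict_data_voltage(ai_out):           # S570: data-voltage prediction
    return 3.3 * ai_out

def run_optical_compensation(panel, bands, points, model, memory430):
    set_driving_voltage(panel, preliminary_measure(panel))        # S510
    for band in bands:                                            # S580: loop control
        for point in points:                                      # changes band/point
            measured = main_measure(panel, band, point)           # S520
            ai_out = execute_ai_process(model, measured)          # S560
            memory430[(band, point)] = predict_data_voltage(ai_out)  # S570 -> S590
    return memory430

print(run_optical_compensation({}, bands=["high", "low"],
                               points=[63, 127, 255], model=None, memory430={}))
```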
Referring to FIG. 6, in relation to the affine layer neural network 600, the first preprocessing may be a pre-optical compensation processing for each color coordinate or luminance, the second preprocessing may be a pre-optical compensation processing for each color using a first luminance value, the third preprocessing may be a pre-optical compensation processing for each color using a second luminance value higher than the first luminance value, and the main processing may be a processing for obtaining the measurement result data of the optical characteristics.

Alternatively, in relation to the affine layer neural network 600, the main processing may correspond to an optical compensation processing, and the processing information of the main processing may correspond to optical compensation result data. The second preprocessing and the third preprocessing may be substantially the same processing as the optical compensation processing or may be a pre-optical compensation processing performed before the optical compensation processing. An optimization of the driving voltage may be performed through the second preprocessing and/or the third preprocessing.

The artificial intelligence-based optical compensation controller 420 of the optical compensation system 400 may update the artificial intelligence neural network based on the predicted and generated optical compensation result data.

FIG. 7 shows machine learning for artificial intelligence-based optical compensation according to embodiments of the present disclosure.

Referring to FIG. 7, the artificial intelligence-based optical compensation controller 420 may update, store, and manage log files for N logs, N being a predetermined number, in real time (S700) and perform machine learning using the log files for the N logs (S710). Here, the real-time update, storage, and management of the log files may correspond to operations S530 and S540 in the artificial intelligence-based optical compensation process of FIG. 5, and the machine learning may correspond to operation S550 in the artificial intelligence-based optical compensation process of FIG. 5.

Referring to FIG. 7, the artificial intelligence-based optical compensation controller 420 may perform artificial intelligence-based optical compensation for a new display panel using the machine learning result data obtained according to the result of performing machine learning (S720). Here, the artificial intelligence-based optical compensation may correspond to operations S560 and S570 in the artificial intelligence-based optical compensation process of FIG. 5.

Referring to FIG. 7, the artificial intelligence-based optical compensation controller 420 may generate a command CMD_ML to proceed with machine learning (ML) when the artificial intelligence-based optical compensation for the new display panel is completed. In addition, when the artificial intelligence-based optical compensation for the new display panel is completed, the artificial intelligence-based optical compensation controller 420 may store the artificial intelligence-based optical compensation result data for the new display panel as a new log to update the log file in real time.
Accordingly, the artificial intelligence-based optical compensation controller 420 may store the optical compensation result data stored as the new log as the previous optical compensation result data and delete the log for the oldest previous optical compensation result data, thereby maintaining log files for N logs, N being the predetermined number (S700). In this case, for example, the log files for the N logs may be maintained in a first-in, first-out manner.

Referring to FIG. 7, the artificial intelligence-based optical compensation controller 420 may perform machine learning again using the real-time updated log files according to the command CMD_ML to proceed with the machine learning (ML) (S710).

FIG. 8 is a schematic diagram of a display device 100 to which artificial intelligence-based optical compensation according to embodiments of the present disclosure is applied.

Referring to FIG. 8, the display device 100 to which artificial intelligence-based optical compensation according to embodiments of the present disclosure is applied may include a display panel 110 including a data line DL, a memory 430 for storing information on data voltages for each band or gradation, and a data driving circuit 120 for outputting, to the data line, a data voltage corresponding to display driving information (e.g., a current band or a current gradation) among the data voltages for each band or gradation stored in the memory 430.

A controller 140 may select the data voltage corresponding to the display driving information (e.g., the current band or the current gradation) by referring to the data voltages for each band or gradation stored in the memory 430 and supply data corresponding to the selected data voltage to the data driving circuit 120.

For example, the information on the data voltage for each band or gradation stored in the memory 430 may be information predicted and stored in the memory 430 as the optical compensation result data corresponding to the measurement result data of the optical characteristics of the display panel 110 according to the execution result of the artificial intelligence process based on the artificial intelligence neural network. For example, the optical compensation result data stored in the memory 430 and predicted according to the artificial intelligence-based optical compensation may include information on a data voltage predicted for each desired target. Here, for example, the desired target may include the desired band, luminance, or color coordinates. For example, the optical compensation result data stored in the memory 430 and predicted according to the artificial intelligence-based optical compensation may further include information such as a gamma voltage and the like.

The artificial intelligence-based optical compensation according to embodiments of the present disclosure described above is a process performed to implement the same color coordinates and luminance for each object (display panel) in consideration of the optical characteristics of a self-luminous display such as an OLED display. Since the light emission degree of each of a red sub-pixel, a green sub-pixel, and a blue sub-pixel differs for each object (display panel), the image quality of a self-luminous display such as an OLED display may be greatly improved by applying the artificial intelligence-based optical compensation according to embodiments of the present disclosure.
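Returning briefly to the log-file maintenance of FIG. 7, the first-in, first-out retention of N logs (operation S700) maps naturally onto a bounded queue. The sketch below is one possible rendering, assuming N = 5 and a dictionary-shaped log record; neither value comes from the disclosure.

```python
from collections import deque

# Sketch of the FIG. 7 log-file management (S700): keep only the N most
# recent optical compensation logs, first in, first out. N and the log
# structure are illustrative assumptions.

N = 5
log_files = deque(maxlen=N)     # deque drops the oldest log automatically

def on_compensation_complete(result_data):
    """Store a finished panel's compensation result as a new log (S700)."""
    log_files.append(result_data)
    return "CMD_ML"             # command to re-run machine learning (S710)

for panel_id in range(8):       # after 8 panels, only logs 3..7 remain
    on_compensation_complete({"panel": panel_id, "vdata": 3.3})
print([log["panel"] for log in log_files])   # -> [3, 4, 5, 6, 7]
```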
According to the artificial intelligence-based optical compensation technology according to embodiments of the present disclosure, the learning data for the machine learning used for optical compensation may be automatically updated in a process line. Accordingly, by immediately performing the optical compensation, it is possible to actively respond to changes in the characteristics and conditions of each display panel and to significantly reduce the optical compensation processing time.

The optical compensation system 400 based on artificial intelligence according to embodiments of the present disclosure may use an artificial intelligence neural network such as the affine layer neural network 600. The affine layer neural network 600 has a structure in which all nodes of one layer are connected to all nodes of the subsequent layer. For example, from the viewpoint of a second intermediate node of the second intermediate layer Lm2, the input nodes of the input layer Lin and the first intermediate nodes of the first intermediate layer Lm1 are all nodes of the previous layers, and the output nodes of the output layer Lout are all nodes of the subsequent layer. In the structure of the affine layer neural network 600, since the nodes of a previous layer are connected to all nodes of the subsequent layer, it is possible to predict the optical compensation result of a current desired point from multiple points.

The optical compensation system 400 based on artificial intelligence according to embodiments of the present disclosure may use the results of a previous optical compensation process (previous optical compensation result data) to predict the result of a subsequent optical compensation process (optical compensation result data). Since the optical compensation system 400 uses an artificial intelligence neural network with a structure such as that of the affine layer neural network 600, it is possible to predict the result value (optical compensation result data) of the current optical compensation point (e.g., gradation) by using all of the previous optical compensation result data of the previous optical compensation process (e.g., color coordinates, luminance, data voltage Vdata, base voltage EVSS, etc.) for previous samples (other display panels) and all result data of previous optical compensation points (gradations) (e.g., data voltage Vdata, etc.). For example, the optical compensation result data may include a data voltage or the like.

The optical compensation system 400 based on artificial intelligence according to embodiments of the present disclosure may perform a process of learning with big data in advance in order to perform machine learning and, to this end, may automatically update the learning data in real time. The optical compensation system 400 may automatically collect log files at each optical compensation completion time to perform machine learning.

A brief description of some of the embodiments of the present disclosure described above is as follows.
Some embodiments of the present disclosure may provide an optical compensation system based on artificial intelligence including a measuring device configured to measure optical characteristics of a display panel and output measurement result data of the optical characteristics, and an artificial intelligence-based optical compensation controller configured to predict and generate optical compensation result data corresponding to the measurement result data of the optical characteristics based on an artificial intelligence neural network using previous optical compensation result data for at least one other display panel, and to store the predicted and generated optical compensation result data in a memory corresponding to the display panel.

The predicted and generated optical compensation result data may include information on a data voltage predicted for each desired target. The desired target may include a desired band, luminance, or color coordinates. The predicted and generated optical compensation result data may further include information such as a gamma voltage and the like.

The artificial intelligence neural network may be an affine layer neural network. The affine layer neural network may include an input layer including a plurality of input nodes corresponding to processing information of a first preprocessing, a first intermediate layer including a plurality of first intermediate nodes corresponding to processing information of a second preprocessing, a second intermediate layer including a plurality of second intermediate nodes corresponding to processing information of a third preprocessing, and an output layer including a plurality of output nodes corresponding to processing information of a main processing. The plurality of input nodes may be connected to all of the plurality of first intermediate nodes, the plurality of first intermediate nodes may be connected to all of the plurality of second intermediate nodes, and the plurality of second intermediate nodes may be connected to all of the plurality of output nodes.

The first preprocessing may be a pre-optical compensation processing for each color coordinate or luminance, the second preprocessing may be a pre-optical compensation processing for each color using a first luminance value, the third preprocessing may be a pre-optical compensation processing for each color using a second luminance value higher than the first luminance value, and the main processing may be a processing for obtaining the measurement result data of the optical characteristics.

The artificial intelligence-based optical compensation controller may update the artificial intelligence neural network based on the predicted and generated optical compensation result data.

The previous optical compensation result data for the at least one other display panel may be data obtained as a result of a previous optical compensation process completed for the at least one other display panel and may include information on a data voltage and a gamma voltage.

The artificial intelligence-based optical compensation controller may generate machine learning result data by performing machine learning using the previous optical compensation result data for the at least one other display panel, and may predict and generate the optical compensation result data corresponding to the measurement result data of the optical characteristics based on the artificial intelligence neural network using the machine learning result data and the measurement result data of the optical characteristics.
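A minimal numerical sketch of such an affine layer neural network is given below. The fully connected (affine) stages and the three-node R/G/B layers follow the structure summarized above, but the identity activations, the least-squares fit standing in for machine learning, and all data values are illustrative assumptions; the disclosure does not specify a training algorithm at this level.

```python
import numpy as np

# Sketch of the FIG. 6 affine layer neural network: affine stages mapping
# (R, G, B) inputs through two intermediate layers to (R3, G3, B3) outputs.
# Weights, activations, and data below are assumptions for illustration.

rng = np.random.default_rng(0)

def affine_forward(x, params):
    for w, b in params:
        x = x @ w + b                 # affine layer: every node of one layer
    return x                          # feeds every node of the next layer

# Hypothetical learning data: rows of measured (R, G, B) optical values from
# previously compensated panels; targets are the data voltages chosen then.
X = rng.normal(100.0, 5.0, size=(64, 3))                            # measurements
Y = 3.3 + 0.01 * (100.0 - X) + rng.normal(0, 0.01, size=(64, 3))    # past Vdata

# For this linear sketch, fit a single affine map by least squares and split
# it across three stages (identity middle layers) to mirror the topology.
A = np.linalg.lstsq(np.c_[X, np.ones(len(X))], Y, rcond=None)[0]
params = [(A[:3], A[3]), (np.eye(3), np.zeros(3)), (np.eye(3), np.zeros(3))]

new_measurement = np.array([97.0, 102.0, 99.0])   # main measurement (S520)
print(affine_forward(new_measurement, params))    # predicted R/G/B data voltages
```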
The artificial intelligence-based optical compensation controller may execute a log file collecting process that collects log files, which constitute big data, using the previous optical compensation result data for the at least one other display panel, may execute a data processing process that selects learning data for the machine learning from the collected log files, and may generate the machine learning result data by performing the machine learning based on the selected learning data.

The artificial intelligence-based optical compensation controller may perform a preprocessing that sets a driving voltage by controlling the measuring device so as to first measure the optical characteristics of the display panel before measuring the optical characteristics of the display panel for obtaining the measurement result data of the optical characteristics through the measuring device. The driving voltage may include a base voltage or a black data voltage supplied to sub-pixels included in the display panel, or may include a luminance weight for each region.

Some embodiments of the present disclosure may provide an optical compensation method based on artificial intelligence including operations of measuring optical characteristics of a display panel through a measuring device to generate measurement result data of the optical characteristics, executing an artificial intelligence process based on an artificial intelligence neural network using previous optical compensation result data for at least one other display panel, predicting and generating a data voltage for each band or gradation as optical compensation result data corresponding to the measurement result data of the optical characteristics according to a result of executing the artificial intelligence process, and storing information on the predicted and generated data voltage in a memory corresponding to the display panel.

The optical compensation method based on artificial intelligence according to embodiments of the present disclosure may further include a machine learning progress operation of generating machine learning result data by performing machine learning using the previous optical compensation result data for the at least one other display panel before the operation of executing the artificial intelligence process. In the operation of executing the artificial intelligence process, the optical compensation result data corresponding to the measurement result data of the optical characteristics may be predicted and generated based on the machine learning result data and the measurement result data of the optical characteristics.

The optical compensation method based on artificial intelligence according to embodiments of the present disclosure may further include a log file collecting operation of collecting log files, which constitute big data, using the previous optical compensation result data for the at least one other display panel before the machine learning progress operation, and a data processing operation of selecting learning data for the machine learning from the collected log files.

The optical compensation method based on artificial intelligence according to embodiments of the present disclosure may further include a preprocessing operation of setting a driving voltage by controlling the measuring device so as to first measure the optical characteristics of the display panel before the operation of generating the measurement result data of the optical characteristics.
The optical compensation method based on artificial intelligence according to embodiments of the present disclosure may further include a loop control operation of changing a band and a point after the operation of predicting and generating the data voltage. After the loop control operation, the operation of generating the measurement result data of the optical characteristics, the operation of executing the artificial intelligence process, and the operation of predicting and generating the data voltage may be repeatedly executed.

The embodiments of the present disclosure may provide a display device including a display panel including a data line, a memory storing information on data voltages for each band or gradation, and a data driving circuit outputting, to the data line, a data voltage corresponding to a current band or a current gradation among the data voltages for each band or gradation.

In the display device according to the embodiments of the present disclosure, the information on the data voltage for each band or gradation stored in the memory may be information predicted and generated as optical compensation result data corresponding to measurement result data of optical characteristics of the display panel according to a result of executing an artificial intelligence process based on an artificial intelligence neural network, and stored in the memory.

In the display device according to the embodiments of the present disclosure, the predicted and generated optical compensation result data may include information on a data voltage predicted for each desired target. The desired target may include a desired band, luminance, or color coordinates, and the predicted and generated optical compensation result data may further include information such as a gamma voltage and the like.

According to the above-described embodiments of the present disclosure, it is possible to provide an optical compensation system, an optical compensation method, and a display device based on artificial intelligence as an accurate and fast optical compensation technology. According to the embodiments of the present disclosure, it is possible to provide an optical compensation system, an optical compensation method, and a display device based on artificial intelligence that can also perform display driving by predicting a data voltage optimized for the optical characteristics of a display panel. According to the embodiments of the present disclosure, it is also possible to provide an optical compensation system and an optical compensation method based on artificial intelligence capable of actively and quickly responding to characteristic and condition changes of each display panel, and a display device to which artificial intelligence-based optical compensation is applied.

The above description provides an example of the technical idea of the present disclosure for illustrative purposes only. Various modifications and changes will be possible without departing from the essential features of the present disclosure by those skilled in the art to which the present disclosure pertains. In addition, the embodiments disclosed in the present disclosure are intended to illustrate, not to limit, the technical idea of the present disclosure, and the scope of the present disclosure is not limited by the embodiments.
The protection scope of the present disclosure should be construed on the basis of the accompanying claims in such a manner that all of the technical ideas included within the scope equivalent to the claims belong to the present disclosure.

The various embodiments described above can be combined to provide further embodiments. All of the U.S. patents, U.S. patent application publications, U.S. patent applications, foreign patents, foreign patent applications and non-patent publications referred to in this specification and/or listed in the Application Data Sheet are incorporated herein by reference, in their entirety. Aspects of the embodiments can be modified, if necessary to employ concepts of the various patents, applications and publications to provide yet further embodiments.

These and other changes can be made to the embodiments in light of the above-detailed description. In general, in the following claims, the terms used should not be construed to limit the claims to the specific embodiments disclosed in the specification and the claims, but should be construed to include all possible embodiments along with the full scope of equivalents to which such claims are entitled. Accordingly, the claims are not limited by the disclosure.
DETAILED DESCRIPTION

The advantages and features of the present disclosure and methods for accomplishing the same will be more clearly understood from the embodiments described below with reference to the accompanying drawings. However, the present disclosure is not limited to the following embodiments but may be implemented in various different forms. Rather, the present embodiments will make the disclosure of the present disclosure complete and allow those skilled in the art to completely comprehend the scope of the present disclosure. The present disclosure is only defined within the scope of the accompanying claims.

The shapes, sizes, ratios, angles, numbers, and the like illustrated in the accompanying drawings for describing the embodiments of the present disclosure are merely examples, and the present disclosure is not limited thereto. Like reference numerals generally denote like elements throughout the present specification. Further, in describing the present disclosure, detailed descriptions of known related technologies may be omitted to avoid unnecessarily obscuring the subject matter of the present disclosure.

The terms such as “comprising,” “including,” and “having” used herein are generally intended to allow other components to be added unless the terms are used with the term “only.” Any reference to the singular may include the plural unless expressly stated otherwise. Components are interpreted to include an ordinary error range even if not expressly stated.

When the position relation between two components is described using the terms such as “on,” “above,” “below,” and “next,” one or more components may be positioned between the two components unless the terms are used with the term “immediately” or “directly.” The terms “first,” “second,” and the like may be used to distinguish components from each other, but the functions or structures of the components are not limited by ordinal numbers or component names in front of the components. The same reference numerals may refer to substantially the same elements throughout the present disclosure.

The following embodiments can be partially or entirely bonded to or combined with each other and can be linked and operated in technically various ways. The embodiments can be carried out independently of or in association with each other. Hereinafter, various embodiments of the present disclosure will be described in detail with reference to the accompanying drawings.

FIG. 1 is a block diagram illustrating a display device according to an embodiment of the present disclosure, and FIG. 2 is a diagram illustrating a cross-sectional structure of the display panel shown in FIG. 1 according to an embodiment of the present disclosure.

Referring to FIGS. 1 and 2, the display device according to an embodiment of the present disclosure includes a display panel 100, a display panel driver for writing pixel data to pixels of the display panel 100, and a power supply 140 for generating power necessary for driving the pixels and the display panel driver.

The display panel 100 may be a display panel having a rectangular structure having a length in an X-axis direction, a width in a Y-axis direction, and a thickness in a Z-axis direction. The display panel 100 includes a pixel array AA that displays an input image. The pixel array AA includes a plurality of data lines 102, a plurality of gate lines 103 intersecting the data lines 102, and pixels 101 arranged in a matrix form.
The display panel 100 may further include power lines commonly connected to the pixels. The power lines may include a power line to which a pixel driving voltage EVDD is applied, a power line to which an initialization voltage Vinit is applied, a power line to which a reference voltage Vref is applied, and a power line to which a low potential power voltage EVSS is applied. These power lines are commonly connected to the pixels.

The pixel array AA includes a plurality of pixel lines L1 to Ln. Each of the pixel lines L1 to Ln includes one line of pixels arranged along a line direction X in the pixel array AA of the display panel 100. Pixels arranged in one pixel line share the same gate line 103. Sub-pixels arranged in a column direction Y along a data line direction share the same data line 102. One horizontal period 1H is a time obtained by dividing one frame period by the total number of pixel lines L1 to Ln.

The display panel 100 may be implemented as a non-transmissive display panel or a transmissive display panel. The transmissive display panel may be applied to a transparent display device in which an image is displayed on a screen and an actual background may be seen. The display panel 100 may be implemented as a flexible display panel. The flexible display panel may be made of a plastic OLED panel. An organic thin film may be disposed on a back plate of the plastic OLED panel, and the pixel array AA and the light emitting elements may be formed on the organic thin film.

To implement color, each of the pixels 101 may be divided into a red sub-pixel (hereinafter referred to as an “R sub-pixel”), a green sub-pixel (hereinafter referred to as a “G sub-pixel”), and a blue sub-pixel (hereinafter referred to as a “B sub-pixel”). Each of the pixels may further include a white sub-pixel. Each of the sub-pixels includes a pixel circuit. The pixel circuit is connected to the data line, the gate line, and the power line.

The pixels may be arranged as real color pixels or pentile pixels. A pentile pixel may realize a higher resolution than a real color pixel by driving two sub-pixels having different colors as one pixel 101 using a preset pixel rendering algorithm. The pixel rendering algorithm may compensate for insufficient color representation in each pixel with a color of light emitted from an adjacent pixel.

Touch sensors may be disposed on the display panel 100. A touch input may be sensed using separate touch sensors or may be sensed through the pixels. The touch sensors may be disposed as an on-cell type or an add-on type on the screen of the display panel or implemented as in-cell type touch sensors embedded in the pixel array AA.

As shown in FIG. 2, when viewed in a cross-sectional structure, the display panel 100 may include a circuit layer 12, a light emitting element layer 14, and an encapsulation layer 16 stacked on a substrate 10. The circuit layer 12 may include a pixel circuit connected to wirings such as a data line, a gate line, and a power line, a gate driver (GIP) connected to the gate lines, and the like. The wirings and circuit elements of the circuit layer 12 may include a plurality of insulating layers, two or more metal layers separated with the insulating layers therebetween, and an active layer including a semiconductor material.

The light emitting element layer 14 may include a light emitting element EL driven by a pixel circuit. The light emitting element EL may include a red (R) light emitting element, a green (G) light emitting element, and a blue (B) light emitting element.
The light emitting element layer 14 may include a white light emitting element and a color filter. The light emitting elements EL of the light emitting element layer 14 may be covered by a protective layer including an organic film and a passivation film.

The encapsulation layer 16 covers the light emitting element layer 14 to seal the circuit layer 12 and the light emitting element layer 14. The encapsulation layer 16 may have a multilayered insulating structure in which an organic film and an inorganic film are alternately stacked. The inorganic film blocks or at least reduces the penetration of moisture and oxygen. The organic film planarizes the surface of the inorganic film. When the organic film and the inorganic film are stacked in multiple layers, a movement path of moisture or oxygen becomes longer compared to a single layer, so that penetration of moisture and oxygen affecting the light emitting element layer 14 can be effectively blocked or at least reduced.

A touch sensor layer may be disposed on the encapsulation layer 16. The touch sensor layer may include capacitive type touch sensors that sense a touch input based on a change in capacitance before and after the touch input. The touch sensor layer may include metal wiring patterns and insulating layers forming the capacitance of the touch sensors. The capacitance of the touch sensor may be formed between the metal wiring patterns.

A polarizing plate may be disposed on the touch sensor layer. The polarizing plate may improve visibility and contrast ratio by converting the polarization of external light reflected by metal of the touch sensor layer and the circuit layer 12. The polarizing plate may be implemented as a polarizing plate in which a linear polarizing plate and a phase delay film are bonded, or as a circular polarizing plate. A cover glass may be adhered to the polarizing plate.

The display panel 100 may further include a touch sensor layer and a color filter layer stacked on the encapsulation layer 16. The color filter layer may include red, green, and blue color filters and a black matrix pattern. The color filter layer may replace the polarizing plate and increase the color purity by absorbing a part of the wavelength of light reflected from the circuit layer and the touch sensor layer. In this embodiment, by applying, to the display panel, a color filter layer 20 having a higher light transmittance than the polarizing plate, the light transmittance of the display panel PNL can be improved, and the thickness and flexibility of the display panel PNL can also be improved. A cover glass may be adhered on the color filter layer.

The power supply 140 generates direct current (DC) power required for driving the pixel array AA and the display panel driver of the display panel 100 by using a DC-DC converter. The DC-DC converter may include a charge pump, a regulator, a buck converter, a boost converter, and the like. The power supply 140 may adjust a DC input voltage from a host system (not shown) and thereby generate DC voltages such as a gamma reference voltage VGMA, gate-on voltages VGH and VEH, gate-off voltages VGL and VEL, a pixel driving voltage EVDD, a pixel low-potential power supply voltage EVSS, a reference voltage Vref, an initialization voltage Vinit, an anode voltage Vano, and the like. The gamma reference voltage VGMA is supplied to a data driver 110. The gate-on voltages VGH and VEH and the gate-off voltages VGL and VEL are supplied to a gate driver 120.
The pixel driving voltage EVDD, the pixel low-potential power supply voltage EVSS, the reference voltage Vref, the initialization voltage Vinit, the anode voltage Vano, and the like are commonly supplied to the pixels.

The display panel driver writes pixel data (digital data) of an input image to the pixels of the display panel 100 under the control of a timing controller (TCON) 130. The display panel driver includes the data driver 110 and the gate driver 120. The display panel driver may further include a demultiplexer array 112 disposed between the data driver 110 and the data lines 102.

The demultiplexer array 112 sequentially supplies data voltages output from the channels of the data driver 110 to the data lines 102 using a plurality of demultiplexers (DEMUXs). The demultiplexers may include a plurality of switch elements disposed on the display panel 100. When the demultiplexers are disposed between the output terminals of the data driver 110 and the data lines 102, the number of channels of the data driver 110 may be reduced. The demultiplexer array 112 may be omitted.

The display panel driving circuit may further include a touch sensor driver for driving the touch sensors. The touch sensor driver is omitted from FIG. 1. The touch sensor driver may be integrated into one drive integrated circuit (IC). In a mobile device or wearable device, the timing controller 130, the power supply 140, the data driver 110, the touch sensor driver, and the like may be integrated into one drive IC.

The display panel driver may operate in a low-speed driving mode under the control of the timing controller (TCON) 130. The low-speed driving mode may be set to reduce power consumption of the display device when an analysis of the input image shows no change in the input image for a preset number of frames. In the low-speed driving mode, the power consumption of the display panel driving circuit and the display panel 100 may be reduced by lowering the refresh rate of the pixels when a still image is input for a predetermined time or longer. The low-speed driving mode is not limited to the case in which a still image is input. For example, when the display device operates in a standby mode or when a user command or an input image is not input to the display panel driver for a predetermined time or more, the display panel driver may operate in the low-speed driving mode.

The data driver 110 generates a data voltage Vdata by converting pixel data of an input image received from the timing controller 130 with a gamma compensation voltage every frame period by using a digital-to-analog converter (DAC). The gamma reference voltage VGMA is divided for the respective gray scales through a voltage divider circuit, and the gamma compensation voltage divided from the gamma reference voltage VGMA is provided to the DAC of the data driver 110. The data voltage Vdata is output through an output buffer AMP in each of the channels of the data driver 110.

The gate driver 120 may include a scan driver 121 and an emission (EM) driver 122. The gate driver 120 may be implemented as a gate-in-panel (GIP) circuit formed directly on the circuit layer 12 of the display panel 100 together with the TFT array of the pixel array AA. The GIP circuit may be disposed on a bezel area BZ that is a non-display area of the display panel 100 or dispersed in the pixel array on which an input image is reproduced. The gate driver 120 sequentially outputs gate signals to the gate lines 103 under the control of the timing controller 130.
The gate driver 120 may sequentially supply the gate signals to the gate lines 103 by shifting the gate signals using a shift register. The gate signals may include scan pulses, emission control pulses (hereinafter referred to as “EM pulses”), initial pulses, and sensing pulses. The shift register of the gate driver 120 outputs a pulse of the gate signal in response to a start pulse and a shift clock from the timing controller 130 and shifts the pulse according to the shift clock timing.

The timing controller 130 receives, from a host system (not shown), digital video data DATA of an input image and a timing signal synchronized therewith. The timing signal includes a vertical synchronization signal Vsync, a horizontal synchronization signal Hsync, a main clock CLK, a data enable signal DE, and the like. Because a vertical period and a horizontal period can be known by counting the data enable signal DE, the vertical synchronization signal Vsync and the horizontal synchronization signal Hsync may be omitted. The data enable signal DE has a cycle of one horizontal period (1H).

The host system may be any one of a television (TV) system, a tablet computer, a notebook computer, a navigation system, a personal computer (PC), a home theater system, a mobile device, and a vehicle system. The host system may scale an image signal from a video source according to the resolution of the display panel 100 and transmit the image signal to the timing controller 130 together with the timing signal.

The timing controller 130 multiplies an input frame frequency by i and controls the operation timing of the display panel driving circuit with a frame frequency of (input frame frequency × i) Hz, where i is a positive integer greater than 0. The input frame frequency is 60 Hz in the National Television Standards Committee (NTSC) scheme and 50 Hz in the phase-alternating line (PAL) scheme. The timing controller 130 may lower the driving frequency of the display panel driver by lowering the frame frequency to a frequency between 1 Hz and 30 Hz to lower the refresh rate of the pixels in the low-speed driving mode.

Based on the timing signals Vsync, Hsync, and DE received from the host system, the timing controller 130 generates a data timing control signal for controlling the operation timing of the data driver 110, a control signal for controlling the operation timing of the demultiplexer array 112, and a gate timing control signal for controlling the operation timing of the gate driver 120. The timing controller 130 controls the operation timing of the display panel driver to synchronize the data driver 110, the demultiplexer array 112, the touch sensor driver, and the gate driver 120.

The voltage level of the gate timing control signal output from the timing controller 130 may be converted into the gate-on voltages VGH and VEH and the gate-off voltages VGL and VEL through a level shifter (not shown) and then supplied to the gate driver 120. That is, the level shifter converts a low level voltage of the gate timing control signal into the gate-off voltages VGL and VEL and converts a high level voltage of the gate timing control signal into the gate-on voltages VGH and VEH. The gate timing signal includes the start pulse and the shift clock.

Due to process variations and device characteristic variations caused in a manufacturing process of the display panel 100, there may be a difference in the electrical characteristics of the driving element between the pixels, and this difference may increase as the driving time of the pixels elapses.
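The timing relationships above lend themselves to a quick numerical illustration. The following sketch is illustrative only; the pixel line count, the multiplication factor i, and the chosen frequencies are assumptions, not values from the disclosure.

```python
# Worked numbers for the timing relationships described above; the panel
# line count and frequencies are illustrative assumptions only.

input_frame_hz = 60          # NTSC input frame frequency
i = 2                        # assumed multiplication factor applied by the TCON
n_pixel_lines = 2160         # assumed total number of pixel lines L1..Ln

driving_hz = input_frame_hz * i                   # 120 Hz panel driving
one_h_us = 1e6 / (driving_hz * n_pixel_lines)     # 1H = frame period / lines
print(f"1H = {one_h_us:.2f} us at {driving_hz} Hz")   # -> 1H = 3.86 us

low_speed_hz = 1             # low-speed mode refresh between 1 Hz and 30 Hz
print(f"refresh period in low-speed mode: {1/low_speed_hz:.1f} s")
```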
An internal compensation technology or an external compensation technology may be applied to an organic light-emitting diode display to compensate for the variations in the electrical characteristics of the driving element between the pixels. The internal compensation technology samples a threshold voltage of the driving element for each sub-pixel using an internal compensation circuit implemented in each pixel circuit and compensates the gate-source voltage Vgs of the driving element by as much as the threshold voltage. The external compensation technology senses in real time a current or voltage of the driving element, which changes according to the electrical characteristics of the driving element, using an external compensation circuit. The external compensation technology compensates the variation (or change) in the electrical characteristics of the driving element in each pixel in real time by modulating the pixel data (digital data) of the input image by as much as the electrical characteristic variation (or change) of the driving element sensed for each pixel. The display panel driver may drive the pixels using the external compensation technology and/or the internal compensation technology.

FIG. 3 is a circuit diagram illustrating a pixel circuit connected to an external compensation circuit according to one embodiment of the present disclosure.

Referring to FIG. 3, the pixel circuit includes a light-emitting element EL, a driving element DT which supplies a current to the light-emitting element EL, a first switch element M01 which connects a pixel driving voltage line 41 to a first node n1 in response to an emission control signal EM, a second switch element M02 which connects a data line 40 to node n2 in response to a scan signal SCAN, a capacitor Cst connected to a gate electrode of the driving element DT, a third switch element M03 which connects a reference voltage line 43 to node n3 in response to a sensing signal SENSE, and a fourth switch element M04 which connects an initialization voltage line 44 to node n2 in response to an initialization signal INIT.

A pixel driving voltage EVDD is applied to a first electrode of the driving element DT through the first power line 41. The driving element DT drives the light-emitting element EL by supplying a current to the light-emitting element EL according to a gate-source voltage Vgs. The light-emitting element EL is turned on and emits light when a forward voltage between its anode and cathode is greater than or equal to a threshold voltage. A low potential voltage EVSS is applied to the cathode of the light-emitting element EL. The capacitor Cst is connected between the gate electrode and a second electrode of the driving element DT to maintain the gate-source voltage Vgs of the driving element DT.

The first switch element M01 is turned on according to a gate-on voltage of the emission control signal EM applied from a gate line to connect the pixel driving voltage line 41 to the first node n1. The second switch element M02 is turned on according to a gate-on voltage of the scan signal SCAN applied from the gate line to connect the data line 40 to the gate electrode of the driving element DT and the capacitor Cst. The third switch element M03 applies a reference voltage VpreR in response to the sensing signal SENSE. The reference voltage VpreR is applied to the pixel circuit through the reference voltage line 43.
The fourth switch element M04 is turned on according to a gate-on voltage of the initialization signal INIT to connect the initialization voltage line 44 to the gate electrode of the driving element DT and the capacitor Cst. The light-emitting element EL may be implemented as an OLED. The OLED includes an organic compound layer formed between an anode and a cathode. The organic compound layer may include a hole injection layer (HIL), a hole transport layer (HTL), a light-emitting layer (EML), an electron transport layer (ETL), an electron injection layer (EIL), and the like, but is not limited thereto. The switch elements M01 and M02 may be implemented as n-channel oxide thin film transistors (TFTs). An organic light-emitting diode used as the light-emitting element may have a tandem structure in which a plurality of light-emitting layers are stacked. The organic light-emitting diode having the tandem structure may improve the luminance and lifespan of the pixel. In this case, in a sensing mode, a current flowing through a channel of the driving element DT or a voltage between the driving element DT and the light-emitting element EL is sensed through the reference voltage line 43. The current flowing through the reference voltage line 43 is converted to a voltage through an integrator and is converted to digital data through an analog-to-digital converter (ADC). This digital data is sensing data including a threshold voltage or mobility information of the driving element DT. The sensing data is transmitted to a data operation unit. The data operation unit may receive the sensing data from the analog-to-digital converter to compensate for driving deviation and deterioration of the pixels by adding or multiplying a compensation value selected based on the sensing data to the pixel data.
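The add-or-multiply correction performed by the data operation unit can be sketched in a few lines. The following Python fragment is an illustration only, not the patent's implementation; the array names, the split into an additive threshold-voltage term and a multiplicative mobility term, and the 8-bit code range are assumptions.

```python
import numpy as np

def compensate_pixel_data(pixel_data, vth_offset, mobility_gain, bit_depth=8):
    """Modulate digital pixel data by the sensed characteristic deviation:
    multiply by a gain (mobility term) and add an offset (Vth term)."""
    max_code = (1 << bit_depth) - 1
    compensated = pixel_data.astype(np.float64) * mobility_gain + vth_offset
    # Clamp back to the valid code range before sending to the data driver.
    return np.clip(np.rint(compensated), 0, max_code).astype(np.uint16)

# Example: a 2x2 group of pixels whose driving elements drifted slightly.
data = np.array([[128, 200], [64, 255]])
offset = np.array([[3, -2], [0, 1]])           # codes compensating Vth drift
gain = np.array([[1.02, 0.98], [1.00, 1.01]])  # factors compensating mobility
print(compensate_pixel_data(data, offset, gain))
```

In the block-based scheme described below, the same correction would be applied with one compensation value per pixel block rather than per pixel.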
FIGS. 4 to 8 are views for describing an operation principle of a sensing circuit according to the embodiment. Referring to FIG. 4, a chip on film (COF) may be adhered to a display panel PNL. The COF includes a drive IC SIC and connects a source PCB SPCB to the display panel PNL. The drive IC SIC includes a data driver. A timing controller 130 and a power supply unit 150 may be mounted on a control PCB CPCB. The control PCB CPCB may be connected to the source PCB SPCB through a flexible circuit film, for example, a flexible printed circuit (FPC). The timing controller 130 may adjust the reference voltage Vref output from the power supply unit 150 on the basis of a result of comparing a reference voltage Vref sensed from the display panel PNL and the reference voltage Vref output from the power supply unit 150 by including the above-described reference voltage controller. The reference voltage Vref output from the power supply unit 150 may be supplied to the display panel PNL via the FPC, the source PCB SPCB, and the COF. Accordingly, in the display panel PNL, a lead-in unit IN of the reference voltage Vref is close to the drive IC SIC. Reference voltage lines REFL on the display panel PNL may be connected to the power supply unit 150 via the COF, the source PCB SPCB, and the FPC. The reference voltage lines REFL may be grouped by a shorting bar SB. The shorting bar may be formed on one side of the display panel PNL, and may be formed as a line on glass (LOG) line on the display panel rather than in the drive IC SIC. The reference voltage lines REFL connected to all pixels on the display panel PNL may be connected to the shorting bar SB in one embodiment. A sensing unit 160 senses a current flowing through a pixel power line to which a high potential voltage EVDD is applied when driving in a sensing mode after the power is turned off, in one embodiment. The sensing unit 160 provides the sensed current to the timing controller 130. Referring to FIG. 5, the sensing unit 160 may include a resistor R connected to the pixel power line and an ADC connected to the resistor R in parallel. The sensing unit 160 may further include a switch SW connected between the pixel power line and the resistor R. The switch SW is turned off in a display mode and turned on in the sensing mode. When the switch SW is turned off in the display mode, the high potential voltage EVDD is applied to a pixel PXL through the pixel power line as shown by the dotted line in FIG. 5. When the switch SW is turned on in the sensing mode, the high potential voltage EVDD is applied to the pixel PXL through the pixel power line and the resistor R, and a current flowing through the resistor R is sensed by the ADC. The ADC is configured to convert a voltage difference across the resistor into a digital value during the sensing mode. In one embodiment, the voltage difference is indicative of the current flowing through the pixel power line during the sensing mode. The timing controller (TCON) 130 receives the digital value and generates a compensation value for a corresponding pixel block to compensate for a change in electrical characteristics of the pixels 101 included in the corresponding block by adding or multiplying the compensation value to pixel data of the input image. Referring to FIG. 6A, in the embodiment, when driving in the sensing mode, the gate-on voltage of the emission control signal EM is applied to the first switch element M01, the gate-on voltage of the scan pulse SCAN is applied to the second switch element M02, and a gate-on voltage of the sensing signal SENSE is applied to the third switch element M03. The gate-on voltages are applied to the first, second, and third switch elements M01, M02, and M03, and the first, second, and third switch elements M01, M02, and M03 are turned on to form a current path through the driving element DT through which the current flowing through the pixel driving voltage line 41 flows to the reference voltage line 43 instead of flowing to the light-emitting element. Accordingly, in the embodiment, when driving in the sensing mode, current sensing may be performed without emitting light from the light-emitting element, and since light emission of the light-emitting element is suppressed, a visibility problem may be solved. Referring to FIG. 6B, in the embodiment, when driving in the sensing mode, since a gate-off voltage of the emission control signal EM is applied to the first switch element M01, a current is prevented from flowing through the pixel driving voltage line 41 even when gate-on voltages are applied to the second and third switch elements M02 and M03 and the second and third switch elements M02 and M03 are turned on. As described above, the pixel circuit may be selected by the emission control signal EM when driving in the sensing mode. That is, an amount of flowing current may be measured by allowing the current to flow only through a selected pixel circuit.
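The current-to-code conversion of the FIG. 5 sensing unit can be modeled directly. The Python sketch below is illustrative only; the resistor value, ADC full-scale voltage, and resolution are assumed values, not figures from the patent.

```python
R_SENSE = 10.0      # ohms; illustrative value for the resistor R
V_FULL_SCALE = 0.5  # volts; assumed ADC input range
BITS = 10           # assumed ADC resolution

def adc_code(i_block_amps: float) -> int:
    """Voltage difference across R developed by the block current,
    quantized by the ADC (switch SW closed, sensing mode)."""
    v_diff = i_block_amps * R_SENSE
    code = int(v_diff / V_FULL_SCALE * ((1 << BITS) - 1))
    return min(code, (1 << BITS) - 1)

def block_current(code: int) -> float:
    # Inverse conversion used when interpreting the digital value.
    return code / ((1 << BITS) - 1) * V_FULL_SCALE / R_SENSE

code = adc_code(18e-3)                       # e.g., 18 mA drawn by one block
print(code, round(block_current(code), 5))   # quantized code, recovered amps
```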
Referring to FIG. 7, the sensing unit senses the current in units of blocks including a predetermined number of pixels. Here, the block may have a square shape in which the number of pixels in a line direction X and the number of pixels in a column direction Y are the same, for example, a square shape of 30 pixels × 30 pixels. The block is not limited to the square shape and may be implemented in various shapes. The sensing unit 160 senses the current in units of blocks, and senses the current flowing through each block in a predetermined order. Different currents are sensed according to characteristics and deterioration levels of pixels included in each block. A method of sensing the current in units of blocks may shorten an overall sensing time, and may be implemented with a simple structure compared to a method of sensing the current in units of pixels. In the embodiment, a tact time and consistency can be improved by sensing the current flowing through each block in the column direction Y rather than sensing the current flowing through each block in the line direction X. Referring to FIG. 8, in the embodiment, a pixel structure for sensing the current in units of blocks is shown. The reference voltage lines and the high potential voltage line are connected to all pixels on the display panel to be shared, and a data voltage line is connected to each of the pixels in the column direction Y. Accordingly, a block in which sensing is performed according to whether data is applied may be selected even when the reference voltage and the high potential voltage are applied to all pixels on the display panel. For example, white data (e.g., white image data or first image data) is applied to all pixels in a first block ONBLK in which the sensing is performed, and black data (e.g., black image data or second image data) is applied to all pixels in a second block OFFBLK in which the sensing is not performed. Here, while the white data is applied to one block on the display panel, the black data is applied to the remaining blocks. When the white data is applied to all pixels in the first block in which the sensing is performed, the sensing unit 160 senses the current flowing through the pixel driving voltage line. In this case, since the current flowing through the pixel driving voltage line has a large value in units of blocks, an integrator is not required in the sensing unit. FIGS. 9A and 9B are views for comparatively describing a total sensing time. Referring to FIG. 9A, in the embodiment, when driving in the sensing mode, sensing data, that is, white data, may be applied to each block in the column direction Y, and the current flowing through each block may be sensed. In this case, a total sensing time Ttotal may be defined as in the following Equation 1.

Ttotal = [Taddressing + (Tsensing × N_Vblock)] × N_subpxl × N_Hblock [Equation 1]

Here, Taddressing is a period of time required to apply the sensing data, Tsensing is a time period for sensing the current flowing through each block, N_Vblock is the number of blocks located in the column direction Y, N_subpxl is the number of sub-pixels in the blocks located in the column direction Y, and N_Hblock is the number of blocks located in the line direction X. For example, when the total number of blocks is 36 × 64 and the number of pixels in each block is 30 × 30, for FHD 120 Hz RGB, the total sensing time Ttotal is [8.33 ms + (2 ms × 36)] × 3 × 64, that is, 15.42 seconds.
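Both sensing-time formulas are easy to check numerically. The Python sketch below is illustrative only (the variable names are mine); it evaluates Equation 1 and, for comparison, Equation 2 of the comparative example introduced next, using the FHD 120 Hz RGB example values above.

```python
T_ADDR, T_SENSE = 8.33e-3, 2e-3             # seconds, from the FHD example
N_VBLOCK, N_SUBPXL, N_HBLOCK = 36, 3, 64    # 36 x 64 blocks of 30 x 30 pixels

# Equation 1 (embodiment): address a block column once, then sense block by block.
t_embodiment = (T_ADDR + T_SENSE * N_VBLOCK) * N_SUBPXL * N_HBLOCK
# Equation 2 (comparative example, described next): re-address for every block.
t_comparative = (T_ADDR + T_SENSE) * N_SUBPXL * N_HBLOCK * N_VBLOCK

print(round(t_embodiment, 2))                  # -> 15.42 (seconds)
print(round(t_comparative, 1))                 # -> 71.4 (seconds)
print(round(t_comparative / t_embodiment, 1))  # -> 4.6x reduction in tact time
```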
Referring to FIG. 9B, in a comparative example, when driving in the sensing mode, sensing data, that is, white data, may be applied to each block in the line direction X, and the current flowing through each block may be sensed. In this case, the total sensing time Ttotal may be defined as in the following Equation 2.

Ttotal = (Taddressing + Tsensing) × N_subpxl × N_Hblock × N_Vblock [Equation 2]

For example, when the total number of blocks is 36 × 64 and the number of pixels in each block is 30 × 30, for FHD 120 Hz RGB, the total sensing time Ttotal is (8.33 ms + 2 ms) × 3 × 64 × 36, that is, 71.4 seconds.

TABLE 1
Classification   Comparative example                  Embodiment
Addressing       Taddressing × N_Hblock × N_Vblock    Taddressing × N_Hblock
Sensing          N_Hblock × N_Vblock × Tsensing       N_Hblock × N_Vblock × Tsensing

As shown in Table 1, since there is a large difference in the addressing time between the embodiment and the comparative example, it can be seen that the total sensing time is significantly reduced in the embodiment compared to the comparative example. FIGS. 10A to 10D are views illustrating a case in which a shape of the block is variously changed according to one embodiment. Referring to FIGS. 10A and 10B, a case in which a size of the block to be sensed is changed is shown. In FIG. 10A, a block is 10 × 10 pixels, whereas in FIG. 10B a block is 20 × 20 pixels. In this case, the tact time may be shortened according to the size of the block as in the following Table 2.

TABLE 2
Tact time for size of block (sec)
Block      Comparative example    Embodiment
10 × 10    649                    129
20 × 20    161                    34
30 × 30    71                     15
60 × 60    18                     4

Referring to FIGS. 10C and 10D, the number of blocks to which data is applied can be changed. For example, the data voltage may be applied to each block in the column direction Y (e.g., FIG. 10C), or the blocks in the column direction Y may be divided into a plurality of groups (e.g., FIG. 10D) to apply the data voltage in units of each group. As described above, since the tact time may be shortened compared to the comparative example with respect to the same block size, and the block size may become smaller with respect to the same tact time, consistency may increase. Accordingly, in the embodiment, various configurations for current sensing are possible, and it may be possible to change a design to an optimal configuration in consideration of tact time, block size, consistency, and the like. FIGS. 11A to 11D are views for describing a principle of selecting a sensing region according to one embodiment. Referring to FIG. 11A, in the embodiment, the sensing data, that is, white data, may be applied to pixels in a sensing region M1 to be sensed in a vertical direction or the column direction Y along the direction of the data line, and black data may be applied to pixels in non-sensing regions M2 to M8 not to be sensed. In the embodiment, the sensing region to be sensed may be selected by applying the white data. A current may be sensed for each block included in the selected sensing region. Referring to FIG. 11B, the current should be sensed in units of one block among the blocks included in the sensing region. In this case, the block may be selected using an emission control signal. In the embodiment, the emission control signal for selecting blocks N1 to N6 included in the sensing region M1 disposed in the column direction Y along the data line may be sequentially applied. Referring to FIG. 11C, when a first block N1 included in the sensing region M1 is selected, a high voltage level of the emission control signal is applied to the first block N1 and thus a pixel driving voltage EVDD flows through the driving element, and a low voltage level of the emission control signal may be sequentially applied to the second to sixth blocks N2 to N6 included in the sensing region M1.
In this case, since each of the sub-pixels of the first block N1 to be sensed in a block group is implemented with the circuit shown in FIG. 3, and thus the first switch element M01 is turned on by the high voltage level of the emission control signal, the pixel driving voltage EVDD may be applied to form a current path. However, since each of the sub-pixels of the remaining blocks in the sensing region to be sensed is implemented with the circuit shown in FIG. 3, and thus the first switch element M01 is turned off by the low voltage level of the emission control signal, the pixel driving voltage EVDD is not applied and thus the current path may not be formed. Referring to FIG. 11D, during a sensing section after an addressing section during which the white data is applied to the blocks N1 to N6 in the sensing region, the blocks N1, N2, N3, N4, N5, and N6 in the sensing region to be sensed may be sequentially driven to sense the current. FIG. 12 is a view illustrating a shift register of a gate driver according to the embodiment of the present disclosure, FIG. 13 is a view illustrating a signal transmission unit of a sensing driver according to the embodiment, FIG. 14 is a view illustrating a signal transmission unit of an EM driver according to the embodiment, and FIG. 15 is a waveform diagram illustrating an output signal of the signal transmission unit shown in FIG. 14. Referring to FIG. 12, a gate driver 120 according to the embodiment includes a plurality of signal processing units STG1, STG2, STG3, STG4, STG5, STG6, and STG7 which are cascade-connected via a carry line through which a carry signal is transmitted. The timing controller 130 may adjust a width and a multi-output of an output signal GOUT of the gate driver using a start pulse Vst input to the gate driver 120. Each of the signal processing units STG1, STG2, STG3, STG4, STG5, STG6, and STG7 receives a start pulse or a carry signal output from a previous odd-numbered or even-numbered signal processing unit and clock signals CLK1, CLK2, CLK3, and CLK4. A first signal processing unit STG1 starts to be driven according to the start pulse Vst, and the other signal processing units STG2, STG3, STG4, STG5, STG6, and STG7 receive the carry signal from the previous odd-numbered or even-numbered signal processing unit and start to be driven. Referring to FIG. 13, each signal processing unit of the gate driver 120 according to the embodiment includes a first circuit unit 210 and a second circuit unit 220. The first circuit unit 210 charges or discharges a first control node (hereinafter referred to as a "Q node") and a second control node (hereinafter referred to as a "Qb node"). In this case, the first circuit unit 210 includes a control circuit which serves to control charging and discharging of the Q node Q and the Qb node Qb and an inverter circuit which inverts a voltage of the Q node Q and applies the voltage to the Qb node Qb. The inverter circuit includes a Qb node charging unit and a Qb node discharging unit. The second circuit unit 220 outputs a sensing signal SEOUT(n) in response to potentials of the Q node Q and the Qb node Qb. The second circuit unit 220 includes first buffer transistors T1 and T2 which output the sensing signal SEOUT(n). The first buffer transistors T1 and T2 are divided into a first pull-up transistor T1 that is turned on based on the potential of the Q node Q and a first pull-down transistor T2 that is turned on based on the potential of the Qb node Qb.
In the first pull-up transistor T1, a gate electrode is connected to the Q node Q, a first electrode is connected to a clock signal line SECLK(n), and a second electrode is connected to a first output terminal SEOUT(n). In the first pull-down transistor T2, a gate electrode is connected to the Qb node Qb, a first electrode is connected to the first output terminal SEOUT(n), and a second electrode is connected to a low potential voltage line SEGVSS0. The first buffer transistors T1 and T2 output the sensing signal SEOUT(n) based on a clock signal applied through the clock signal line SECLK(n) and a low potential voltage applied through the low potential voltage line SEGVSS0. In this case, as shown in FIG. 6A, in the embodiment, when driving in the sensing mode, a voltage of the sensing signal is set to maintain a high voltage level so that a current path is formed to bypass the light-emitting element. For example, in the embodiment, when driving in the sensing mode, voltages applied to the clock signal line SECLK(n) and the low potential voltage line SEGVSS0 may be set to be high voltage levels. Referring to FIG. 14, each signal transmission unit of the gate driver according to the embodiment includes a first circuit unit 211 and a second circuit unit 221. The first circuit unit 211 charges or discharges a first control node (hereinafter referred to as a "Q node") and a second control node (hereinafter referred to as a "Qb node"). In this case, the first circuit unit 211 includes a control circuit which serves to control charging and discharging of the Q node Q and the Qb node Qb and an inverter circuit which inverts a voltage of the Q node Q and applies the voltage to the Qb node Qb. The inverter circuit includes a Qb node charging unit and a Qb node discharging unit. The second circuit unit 221 outputs an emission control signal EMOUT(n) in response to potentials of the Q node Q and the Qb node Qb. The second circuit unit 221 includes first buffer transistors T1 and T2 which output the emission control signal EMOUT(n). The first buffer transistors T1 and T2 are divided into a first pull-up transistor T1 that is turned on based on the potential of the Q node Q and a first pull-down transistor T2 that is turned on based on the potential of the Qb node Qb. In the first pull-up transistor T1, a gate electrode is connected to the Q node Q, a first electrode is connected to a clock signal line EMCLK(n), and a second electrode is connected to a first output terminal EMOUT(n). In the first pull-down transistor T2, a gate electrode is connected to the Qb node Qb, a first electrode is connected to the first output terminal EMOUT(n), and a second electrode is connected to a low potential voltage line EMGVSS0. The first buffer transistors T1 and T2 output the emission control signal EMOUT(n) based on a clock signal applied through the clock signal line EMCLK(n) and a low potential voltage applied through the low potential voltage line EMGVSS0. Referring to FIG. 15, each of the signal processing units STG1, STG2, STG3, STG4, STG5, STG6, and STG7 sequentially outputs the emission control signal by shifting the start pulse or the carry signal output from a previous signal processing unit according to a timing of the clock signal. In this case, in the embodiment, the signal processing units may sequentially output the emission control signal in units of blocks. Here, a case in which five pixel lines are included in one block is shown as an example.
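The behavior of these buffer stages and the block-sequential emission control of FIG. 15 can be summarized in a short simulation. The Python sketch below is illustrative only (the names, the block count of six, and the logical voltage levels are assumptions): in the sensing mode the sensing-signal output is pinned high by driving both SECLK(n) and SEGVSS0 high, while the EM outputs go high for exactly one block per sensing section.

```python
VGH, VGL = 1, 0                      # logical gate-on/gate-off levels

def buffer_out(q_on: bool, v_clk: int, v_low: int) -> int:
    """First buffer pair: the pull-up passes the clock-line voltage when the
    Q node is charged; otherwise the pull-down passes the low-line voltage."""
    return v_clk if q_on else v_low

# Sensing signal SEOUT(n): in the sensing mode both SECLK(n) and SEGVSS0
# are set to the high level, so the output is high whichever transistor is on.
print([buffer_out(q, VGH, VGH) for q in (True, False)])   # -> [1, 1]

# Emission control outputs EMOUT for six blocks: one block per sensing section.
N_BLOCKS = 6
for section in range(N_BLOCKS):
    em = [VGH if block == section else VGL for block in range(N_BLOCKS)]
    print(f"sensing section {section + 1}: EM per block {em}")
```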
For example, in a first sensing section (①), an emission control signal having a high voltage level may be applied according to a clock signal EMCLK(ON) from signal transmission units connected to the first block, and in a second sensing section (②), an emission control signal having a high voltage level may be applied according to the clock signal EMCLK(ON) from signal transmission units connected to the second block. The emission control signal applied to the first block may be applied at a high voltage level according to a rising edge of the clock signal EMCLK(ON) in the first sensing section, and may be applied at a low voltage level according to the rising edge of the clock signal EMCLK(OFF) in the second sensing section. That is, the emission control signal from the signal transmission unit may be applied at the high voltage level only during a section in which a current amount of the corresponding block is sensed. Accordingly, as shown in FIGS. 6A and 6B, in the embodiment, when driving in the sensing mode, since a voltage of the emission control signal is applied at a high voltage level to a pixel circuit located in a selected block in the sensing region, and at a low voltage level to a pixel circuit located in a non-selected block in the sensing region, the block may be selected by the emission control signal. In one embodiment, a display device comprises: a plurality of pixels that are connected to a power line to which a pixel driving voltage is supplied, the plurality of pixels divided into a plurality of columns of pixel blocks that extend along a first direction and each pixel block including a different subset of pixels from the plurality of pixels; a plurality of data lines that extend in the first direction and are connected to the plurality of pixels, the plurality of data lines applying a plurality of data voltages of pixel data of an image to the plurality of pixels; a plurality of gate lines that are connected to the plurality of pixels and extend in a second direction that intersects the first direction, the plurality of gate lines applying gate signals to the plurality of pixels; a data driver configured to supply the plurality of data voltages of the image to the plurality of data lines during a display mode, and to supply sensing data to the plurality of data lines during a sensing mode; a gate driver configured to supply the gate signals to the plurality of gate lines; and a sensing circuit configured to sense current flowing through the power line that is connected to a respective subset of pixels included in each pixel block included in a column of pixel blocks from the plurality of columns of pixel blocks during the sensing mode, each of the respective subset of pixels included in each pixel block supplied the sensing data during the sensing mode. In one embodiment, the sensing circuit sequentially senses each pixel block included in the column of pixel blocks during the sensing mode such that the respective subset of pixels included in each pixel block are supplied the sensing data and are sensed based on the current flowing through the power line according to the sensing data. In one embodiment, the sensing data comprises white image data and the display panel driver is configured to supply the white image data to each pixel block from the column of pixel blocks that is being sensed and supplies black image data to remaining pixel blocks included in other columns of pixel blocks that are not being sensed.
In one embodiment, each of the plurality of pixels includes: a driving element including a first electrode of the driving element that is connected to a first node, a gate electrode of the driving element that is connected to a second node, and a second electrode of the driving element that is connected to a third node; a first switch element including a first electrode of the first switch element that is connected to the power line to which the pixel driving voltage is applied, a gate electrode of the first switch element to which an emission signal is applied, and a second electrode of the first switch element that is connected to the first node; a light emitting element including an anode connected to the third node and a cathode to which a low-potential power supply voltage is applied; a capacitor between the second node and the third node; a second switch element including a first electrode of the second switch element that is connected to a data line to which a data voltage from the plurality of data voltages is applied, a gate electrode of the second switch element to which a scan pulse is applied, and a second electrode of the second switch element that is connected to the second node; and a third switch element including a first electrode of the third switch element that is connected to the third node, a gate electrode of the third switch element to which a sensing pulse is applied, and a second electrode of the third switch element that is connected to a reference line to which a reference voltage is applied. In one embodiment, a respective first switch element included in each pixel of a target pixel block from the column of pixel blocks being sensed is turned on during the sensing mode responsive to the gate electrode of the respective first switch element being applied the emission signal at an on level, and a respective first switch element included in each pixel of remaining pixel blocks from the column of pixel blocks being sensed is turned off responsive to the gate electrode of the respective first switch element being applied the emission signal at an off level. In one embodiment, a respective first switch element included in each pixel of the remaining pixel blocks included in the other columns of pixel blocks that are supplied the black image data due to not being sensed is turned on during the sensing mode responsive to the gate electrode of the respective first switch element being applied the emission signal at the on level. In one embodiment, a current flows through light emitting elements included in the plurality of pixels during the display mode, but the current does not flow through light emitting elements included in pixel blocks from the column of pixel blocks being sensed during the sensing mode. In one embodiment, the sensing circuit includes: a resistor; a switch configured to connect the resistor to the power line in series during the sensing mode and configured to disconnect the resistor from the power line during the display mode; and an analog-to-digital converter connected to the resistor in parallel, the analog-to-digital converter configured to convert a voltage difference across the resistor into a digital value during the sensing mode, the voltage difference indicative of the current flowing through the power line during the sensing mode. In one embodiment, the pixel data of the image is adjusted by a compensation value based on the digital value.
In one embodiment, the gate driver includes: a shift register configured to output the sensing pulse, the shift register including a plurality of signal processing units that each include: a first transistor including a gate electrode of the first transistor that is connected to a first control node of the signal processing unit, a first electrode of the first transistor connected to a clock node, and a second electrode of the first transistor connected to an output node from which the sensing pulse is outputted; and a second transistor including a gate electrode of the second transistor coupled to a second control node of the signal processing unit, a first electrode of the second transistor connected to the output node, and a second electrode of the second transistor connected to a voltage node, and wherein during the display mode, a clock that switches between an on voltage and an off voltage is inputted to the clock node, a low-potential reference voltage is applied to the voltage node, and during the sensing mode, the on voltage is applied to each of the clock node and the voltage node. In one embodiment, a display device comprises: a plurality of pixels that are connected to a power line to which a pixel driving voltage is supplied; a plurality of data lines that extend in a first direction and are connected to the plurality of pixels, the plurality of data lines applying a plurality of data voltages of pixel data of an image to the plurality of pixels; a plurality of gate lines that are connected to the plurality of pixels and extend in a second direction that intersects the first direction, the plurality of gate lines applying gate signals to the plurality of pixels; a data driver configured to supply the plurality of data voltages of the image to the plurality of data lines during a display mode, and to supply sensing data to the plurality of data lines during a sensing mode; a gate driver configured to supply the gate signals to the plurality of gate lines; and a sensing circuit configured to sense current flowing through the power line that is connected to a subset of pixels from the plurality of pixels during the sensing mode, the subset of pixels arranged along the first direction. In one embodiment, the plurality of pixels are divided into a plurality of columns of pixel blocks that extend along the first direction and each pixel block includes a different subset of pixels from the plurality of pixels. In one embodiment, the subset of pixels is included in a pixel block from a column of pixel blocks being sensed, and pixels included in the column of pixel blocks are provided the sensing data including white image data, and pixels included in remaining columns of pixel blocks from the plurality of columns of pixel blocks that are not being sensed during the sensing mode are provided black image data. In one embodiment, the sensing circuit sequentially senses each pixel block included in the column of pixel blocks being sensed during the sensing mode such that respective subset of pixels included in each pixel block are supplied the white image data and are sensed based on the sensed current flowing through the power line according to the white image data. 
In one embodiment, each of the plurality of pixels includes: a driving element including a first electrode of the driving element that is connected to a first node, a gate electrode of the driving element that is connected to a second node, and a second electrode of the driving element that is connected to a third node; a first switch element including a first electrode of the first switch element that is connected to the power line to which the pixel driving voltage is applied, a gate electrode of the first switch element to which an emission signal is applied, and a second electrode of the first switch element that is connected to the first node; a light emitting element including an anode connected to the third node and a cathode to which a low-potential power supply voltage is applied; a capacitor between the second node and the third node; a second switch element including a first electrode of the second switch element that is connected to a data line to which a data voltage from the plurality of data voltages is applied, a gate electrode of the second switch element to which a scan pulse is applied, and a second electrode of the second switch element that is connected to the second node; and a third switch element including a first electrode of the third switch element that is connected to the third node, a gate electrode of the third switch element to which a sensing pulse is applied, and a second electrode of the third switch element that is connected to a second power line to which a reference voltage is applied. In one embodiment, a respective first switch element included in each pixel of a target pixel block from the column of pixel blocks being sensed is turned on during the sensing mode responsive to the gate electrode of the respective first switch element being applied the emission signal at an on level, and a respective first switch element included in remaining pixel blocks from the column of pixel blocks being sensed is turned off responsive to the gate electrode of the respective first switch element being applied the emission signal at an off level. In one embodiment, a respective first switch element included in each pixel of the remaining pixel blocks included in the remaining columns of pixel blocks that are supplied the black image data due to not being sensed is turned on during the sensing mode responsive to the gate electrode of the respective first switch element being applied the emission signal at the on level. In one embodiment, a sensing circuit comprises: a resistor; and a switch configured to serially connect the resistor to a power line that supplies a pixel driving voltage to a plurality of pixels of a display panel that are divided into a plurality of columns of pixel blocks during a sensing period, and configured to disconnect the resistor from the power line during a display period during which an image is displayed by the display panel, wherein the sensing circuit is configured to sequentially sense each pixel block included in a column of pixel blocks during the sensing period by measuring a current flowing through the power line connected to a subset of pixels from the plurality of pixels included in a target pixel block from the column responsive to sensing data being applied to the subset of pixels during the sensing mode.
In one embodiment, the sensing circuit further comprises: an analog-to-digital converter connected to the resistor in parallel, the analog-to-digital converter configured to convert a voltage difference across the resistor into a digital value during the sensing mode responsive to the current flowing through the power line. In one embodiment, pixel data of the image is adjusted by a compensation value based on the digital value. Although the embodiments of the present disclosure have been described in more detail with reference to the accompanying drawings, the present disclosure is not limited thereto and may be embodied in many different forms without departing from the technical concept of the present disclosure. Therefore, the embodiments disclosed in the present disclosure are provided for illustrative purposes only and are not intended to limit the technical concept of the present disclosure, and the scope of the technical concept of the present disclosure is not limited by these embodiments. Therefore, it should be understood that the above-described embodiments are illustrative in all aspects and do not limit the present disclosure. The protective scope of the present disclosure should be construed based on the following claims, and all the technical concepts in the equivalent scope thereof should be construed as falling within the scope of the present disclosure. | 56,687 |
11862058 | DESCRIPTION OF THE EMBODIMENTS The invention provides a load driving circuit capable of suppressing the increase in device scale, and sensitively, accurately, and quickly detecting the abnormality of a current output to a load to drive the load, a display driver including multiple load driving circuits, a display panel including the display driver, and a semiconductor device. According to the invention, by adopting a simple configuration of a detection circuit without using a resistor or a comparator, the abnormality of a current output to the load can be quickly, accurately, and sensitively detected without increasing the device scale. Embodiment 1 FIG. 2A is a circuit diagram illustrating a configuration of a load driving circuit 100 according to a first embodiment of the invention. The load driving circuit 100 shown in FIG. 2A is formed in a semiconductor IC chip as a semiconductor device, and is a circuit for driving a capacitive load 90 such as a data line of a liquid crystal display panel or an organic EL display panel. It is noted that a load component other than the display panel or a load circuit such as an electric circuit realizing various functions may also serve as the load 90. The load driving circuit 100 includes a load driving voltage generation circuit VGC, an output amplifier 10, and a detection circuit 40. The load driving voltage generation circuit VGC, the output amplifier 10, and the detection circuit 40 receive a load driving power potential AVDD via an AVDD power terminal as well as a load driving ground potential AVSS via an AVSS power terminal. The load driving voltage generation circuit VGC generates a driving voltage AMPIN having a voltage value for driving the load 90, and supplies the driving voltage AMPIN to the output amplifier 10. The output amplifier 10 outputs an output current to the load 90 connected to an output terminal P1 via the output terminal P1, so that the voltage of the output terminal P1 is equal to the driving voltage AMPIN. The detection circuit 40 detects whether the output current transmitted by the output amplifier 10 to the capacitive load 90 is in an output stable state. That is, since the voltage of the output end of the output amplifier 10 is consistent with the voltage value corresponding to the driving voltage AMPIN, the detection circuit 40 detects whether the output current is changed from a stable state (referred to as "reference state" in the following) within a predetermined range close to zero, including a state in which the output current is zero. The detection circuit 40 outputs a determination signal JD indicating that the output current is abnormal in the case where a change from the reference state (output stable state) in the output current is detected and that the output current is normal in the case where such change is not detected. The output amplifier 10 has a push-pull output-stage and a differential stage 15. The push-pull output-stage is formed by a Pch transistor 11 (referred to as "Pch" in the following) and an Nch transistor 12 (referred to as "Nch" in the following) as a first transistor and a second transistor whose conductivity types are different from each other. The differential stage 15 is an operational amplifier that receives the driving voltage AMPIN by using its own non-inverting input end (+) and receives the voltage (referred to as "output voltage") of the output terminal P1 by using its own inverting input terminal (−).
The differential stage 15 generates signals PG and NG having levels corresponding to the difference between the driving voltage AMPIN and the output voltage. That is, the differential stage 15 generates the signal PG whose level is lower as the driving voltage AMPIN is higher than the output voltage and the difference is greater, and generates the signal NG whose level is higher as the driving voltage AMPIN is lower than the output voltage and the difference is greater. The differential stage 15 supplies the signal PG to the gate of the transistor 11 via a node n1, and supplies the signal NG to the gate of the transistor 12 via a node n2. The source of the transistor 11 serving as a push-pull output-stage (referred to as the output-stage transistor 11) is applied with the load driving power potential AVDD, and the drain of the transistor 11 is connected to a node n0, the output terminal P1, and the drain of the transistor 12. The output-stage transistor 11 transmits the output current corresponding to the signal PG received by its own gate to the node n0 via its own drain. The source of the transistor 12 serving as a push-pull output-stage (referred to as the output-stage transistor 12) is applied with the load driving ground potential AVSS, and the drain of the transistor 12 is connected to the output terminal P1 and the drain of the transistor 11. The output-stage transistor 12 extracts the output current corresponding to the signal NG received by its own gate from the node n0. At the push-pull output-stage, when one of the current (charging current) transmitted by the output-stage transistor 11 to the output terminal P1 based on the power potential AVDD and the current (discharging current) extracted by the output-stage transistor 12 from the output terminal P1 to the terminal side of the ground potential AVSS increases, the other decreases. That is, a push-pull operation is performed. According to the above configuration, the output current obtained by subtracting the discharging current from the charging current is transmitted to the load 90 via the node n0 and the output terminal P1. Accordingly, an output driving signal having the driving voltage AMPIN is generated at the node n0, and the load 90 is driven by the output driving signal. The detection circuit 40 includes an active/inactive switching circuit 20, a coupling circuit 50, and a determination circuit 60. The active/inactive switching circuit 20 includes switches 21 and 23 set to one of the ON state and the OFF state complementary to each other and switches 22 and 24 set to one of the ON state and the OFF state complementary to each other. The active/inactive switching circuit 20 receives a control signal CNT that prompts various kinds of operation control. Here, in the case of receiving the control signal CNT that prompts to activate the detection circuit 40, the active/inactive switching circuit 20 sets the switches 21 and 22 to the OFF state, and the switches 23 and 24 to the ON state. Accordingly, the node n1 of the output amplifier 10 is connected with a node n5 of the coupling circuit 50, the node n2 of the output amplifier 10 is connected with a node n6 of the coupling circuit 50, and the detection circuit 40 becomes active (enabled) and performs the detection operation of the output current to be described afterwards.
Meanwhile, in the case of receiving the control signal CNT that prompts to deactivate the detection circuit 40, the active/inactive switching circuit 20 sets the switches 23 and 24 to the OFF state, and the switches 21 and 22 to the ON state. Accordingly, the node n5 is applied with the load driving power potential AVDD, the node n6 is applied with the load driving ground potential AVSS, and the connection between the node n1 and the node n5 and the connection between the node n2 and the node n6 are cut off together. Accordingly, the detection circuit 40 becomes inactive (disabled), and the detection operation of the output current to be described afterwards is stopped. Switches set to the ON state or the OFF state in association with the switches 23 and 24 may be further provided between a node n7 and the AVSS power terminal and between a node n8 and the AVDD power terminal. The coupling circuit 50 is formed by a Pch transistor 51 as the first transistor, a Pch transistor 52 as the second transistor, an Nch transistor 53 as the third transistor, and an Nch transistor 54 as the fourth transistor. The source of each of the transistors 51 and 52 is applied with the power potential AVDD, and the gate of each of the transistors 51 and 52 is connected to the node n5. The drain of the transistor 51 is connected to the drain of the transistor 53 via the node n7. The drain of the transistor 52 is connected to the drain of the transistor 54 via the node n8. The gate of each of the transistors 53 and 54 is connected to the node n6, and the source of each of the transistors 53 and 54 is applied with the ground potential AVSS. With such configuration, in the coupling circuit 50, a first mirror current pair (I1, I3) with respect to the currents flowing in the output-stage transistors 11 and 12 of the output amplifier 10 is generated by using the transistors 51 and 53. In addition, a second mirror current pair (I2, I4) with respect to the currents flowing in the output-stage transistors 11 and 12 of the output amplifier 10 is generated by using the transistors 52 and 54. That is, the transistor 51 generates the first current I1 of the source type including a mirror current with respect to the current flowing in the output-stage transistor 11, and the transistor 52 generates the second current I2 of the source type including a mirror current with respect to the current flowing in the output-stage transistor 11. The transistor 53 generates the third current I3 of the sink type including a mirror current with respect to the current flowing in the output-stage transistor 12, and the transistor 54 generates the fourth current I4 of the sink type including a mirror current with respect to the current flowing in the output-stage transistor 12. Here, in the coupling circuit 50, the first current I1 and the third current I3 are coupled at the node n7. That is, the first current I1 is transmitted to the node n7, and the third current I3 is extracted from the node n7. Accordingly, in the coupling circuit 50, the voltage generated at the node n7 is output as a first voltage O1. In addition, in the coupling circuit 50, the second current I2 and the fourth current I4 are coupled at the node n8. That is, the second current I2 is transmitted to the node n8, and the fourth current I4 is extracted from the node n8. Accordingly, in the coupling circuit 50, the voltage generated at the node n8 is output as a second voltage O2. That is, the node n7 serves as the first output node of the coupling circuit 50, and the node n8 serves as the second output node of the coupling circuit 50.
It is noted that transistors 51 to 54 have their current output capabilities respectively set so that, in the reference state (output stable state) of the output current, the third current I3 is greater than the first current I1, and the fourth current I4 is smaller than the second current I2. The determination circuit 60 receives the first voltage and the second voltage (O1, O2), and, based on the logic values of the first voltage and the second voltage, detects whether there is a change from the reference state (output stable state) with respect to the output current of the output amplifier 10 and the tendency of the change. In addition, based on whether a change is detected and on the tendency of the change, the determination circuit 60 determines whether the output current of the output amplifier 10 is normal or abnormal, and, in the case where it determines that the output current is abnormal, determines that an abnormal current flows in one of the output-stage transistors 11 and 12. The determination circuit 60 outputs a determination signal JD indicating the determination result. It is noted that the determination circuit 60 is formed in a logic circuit operated by using a power potential VDD and a ground potential VSS for a logic circuit. FIG. 2B is a diagram illustrating a determination operation of the determination circuit 60 based on the first voltage and the second voltage (O1, O2) in the load driving circuit 100 shown in FIG. 2A. As shown in FIG. 2B, in the case where the binary logic values (L or H) represented by the first voltage O1 and the second voltage O2 are different from each other, the determination circuit 60 outputs the determination signal JD indicating that the output current of the output amplifier 10 is normal. Meanwhile, in the case where the logic values represented by the voltages O1 and O2 are equal, such as the case where the logic values are both the logic values L, the determination circuit 60 outputs the determination signal JD indicating that there is abnormality in the output current output by the output-stage transistor 12 of the output amplifier 10. Also, in the case where the logic values represented by the voltages O1 and O2 are both H, the determination circuit 60 outputs the determination signal JD indicating that there is abnormality in the output current output by the output-stage transistor 11 of the output amplifier 10.
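The FIG. 2B mapping can be written down as a small lookup. The Python sketch below is an illustration only; the function name and the result strings are mine, not the patent's.

```python
def judge(o1: str, o2: str) -> str:
    """Determination per FIG. 2B from the logic values of (O1, O2)."""
    if (o1, o2) == ("L", "H"):
        return "normal: output current in the reference state"
    if (o1, o2) == ("L", "L"):
        return "abnormal: current of output-stage transistor 12 increased"
    if (o1, o2) == ("H", "H"):
        return "abnormal: current of output-stage transistor 11 increased"
    return "undefined: (H, L) cannot occur when I1 < I3 and I2 > I4 hold"

for pair in (("L", "H"), ("L", "L"), ("H", "H")):
    print(pair, "->", judge(*pair))
```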
In the following, the operation of the load driving circuit 100 shown in FIGS. 2A and 2B will be further described in detail. As shown in FIG. 2A, the voltages supplied to the sources and the gates of the transistors 51 and 52 of the coupling circuit 50 and the output-stage transistor 11 are the same. Therefore, the transistors 51 and 52 generate the mirror currents I1 and I2 corresponding to the current flowing in the output-stage transistor 11. Likewise, the voltages supplied to the sources and the gates of the transistors 53 and 54 and the output-stage transistor 12 are the same. Therefore, the transistors 53 and 54 generate the mirror currents I3 and I4 corresponding to the current flowing in the output-stage transistor 12. In addition, the currents I1 and I3 generated by using the transistors 51 and 53 form the first mirror current pair with respect to the currents flowing in the output-stage transistors 11 and 12. In addition, the currents I2 and I4 generated by using the transistors 52 and 54 form the second mirror current pair with respect to the currents flowing in the output-stage transistors 11 and 12. The four currents I1, I2, I3, and I4 generated in the coupling circuit 50 are formed by the source-type currents I1 and I2 mirroring the current of the output-stage transistor 11 and the sink-type currents I3 and I4 mirroring the current of the output-stage transistor 12. The coupling circuit 50 couples the source-type current I1 and the sink-type current I3 at the first output end (node n7) and outputs the voltage O1, and couples the source-type current I2 and the sink-type current I4 at the second output end (node n8) and outputs the voltage O2. In the coupling circuit 50, the respective current output capabilities of the transistors 51 to 54 are set so that, in the reference state of the output amplifier 10, I1 < I3 and I2 > I4 among the currents I1, I2, I3, and I4. Accordingly, in the reference state, the voltage of the first output end (node n7) of the coupling circuit 50 is at a low level (AVSS), the voltage of the second output end (node n8) is at a high level (AVDD), and the logic values of the output voltages (O1, O2) are (L, H). Here, in the case where an abnormal current flows from the output amplifier 10 to the load 90, the current of one of the output-stage transistors 11 and 12 increases, and the current of the other decreases. In this case, the voltages (O1, O2) output from the coupling circuit 50 become (L, L) or (H, H). The determination circuit 60 receives the two voltages (O1, O2) output from the coupling circuit 50, and, based on the logic values of the voltages (O1, O2), determines whether the output current of the output amplifier 10 is changed from the reference state and outputs the determination signal JD. The determination circuit 60 performs determination based on the logic values of the output voltages (O1, O2) of the coupling circuit 50, as shown in FIG. 2B. The determination circuit 60 determines that the output current is in the reference state when the voltages (O1, O2) are (L, H). Meanwhile, when the voltages (O1, O2) are (L, L), the determination circuit 60 detects that the current of the output-stage transistor 12 increases by a predetermined amount or more and determines that the output current is abnormal, and when the voltages (O1, O2) are (H, H), the determination circuit 60 detects that the current of the output-stage transistor 11 increases by a predetermined amount or more and determines that the output current is abnormal. Next, the setting of the magnitude of each of the currents I1, I2, I3, and I4 of the coupling circuit 50 in the reference state is described. The ratio of the current values of the currents I1, I2, I3, and I4 in the reference state can be set by using a channel width ratio of the transistors 51, 52, 53, and 54, for example. At the time of the reference state as described above, the currents (m·Io) respectively flowing in the output-stage transistors 11 and 12, that is, idling currents, are equal. Here, the channel width of each of the transistors 51 to 54 is set as follows in order to suppress the current consumption of the detection circuit 40. For example, in the case where the output-stage transistor 11 is set as a transistor of a total channel width (m·Wp) formed by connecting in parallel m (m being an integer of 1 or more) transistors having a predetermined channel width Wp, the channel width of each of the transistors 51 and 52 is set to the channel width Wp.
In addition, in the case where the output-stage transistor 12 is set as a transistor of a total channel width (m·Wn) formed by connecting in parallel m transistors having a predetermined channel width Wn, the channel width of each of the transistors 53 and 54 is set to the channel width Wn. Accordingly, each current of the detection circuit 40 is suppressed to one m-th (1/m) of the current flowing in the output-stage transistors 11 and 12. Specifically, the channel widths of the transistors 51 and 54 are respectively set as Wp and Wn, the channel width of the transistor 53 is set as Wn+, which is greater than Wn, and the channel width of the transistor 52 is set as Wp+, which is greater than Wp. Accordingly, the relationship among the magnitudes of the current values of the currents I1, I2, I3, and I4 in the reference state can be set as follows: I1 < I3 and I2 > I4. It is noted that, regarding the channel length, it is preferable that the channel lengths of transistors of the same conductivity type are the same in order to match the threshold voltage. In addition, regarding the setting of the magnitude of the currents I1 and I3 and the magnitude of the currents I2 and I4, such magnitudes are set by taking into consideration the sensitivity and the accuracy for detecting the change from the reference state in the currents flowing in the output-stage transistors 11 and 12. That is, in the case where the currents of the output-stage transistors 11 and 12 are changed, and the actual currents I1 and I3 are changed by reversing the current magnitude relationship of I1 < I3 in the reference state into I1 > I3, or the actual currents I2 and I4 are changed by reversing the current magnitude relationship of I2 > I4 in the reference state into I2 < I4, the logic values of the voltages (O1, O2) are changed from the values in the reference state, and the output current is determined as abnormal by the determination circuit 60. It is noted that, with respect to a slight change due to element manufacturing variation or the ambient temperature within a predetermined range, the amplitudes of the currents can be kept within the range of the reference state by setting I1 < I3 and I2 > I4. In the following, the effects of the detection circuit 40 shown in FIG. 2A will be described. The detection circuit 40 adopts a configuration in which the mirror currents of the currents flowing in the output-stage transistors 11 and 12 forming the push-pull output-stage are generated, the mirror currents of the source type and the sink type are coupled, and the output voltages (O1, O2) are extracted from the coupling points thereof. Therefore, in the case where an abnormal current flows from the output amplifier 10 to the load 90, the current of one of the output-stage transistors 11 and 12 increases, and the current of the other decreases. At the same time, the mirror currents of the output-stage transistors 11 and 12 generate the same current change. Therefore, even if a relatively large current difference is set in the currents I1 and I3 and the currents I2 and I4 in the reference state, in the case where an abnormal current flows through, the output voltages (O1, O2) quickly transition to the logic values (L, L) or (H, H) indicating the determination result. Therefore, the influence of the transistor manufacturing variation on the detection circuit 40 is reduced, and the detection circuit 40 is capable of sensitively and accurately detecting an output current abnormality in a quick response.
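The sizing rules and the resulting logic values can be checked with a first-order model in which drain current scales with channel width at equal gate-source voltage. The Python sketch below is an illustration, not the patent's method; the value of m and the width margins Wp+/Wn+ are assumptions chosen only to satisfy I1 < I3 and I2 > I4 in the reference state.

```python
M = 20                        # output stage: m parallel unit transistors
WP, WN = 1.0, 1.0             # unit widths of transistors 51 and 54
WP_PLUS, WN_PLUS = 1.3, 1.3   # widths of transistors 52 and 53 (assumed margin)

def output_logic(i_m11: float, i_m12: float) -> tuple:
    """Mirror the output-stage currents, couple them at nodes n7/n8, and
    read the logic values (O1, O2): a node is 'H' when its source-type
    current exceeds its sink-type current, otherwise 'L'."""
    i1 = i_m11 * WP / (M * WP)          # transistor 51
    i2 = i_m11 * WP_PLUS / (M * WP)     # transistor 52
    i3 = i_m12 * WN_PLUS / (M * WN)     # transistor 53
    i4 = i_m12 * WN / (M * WN)          # transistor 54
    return ("H" if i1 > i3 else "L", "H" if i2 > i4 else "L")

idle = 2.0e-3  # equal idling currents m*Io in both output-stage transistors
print(output_logic(idle, idle))            # ('L', 'H'): reference state
print(output_logic(3 * idle, 0.2 * idle))  # ('H', 'H'): charge leaking out
print(output_logic(0.2 * idle, 3 * idle))  # ('L', 'L'): current forced in
```

The ('H', 'H') case corresponds to the short-circuit example of FIG. 3 discussed next, in which the current of the output-stage transistor 11 increases while that of the output-stage transistor 12 decreases.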
It is noted that, in the detection circuit 40 shown in FIG. 2A, whether the current amounts of the currents flowing in the output-stage transistors 11 and 12 are changed from the reference state is detected, and an analog/digital conversion circuit converting the detection result into a two-bit digital value is realized. In the following, an example of an abnormal current detection operation by the load driving circuit 100 shown in FIG. 2A is described with reference to FIG. 3. In the example shown in FIG. 3, the configuration of the load driving circuit 100 and the size of each transistor are the same as those shown in FIG. 2A. In addition, in the example shown in FIG. 3, the load 90 is set as a data line (capacitive load) of the display panel, and the data line of the load 90 is short-circuited with an adjacent load (or a peripheral wiring) 99 due to a crack, etc., generated in the display panel. Because of the short circuit, a current is discharged from the data line of the load 90 to the adjacent data line or a power system wiring via the short-circuited part. At this time, among the respective currents of the output-stage transistors 11 and 12, the current of the output-stage transistor 11 is increased from the reference state, and the current of the output-stage transistor 12 is decreased from the reference state. Accordingly, in the case where the detection circuit 40 is activated, the currents I1 and I2 of the transistors 51 and 52 mirroring the current of the output-stage transistor 11 also increase, and the currents I3 and I4 of the transistors 53 and 54 mirroring the output-stage transistor 12 decrease. At this time, although the amplitude relationship among the currents I1 to I4 in the reference state is I1 < I3 and I2 > I4, the relative amplitudes of the currents I1 and I3 are reversed into I1 > I3 due to the occurrence of the abnormal current, and the difference between the currents I2 and I4 further increases. Accordingly, the output voltages (O1, O2) of the coupling circuit 50 become (H, H), and the determination circuit 60 outputs the determination signal JD indicating that the output current is abnormal. In FIG. 3, an example is shown in which a current is discharged from the data line of the load 90 to the data line of the adjacent load 99 via the short-circuited part. However, in an example in which a current flows, in a reverse direction, to the data line of the load 90 via the short-circuited part, compared with the case of the reference state, the current of the output-stage transistor 11 decreases, and the current of the output-stage transistor 12 increases. At this time, in the coupling circuit 50, the currents I3 and I4 increase, and the currents I1 and I2 decrease. Therefore, the output voltages (O1, O2) of the coupling circuit 50 become (L, L), and the determination circuit 60 outputs the determination signal JD indicating that the output current is abnormal. It is noted that, differing from the load driving circuit 100 shown in FIG. 1 or 2A that assumes a capacitive load, in the case of a driving circuit in which the output amplifier 10 outputs a constant current to the load 90 at the time of the reference state, such as a power amplifier, the currents of the output-stage transistors 11 and 12 at the time of the reference state are not the same.
However, even in such a case, by adjusting the mirror ratio with respect to the currents of the output-stage transistors 11 and 12 and setting the current output capability of each of the transistors 51 to 54 so that the magnitudes of the currents I1, I2, I3, and I4 in the reference state of the coupling circuit 50 satisfy I1 < I3 and I2 > I4, it is possible to detect a change (abnormal current) of the output current from the reference state. In addition, in the load driving circuit 100, by providing the active/inactive switching circuit 20 and activating the detection circuit 40 only during a predetermined detection operation period, the current consumption of the detection circuit 40 can be suppressed to the minimum. As explained above, the load driving circuit 100 includes the output amplifier 10 and the detection circuit 40; the output amplifier 10 includes the push-pull output stage formed by the first output-stage transistor 11 and the second output-stage transistor 12 of different conductivity types, and the detection circuit 40 detects an abnormality of the output current output by the output amplifier 10 to the load. The detection circuit 40 includes the coupling circuit 50 and the determination circuit 60 as follows. That is, the coupling circuit 50 respectively generates the first current and the second current (I1, I2) that are mirror currents with respect to the current flowing in the first output-stage transistor 11, and respectively generates the third current and the fourth current (I3, I4) that are mirror currents with respect to the current flowing in the second output-stage transistor 12. In addition, the coupling circuit 50 couples the first current (I1) and the third current (I3) at the first output node (n7), and outputs the voltage generated at the first output node (n7) as the first voltage (O1). In addition, the coupling circuit 50 couples the second current (I2) and the fourth current (I4) at the second output node (n8), and outputs the voltage generated at the second output node (n8) as the second voltage (O2). It is noted that, in the coupling circuit 50, the first to fourth currents are respectively generated so that, in the reference state in which the output current is stable within the predetermined range, the third current (I3) is greater than the first current (I1), and the second current (I2) is greater than the fourth current (I4). The determination circuit 60 detects whether the output current has changed from the reference state that is stable within the predetermined range based on the first voltage and the second voltage (O1, O2), and, in the case of detecting a change, outputs the determination signal JD indicating that the output current is abnormal and, in the case of not detecting a change, outputs the determination signal JD indicating that the output current is normal. At this time, between the currents flowing in the first output-stage transistor 11 and the second output-stage transistor 12 of the push-pull stage, if one of the currents increases, the other decreases. Accordingly, the voltage (O1) of the coupling point (n7) of the mirror currents (I1 and I3) and the voltage (O2) of the coupling point (n8) of the mirror currents (I2, I4), which serve as the basis for generating the determination signal JD, follow the output current output by the output amplifier 10, and quickly transition to the logic values indicating the determination result (whether the current is abnormal or not).
Moreover, according to the configuration of the coupling circuit 50, the influence of the manufacturing variation of the transistors respectively generating the first to fourth currents (I1 to I4) is reduced. Thus, according to the load driving circuit 100, compared with the case where a current abnormality is determined by using a resistor or a comparator, it is possible to sensitively, accurately, and quickly detect an abnormality of the output current.

Embodiment 2

FIG. 4A is a circuit diagram illustrating a configuration of a load driving circuit 100A according to a second embodiment of the invention. The load driving circuit 100A shown in FIG. 4A is formed by a semiconductor IC chip that is a semiconductor device, and includes the load driving voltage generation circuit VGC, the output amplifier 10, and a detection circuit 40A (including mirror current generation parts 41 and 42). The load driving voltage generation circuit VGC generates a driving voltage AMPIN having a voltage value for driving the load 90, and supplies the driving voltage AMPIN to the output amplifier 10. The output amplifier 10 and the detection circuit 40A shown in FIG. 4A receive the power potential AVDD for load driving via the AVDD power terminal, and receive the ground potential AVSS for load driving via the AVSS power terminal. In addition, the detection circuit 40A receives the power potential VDD for the logic circuit via the VDD power terminal, and receives the ground potential VSS for the logic circuit via the VSS power terminal. Although the ground potential AVSS for load driving and the ground potential VSS for the logic circuit are generally configured as a common potential, they may also be configured as different power potentials depending on the purpose of application. The relationship among the magnitudes of the respective power potentials is, for example, as follows: AVSS ≤ VSS < VDD ≤ AVDD. Although the mirror current generation parts 41 and 42 are shown in a region of a broken line indicating the output amplifier 10 in FIG. 4A, the mirror current generation parts 41 and 42 are components included in the detection circuit 40A. Like the configuration shown in FIG. 2A, the output amplifier 10 includes the output-stage transistors 11 and 12 as the push-pull output stage and the differential stage 15 formed by the operational amplifier. Since the operation thereof is the same as that shown in FIG. 2A, the description thereof will be omitted. The detection circuit 40A is a detection circuit in which the detection circuit 40 shown in FIG. 2A is made operable by using the power for the logic circuit, and includes the active/inactive switching circuit 20, a current folding part 30, the mirror current generation parts 41 and 42, the coupling circuit 50, and the determination circuit 60. The mirror current generation part 41 is formed by a Pch transistor 13 whose source is applied with the power potential AVDD and whose gate is connected to the node n1. It is noted that the drain of the transistor 13 is connected to the detection circuit 40A via a node n3. The mirror current generation part 42 is formed by an Nch transistor 14 whose source is applied with the ground potential AVSS and whose gate is connected to the node n2. It is noted that the drain of the transistor 14 is connected to the detection circuit 40A via a node n4. The circuit configuration of the active/inactive switching circuit 20 shown in FIG. 4A is the same as that shown in FIG. 2A.
However, in the configuration shown in FIG. 4A, the switch 21 of the active/inactive switching circuit 20 is connected between the drain (node n4) of the transistor 14 and the common gate (node n5) of the transistors 51 and 52, and the switch 22 is connected between the drain (node n3) of the transistor 13 and the common gate (node n6) of the transistors 53 and 54. The switch 23 is connected between the VDD power terminal and the common gate (node n5) of the transistors 51 and 52, and the switch 24 is connected between the VSS power terminal and the common gate (node n6) of the transistors 53 and 54. When receiving the control signal CNT with an activating instruction, the active/inactive switching circuit 20 turns on the switches 21 and 22 together, and turns off the switches 23 and 24 together. Accordingly, the drain of the transistor 13 forming the mirror current generation part 41 is connected to the node n6 via the node n3 and the switch 22, the drain of the transistor 14 forming the mirror current generation part 42 is connected to the node n5 via the node n4 and the switch 21, and the detection circuit 40A becomes enabled. Meanwhile, in the case of receiving the control signal CNT instructing deactivation, the switches 21 and 22 are turned off together, and the switches 23 and 24 are turned on together. Accordingly, the connection of the drain of each of the transistor 13 forming the mirror current generation part 41 and the transistor 14 forming the mirror current generation part 42 with the coupling circuit 50 is cut off, and the detection circuit 40A becomes disabled. The current folding part 30 includes a Pch transistor 31 as a first folding transistor and an Nch transistor 32 as a second folding transistor. In the transistor 31, the source is applied with the power potential VDD for the logic circuit, and the gate and the drain are connected to the node n5. In the transistor 32, the source is applied with the ground potential VSS for the logic circuit, and the gate and the drain are connected to the node n6. Like the coupling circuit 50 shown in FIG. 2A, the coupling circuit 50 includes the Pch transistors 51 and 52 and the Nch transistors 53 and 54. It is noted that, in the coupling circuit shown in FIG. 4A, although the connection of the transistors 51 to 54 is the same as that shown in FIG. 2A, the source of each of the transistors 51 and 52 is applied with the power potential VDD for the logic circuit, and the source of each of the transistors 53 and 54 is applied with the ground potential VSS for the logic circuit. The determination circuit 60 receives the first voltage and the second voltage (O1, O2), and, based on the logic values of the first voltage and the second voltage, detects whether the output current of the output amplifier 10 has changed from the reference state (output stable state). In addition, based on whether a change is detected, the determination circuit 60 determines whether the output current of the output amplifier 10 is normal or abnormal. In the case where the determination circuit 60 determines that the output current is abnormal, the determination circuit 60 further determines in which of the output-stage transistors 11 and 12 the abnormal current flows. The determination circuit 60 outputs a determination signal JD indicating the determination result. FIG. 4B is a diagram illustrating a determination operation of the determination circuit based on the first voltage O1 and the second voltage O2 in the load driving circuit 100A shown in FIG. 4A.
As shown in FIG. 4B, in the case where the binary logic values (L or H) represented by the first voltage O1 and the second voltage O2 are different from each other, the determination circuit 60 outputs the determination signal JD indicating that the output current of the output amplifier 10 is normal. Meanwhile, in the case where the logic values represented by the voltages O1 and O2 are equal, such as the case where both are the logic value L, the determination circuit 60 outputs the determination signal JD indicating that there is an abnormality in the output current output by the output-stage transistor 11 of the output amplifier 10. Also, in the case where the logic values represented by the voltages O1 and O2 are both H, the determination circuit 60 outputs the determination signal JD indicating that there is an abnormality in the output current output by the output-stage transistor 12 of the output amplifier 10. In the following, the operation of the load driving circuit 100A shown in FIGS. 4A and 4B will be further described in detail. It is noted that the load 90 and the output amplifier 10 are the same as those shown in FIG. 2A, and the detailed description of the operation thereof will be omitted. The mirror current generation parts 41 and 42 are provided between the AVDD and AVSS power terminals, like the output-stage transistors 11 and 12. Meanwhile, the main configuration of the detection circuit 40A, except for the mirror current generation parts 41 and 42, can be provided between power terminals different from those of the output-stage transistors 11 and 12. In FIG. 4A, the current folding part 30, the coupling circuit 50, and the determination circuit 60 are provided between the VDD power terminal receiving the power potential VDD and the VSS power terminal receiving the ground potential VSS. In the transistor 13 forming the mirror current generation part 41, the gate, like the gate of the output-stage transistor 11, receives the signal PG output from the differential stage 15, and the transistor outputs a source-type mirror current Ia corresponding to the current output from the output-stage transistor 11 from its drain to the node n3. In the transistor 14 forming the mirror current generation part 42, the gate, like the gate of the output-stage transistor 12, receives the signal NG output from the differential stage 15, and the transistor draws a sink-type mirror current Ib corresponding to the current flowing in the output-stage transistor 12 from the node n4 to the AVSS power terminal via its drain. It is preferable that the mirror ratio of the mirror current pair (Ia, Ib) with respect to the currents of the output-stage transistors 11 and 12 is set to 1 or less. Accordingly, the current consumption of the detection circuit 40A can be suppressed. Specifically, the channel widths of the transistors 13 and 14 are set to be small with respect to the channel widths of the output-stage transistors 11 and 12. Here, in the case where the detection circuit 40A is active, the source-type current Ia generated by the mirror current generation part 41 is supplied to the drain of the transistor 32 via the node n3 and the switch 22, and is mirrored to the currents I3 and I4 of the transistors 53 and 54 of the coupling circuit 50. The transistors 32, 53, and 54, whose sources and gates are respectively connected in common, form a current mirror. That is, the transistor 32 forms a current folding part that folds the source-type current Ia flowing in the mirror current generation part 41 and mirrors the current Ia to the sink-type currents I3 and I4.
The sink-type current Ib generated by the mirror current generation part 42 flows in the transistor 31 via the node n4 and the switch 21, and is mirrored to the currents I1 and I2 of the transistors 51 and 52 of the coupling circuit 50. The transistors 31, 51, and 52, whose sources and gates are respectively connected in common, form a current mirror. That is, the transistor 31 forms a current folding part that folds the sink-type current Ib flowing in the mirror current generation part 42 and mirrors the current Ib to the source-type currents I1 and I2. The coupling circuit 50, like the transistors 31 and 32 forming the current folding part, is formed by the four transistors 51, 52, 53, and 54 between the VDD power terminal and the VSS power terminal. In the coupling circuit 50 of FIG. 4A, like that of FIG. 2A, the node n7 to which the drains of the transistors 51 and 53 are commonly connected is set as the first output end of the coupling circuit 50, and outputs the voltage O1. In addition, the node n8 to which the drains of the transistors 52 and 54 are commonly connected is set as the second output end of the coupling circuit 50, and outputs the voltage O2. In addition, regarding the setting of the magnitude of each of the currents I1, I2, I3, and I4 in the reference state in the coupling circuit 50, the size of each of the transistors 51 to 54 is determined, for example, so that I1 < I3 and I2 > I4 hold among the currents I1, I2, I3, and I4. The current output capability of each of the transistors 51 to 54 is set in correspondence with the size of each transistor. Specifically, the channel widths of the transistors 51 and 54 are respectively set as Wp and Wn, the channel width of the transistor 53 is set as Wn+, which is greater than Wn, and the channel width of the transistor 52 is set as Wp+, which is greater than Wp. That is, except for the power potential VDD and the ground potential VSS that are supplied, the coupling circuit 50 shown in FIG. 4A has the same configuration as the coupling circuit 50 of FIG. 2A. However, differing from the coupling circuit 50 shown in FIG. 2A, the source-type currents I1 and I2 generated by the coupling circuit 50 of FIG. 4A are generated as the mirror currents with respect to the current of the output-stage transistor 12, and the sink-type currents I3 and I4 are generated as the mirror currents with respect to the output-stage transistor 11. In the configuration shown in FIG. 4A, the determination circuit 60 is also provided between the VDD power terminal and the VSS power terminal. The determination circuit 60, like that of FIG. 2A, receives the two voltages (O1, O2) output from the coupling circuit 50, and, based on the logic values of the voltages (O1, O2), determines whether the output current of the output amplifier 10 has changed from the reference state and outputs the determination signal JD indicating the determination result. However, compared with the determination circuit 60 of FIG. 2A, the corresponding relationship between the place where an abnormal current occurs (the output-stage transistor 11 or 12) and the states of the logic values for determining abnormality (State 2 and State 3 of FIG. 4B) is reversed in the determination circuit 60 shown in FIG. 4A. In this way, in the configuration shown in FIG. 4A, the main configuration part of the detection circuit 40A, except for the mirror current generation parts 41 and 42, can be realized by using a power potential range (VDD to VSS) smaller than the power potential range (AVDD to AVSS) of the detection circuit 40 shown in FIG. 2A.
For example, when used in a liquid crystal display apparatus, 18 V and 0 V are respectively supplied as the power potential AVDD and the ground potential AVSS for load driving, and 1.8 V and 0 V are supplied as the power potential VDD and the ground potential VSS for the logic circuit. Accordingly, the transistors 31, 32, and 51 to 54 and the switches 23 and 24 can be realized by using low-breakdown-voltage elements, the same as the logic circuit. Therefore, it is possible to reduce the power consumption and the occupied area.

Embodiment 3

FIG. 5 is a block diagram illustrating a configuration of a load driving circuit 100B according to a third embodiment of the invention. The load driving circuit 100B shown in FIG. 5 is formed by a semiconductor IC chip that is a semiconductor device, and has a configuration including a detection circuit 40B shared with respect to multiple output amplifiers driving multiple loads (data line loads). Nevertheless, while not shown in FIG. 5, the mirror current generation parts 41 and 42 included in the detection circuit 40B are provided, in the connection configuration shown in FIG. 4A, for each output amplifier. Specifically, as shown in FIG. 5, the load driving circuit 100B includes output amplifiers 10_1, 10_2, 10_3, . . . , 10_k driving multiple loads (data line loads) 90_1, 90_2, 90_3, . . . , 90_k (k being an integer of 2 or more) via output terminals P1, P2, P3, . . . , Pk. It is noted that the configuration of each of the output amplifiers 10_1 to 10_k is the same as the configuration of the output amplifier 10 shown in FIG. 4A. In addition, in the detection circuit 40B, an active/inactive switching circuit 20B is used in place of the active/inactive switching circuit 20. The active/inactive switching circuit 20B includes the switches 23 and 24 like the active/inactive switching circuit 20 shown in FIG. 4A. However, in the active/inactive switching circuit 20B, in place of the switches 21 and 22 shown in FIG. 4A, selection switches 21_1 to 21_k and 22_1 to 22_k are used. In the following, the output amplifier 10_1 shown in FIG. 5, as the representative of the respective output amplifiers 10, and the circuits relating to the output amplifier 10_1 are described. The output amplifier 10_1, like the output amplifier 10 shown in FIG. 4A, includes the differential stage 15 and the output-stage transistors 11 and 12. In addition, the mirror current generation parts 41 and 42 (not shown in FIG. 5) generating the mirror current pair of the currents flowing in the output-stage transistors 11 and 12, and the selection switches 21_1 and 22_1 controlling (selecting) the supply of the mirror current pair generated by the mirror current generation parts 41 and 42 to the nodes n5 and n6, are connected to the output amplifier 10_1. In the load driving circuit 100B, this configuration is provided for each output amplifier. The nodes n5 and n6 are common nodes receiving the mirror current pairs from the mirror current generation parts 41 and 42 connected to each of the output amplifiers 10_1 to 10_k. Here, when the detection circuit 40B is active, at the time when any set of the selection switches among the selection switches (21_1, 22_1), (21_2, 22_2), . . . , (21_k, 22_k) respectively connected to the output amplifiers 10_1, 10_2, . . . , 10_k is turned on, whether the output current of the output amplifier 10 corresponding to the selected switches changes can be detected by using the detection circuit 40B. The sink-type current supplied to the node n5 is converted into the source-type currents I1 and I2 by using the transistors 31, 51, and 52 in the current folding part 30 and the coupling circuit 50 in the detection circuit 40B.
Similarly, the source-type current supplied to the node n6 is converted into the sink-type currents I3 and I4 by using the transistors 32, 53, and 54. The operations and the functions of the coupling circuit 50 and the determination circuit 60 are the same as those in FIGS. 4A and 4B. Each switch of the active/inactive switching circuit 20B is controlled by the control signal CNT, and both the control for activating/deactivating the detection circuit 40B and the selection of the output amplifier for which a change in output current is to be detected are carried out. Specifically, when the detection circuit 40B is instructed to be active according to the control signal CNT, the switches 23 and 24 are turned off, and one set of the selection switches among the selection switches (21_1, 22_1), (21_2, 22_2), . . . , (21_k, 22_k) is controlled to be turned on. By shifting the timing of turning on each selection switch to carry out the selection, it is possible to detect, in order, the state of the output current of each of the output amplifiers 10_1 to 10_k. When the detection circuit 40B is instructed to be inactive, the switches 23 and 24 are both turned on, and the selection switches (21_1, 22_1), (21_2, 22_2), . . . , (21_k, 22_k) are all turned off. As described above, the load driving circuit 100B shown in FIG. 5 includes the output amplifiers 10_1 to 10_k respectively and individually driving the loads 90_1 to 90_k, and includes the detection circuit 40B of one system with respect to the output amplifiers 10_1 to 10_k. In the load driving circuit 100B, by using the active/inactive switching circuit 20B, it is possible to selectively detect the change of the output current of each output amplifier among the output amplifiers. At this time, in the load driving circuit 100B, the current abnormality of each of the output amplifiers can be detected by using the detection circuit 40B of one common system. Therefore, the occupied area can be reduced. In addition, the current folding part 30, the coupling circuit 50, the determination circuit 60, and the switches 23 and 24 of the active/inactive switching circuit 20B of the detection circuit 40B can be formed by using the power potential range (VDD to VSS) of the logic circuit, which is lower than the power potential range (AVDD to AVSS) of each amplifier, so it is possible to further reduce the occupied area. Where necessary, a clamping element in serial connection with each selection switch may also be provided between the nodes n3 and n4 of each output amplifier and the nodes n5 and n6 of the detection circuit 40B. The clamping element is provided so that the potentials of the nodes n5 and n6 of the detection circuit 40B, for example, are clamped within the power potential range for the logic circuit. For example, in the case where the clamping element is provided between the nodes n3 and n4 of each output amplifier and each selection switch, each selection switch can also be formed in the power potential range (VDD to VSS) of the logic circuit.

Embodiment 4

FIG. 6 is a circuit diagram illustrating a configuration of a coupling circuit 50A, which is another specific example of the coupling circuit 50 shown in FIGS. 2A, 4A, and 5. In FIG. 6, a voltage for generating the source-type currents I1 and I2 is supplied to the node n5, and a voltage for generating the sink-type currents I3 and I4 is supplied to the node n6.
It is noted that, in the coupling circuit 50A, the current output capabilities of the respective transistors are set so that, in the reference state of the output amplifier, I1 < I3 and I2 > I4 hold for the respective currents I1, I2, I3, and I4. The coupling circuit 50A includes the Pch transistor 51 outputting the source-type current I1 in a fixed current amount and the Nch transistor 54 outputting the sink-type current I4 in a fixed current amount in the reference state, a circuit 52A outputting the source-type current I2 in the reference state, and a circuit 53A outputting the sink-type current I3 in the reference state. It is possible for the circuit 52A to adjust the current amount of the current I2 based on a control signal CNTA, and it is possible for the circuit 53A to adjust the current amount of the current I3 based on the control signal CNTA. Accordingly, it is possible for a detection circuit including the coupling circuit 50A to adjust, by using the control signal CNTA, the boundary value at which the output current of the output amplifier is judged to have switched from the reference state (output stable state) to the abnormal state, that is, the detection sensitivity. It is noted that the transistors 51 and 54 shown in FIG. 6 are the same as the transistors 51 and 54 of the coupling circuit 50 shown in FIG. 4A. In the following, the circuits 52A and 53A are described. The circuit 52A has a configuration in which multiple sets of Pch transistors and switches connected in series with each other are provided in parallel between the VDD power terminal receiving the power potential VDD for the logic circuit and the node n8. In the multiple Pch transistors 52a_1, 52a_2, . . . , arranged in parallel, each source is supplied with the power potential VDD, each gate is commonly connected to the node n5, and mirror currents corresponding to the voltage of the node n5 are respectively generated. Each of the switches 57_1, 57_2, . . . , arranged in parallel is controlled to be turned on and off based on a current ratio indicated in the control signal CNTA supplied from the outside. At this time, the synthesis current of the transistors connected to the switches controlled to be turned on based on the current ratio, among the switches 57_1, 57_2, . . . , is set as the current I2. That is, by variably setting the ratio of the Pch transistors 52a_1, 52a_2, . . . , controlled to be active or inactive through the ON/OFF control of the switches 57_1, 57_2, . . . , the current amount of the current I2 with respect to the current I4 can be optimally adjusted. The circuit 53A has a configuration in which multiple sets of Nch transistors and switches connected in series with each other are provided in parallel between the VSS power terminal receiving the ground potential VSS for the logic circuit and the node n7. In the multiple Nch transistors 53a_1, 53a_2, . . . , arranged in parallel, each source is supplied with the ground potential VSS, each gate is commonly connected to the node n6, and mirror currents corresponding to the voltage of the node n6 are respectively generated. Each of the switches 55_1, 55_2, . . . , arranged in parallel is controlled to be turned on and off based on a current ratio indicated in the control signal CNTA supplied from the outside. At this time, the synthesis current of the transistors connected to the switches controlled to be turned on based on the current ratio, among the switches 55_1, 55_2, . . . , is set as the current I3. That is, by variably setting the ratio of the Nch transistors 53a_1, 53a_2, . . . , controlled to be active or inactive through the ON/OFF control of the switches 55_1, 55_2, . . . , the current amount of the current I3 with respect to the current I1 can be optimally adjusted.
In the following, a specific example of setting the current output capability of each transistor shown in FIG. 6 is described. For example, assume that the current output capability of a Pch transistor which has the channel width Wp and whose gate is connected to the node n5 and the current output capability of an Nch transistor which has the channel width Wn and whose gate is connected to the node n6 are equal. The channel width of the Pch transistor 51 generating the source-type current I1 of the coupling circuit 50A is set as Wp, and the channel width of the Nch transistor 54 generating the sink-type current I4 is set as Wn. The circuit 52A generating the source-type current I2 is controlled by the control signal CNTA so that a synthesis current equivalent to three Pch transistors of the channel width Wp is generated. The circuit 53A generating the sink-type current I3 is controlled by the control signal CNTA so that a synthesis current equivalent to three Nch transistors of the channel width Wn is generated. Accordingly, the ratios among the current amounts of the respective currents I1, I2, I3, and I4 in the reference state can be set as follows: I1:I3 = 1:3 and I2:I4 = 3:1. Here, in the case where the difference between the ratio of the current amount of the current I3 with respect to the current I1 and the ratio of the current amount of the current I2 with respect to the current I4 is set to be large, the boundary value for judging the output current of the output amplifier 10 to have switched to the abnormal state is increased. In addition, in the case where the difference between the ratios of the current amounts is set to be small, that boundary value is decreased. By adjusting the ratios of the current amounts, it is possible to optimally adjust the detection sensitivity. FIG. 7 is a circuit diagram illustrating a configuration of a coupling circuit 50B as a modified example of the coupling circuit 50A shown in FIG. 6. In the coupling circuit 50B, like the configuration shown in FIG. 6, the voltage for generating the source-type currents I1 and I2 is supplied to the node n5, and the voltage for generating the sink-type currents I3 and I4 is supplied to the node n6. In addition, in the coupling circuit 50B, the current output capabilities of the respective transistors are set so that, in the reference state of the output amplifier, I1 < I3 and I2 > I4 hold for the respective currents I1, I2, I3, and I4. The coupling circuit 50B includes the Pch transistor 51 in which the source-type current I1 is set to a fixed value in the reference state, the Nch transistor 54 in which the sink-type current I4 is set to a fixed value in the reference state, and circuits 52B and 53B capable of individually and variably adjusting the current amounts of the source-type current I2 and the sink-type current I3, respectively, in the reference state. It is noted that the transistors 51 and 54 shown in FIG. 7 are the same as the transistors 51 and 54 of the coupling circuit 50 shown in FIG. 4A. Here, the configurations of the circuits 52B and 53B are described in the following.
The circuit 52B has a configuration in which the Pch transistor 52 and a variable current source 58 are provided in parallel between the VDD power terminal receiving the power potential VDD for the logic circuit and the node n8. In the transistor 52, the source is supplied with the power potential VDD, the gate is connected to the node n5, and a mirror current corresponding to the voltage of the node n5 is generated. In the variable current source 58, the current amount is controlled based on a current ratio indicated in a control signal CNTB supplied from the outside, and a current having that current amount flows between the VDD power terminal and the node n8. Accordingly, the synthesis current of the transistor 52 and the variable current source 58 is set as the current amount of the current I2. That is, through the current control of the variable current source 58, the difference between the current amount of the current I4 and the current amount of the current I2 can be set optimally. The circuit 53B has a configuration in which the Nch transistor 53 and a variable current source 59 are provided in parallel between the VSS power terminal receiving the ground potential VSS for the logic circuit and the node n7. In the transistor 53, the source is supplied with the ground potential VSS, the gate is connected to the node n6, and a mirror current corresponding to the voltage of the node n6 is generated. In the variable current source 59, the current amount is controlled based on a current ratio indicated in the control signal CNTB, and a current having that current amount flows between the node n7 and the VSS power terminal. Accordingly, the synthesis current of the transistor 53 and the variable current source 59 is set as the current amount of the current I3. That is, through the current control of the variable current source 59, the difference between the current amount of the current I1 and the current amount of the current I3 can be set optimally. In the following, a specific example of setting the current output capability in the configuration shown in FIG. 7 is described. For example, assume that the current output capability of a Pch transistor which has the channel width Wp and whose gate is connected to the node n5 and the current output capability of an Nch transistor which has the channel width Wn and whose gate is connected to the node n6 are equal. The channel width of the transistor 51 generating the source-type current I1 of the coupling circuit 50B is set as Wp, and the channel width of the transistor 54 generating the sink-type current I4 is set as Wn. In the circuit 52B generating the source-type current I2, the channel width of the transistor 52 is set as Wp. In the circuit 53B generating the sink-type current I3, the channel width of the transistor 53 is set as Wn. Accordingly, the difference between the current amounts of the currents I1 and I3 in the reference state is determined by the current amount of the variable current source 59, and the difference between the current amounts of the currents I2 and I4 is determined by the current amount of the variable current source 58. In FIG. 7 as well, by adjusting the ratio of the current amount of the current I3 with respect to the current I1 and the ratio of the current amount of the current I2 with respect to the current I4, it is possible to optimally adjust the detection sensitivity, like FIG. 6.
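A small numeric sketch may make the sensitivity adjustment concrete. It treats the synthesis current of circuit 53A as the sum of the unit currents of the legs whose series switches are enabled, so that enabling more legs raises the I3:I1 ratio and with it the detection boundary; the leg counts and the unit current are illustrative assumptions, not values fixed by the embodiments.

def synthesized_current(unit_current, legs_on):
    # Circuit-53A style synthesis: each enabled transistor/switch leg mirrors
    # one unit current; the synthesis current is the sum over the enabled legs.
    return unit_current * legs_on

i1 = 1.0  # fixed source-type current of transistor 51 (illustrative unit)
for legs_on in (1, 3, 5):
    i3 = synthesized_current(i1, legs_on)
    # O1 flips (I1 > I3) only once the transistor-11 current grows past this
    # ratio, so more enabled legs mean a larger boundary / lower sensitivity.
    print(f"legs on = {legs_on}: I1:I3 = 1:{i3 / i1:.0f}")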
It is noted that, in each coupling circuit of FIGS. 6 and 7, it is preferable that the difference between the current amounts of the currents I1 and I3 in the reference state and the difference between the current amounts of the currents I2 and I4 in the reference state are set so that slight changes due to the manufacturing variation of each transistor, the ambient temperature within a predetermined range, etc., fall within the range of the reference state. In addition, in each coupling circuit in FIGS. 6 and 7, although the transistors 51 and 54 respectively generating the currents I1 and I4 are described as individual transistors for ease of description, each may also be configured by multiple transistors as long as the ratios among the current amounts of the currents I1, I2, I3, and I4 in the reference state can be set appropriately. In addition, the sizes of the respective transistors are not limited to those shown in FIGS. 6 and 7.

Embodiment 5

In the following, a specific example in which the load driving circuit 100B shown in FIG. 5 is applied to a display apparatus is described. FIG. 8 is a block diagram illustrating a configuration of a display apparatus provided with a data driver 120_1 including the load driving circuit 100B. The display apparatus shown in FIG. 8 includes the display panel 150 and the controller 130. The display panel 150 includes the gate lines GL1 to GLr (r being an integer of 2 or more) arranged in the horizontal direction on an insulating substrate, the data lines DL1 to DLk (k being an integer of 2 or more) arranged in the vertical direction, and the pixel parts 154 arranged in a matrix at the intersection parts between the respective gate lines and data lines. On the display panel 150, the gate driver 110 driving each gate line and the data driver 120_1 driving each data line are provided, and the controller 130 adjusts the output timings of the gate driver 110 and the data driver 120_1. The gate driver 110 is supplied with a signal group GS from the controller 130, and outputs a scan signal supplied to each gate line based on the signal group GS. The data driver 120_1 is supplied with the video data signal VDS, including a clock CLK, various control signals, video data, etc., from the controller 130, and, based on the video data signal VDS, outputs gradation signals supplied to the data lines DL1 to DLk. It is noted that the data driver 120_1 is usually formed by using a silicon LSI, and is mounted on an end part of the display panel 150 by using chip-on-glass (COG) or chip-on-film (COF) mounting. In the case where the data driver 120_1 is formed by multiple individual ICs, the video data signal VDS, including various control signals relating to the data lines that each IC is responsible for driving, is supplied to each data driver IC from the controller 130. In the case where the data driver 120_1 is a single IC or is formed by a limited number of ICs, the controller 130 may be built into the data driver 120_1. In such a case, the signal group supplied from the outside to the controller 130 is directly supplied to the data driver 120_1. FIG. 9 is a block diagram illustrating an example of the internal configuration of the data driver 120_1. As shown in FIG. 9, the data driver 120_1 includes a control core part 80, a timing control part 81, a data latch 82, a level shifter 83, a gradation voltage generation part 84, a decoder 85, a multiplexer 86, an output amplification part 87, and the detection circuit 40B.
In addition, the data driver 120_1 receives the power potential for the logic circuit and the power potential for driving the data lines (loads) from the outside. The power potential for the logic circuit is supplied to the control core part 80, the timing control part 81, the data latch 82, and the detection circuit 40B, and the power potential for load driving is supplied to the level shifter 83, the gradation voltage generation part 84, the decoder 85, the multiplexer 86, and the output amplification part 87. The control core part 80 receives the video data signal VDS in serial arrangement supplied from the outside. The video data signal VDS is a serialized signal including a clock CLK, various signal groups, and setting information. The control core part 80 applies a serial/parallel conversion process to the video data signal VDS, and extracts a series of video data PD, the clock CLK, the various signal groups (a horizontal synchronization signal, a vertical synchronization signal, and various control signals), and the setting information from the video data signal VDS. The control core part 80 generates a reference timing signal LOAD and a polarity reversing signal POL based on the horizontal synchronization signal and the vertical synchronization signal. The control core part 80 supplies the clock CLK, the reference timing signal LOAD, and setting information SEI to the timing control part 81. In addition, the control core part 80 supplies the series of video data PD, the setting information SEI, and the polarity reversing signal POL to the data latch 82. In addition, the control core part 80 supplies gamma setting information STD to the gradation voltage generation part 84, supplies the polarity reversing signal POL to the multiplexer 86, and supplies the control signal CNT to the detection circuit 40B. In addition, the control signal CNT may also include the control signal CNTA or CNTB corresponding to the coupling circuit of FIG. 6 or 7. Based on the reference timing signal LOAD, the clock CLK, and the setting information SEI, the timing control part 81 generates a latch output timing signal group controlling the timing of the gradation signal output from each of the output terminals P1 to Pk of the data driver 120_1, and supplies the latch output timing signal group to the data latch 82. For the outputs respectively corresponding to the k pixels per horizontal scan line, the data latch 82 imports k pieces of video data PD from the series of video data PD in accordance with the latch output timing signal group, and supplies the k pieces of video data PD, as video data Q1 to Qk, to the level shifter 83. The level shifter 83 includes k level shifting circuits individually performing level shifting on the amplitudes of the respective signal levels of the video data Q1 to Qk. The level shifting circuits generate digital video data J1 to Jk from the video data Q1 to Qk. In the digital video data J1 to Jk, the respective signal level amplitudes are level-shifted to greater amplitudes. The gradation voltage generation part 84, based on the gamma setting information STD and for the respective primary colors (red, green, blue) of the pixels, generates multiple positive polarity gradation voltage groups POS and negative polarity gradation voltage groups NEG having voltage values in accordance with the gamma conversion characteristics corresponding to the primary colors. The decoder part 85 includes k decoders individually converting the respective digital video data J1 to Jk into analog voltage values.
These k decoders use the positive polarity gradation voltage groups POS or the negative polarity gradation voltage groups NEG, convert the respective digital video data J1 to Jk into positive or negative polarity analog gradation voltages corresponding to the luminance represented by each video data piece, and supply the k analog gradation signals that are obtained to the multiplexer 86. The multiplexer 86 supplies, to the output amplification part 87, k analog gradation signals in which the arrangement within the series of k analog gradation signals, such as the exchange of the even-numbered ones and the odd-numbered ones, is changed based on the polarity reversing signal POL. As shown in FIG. 5, the output amplification part 87 includes the output amplifiers 10_1 to 10_k, each having the circuit configuration (including the mirror current generation parts 41 and 42) shown in FIG. 4A. The output amplifiers 10_1 to 10_k respectively output k gradation signals, in which the k analog gradation signals supplied from the multiplexer 86 are individually amplified, to the data lines DL1 to DLk via the output terminals P1 to Pk. As shown in FIG. 5, the detection circuit 40B includes the active/inactive switching circuit 20B as well as the current folding part 30 and the coupling circuit 50 of one system, with the internal configurations shown in FIG. 4A. The mirror current generation parts 41 and 42 mirroring the currents of the output-stage transistors 11 and 12 are connected to the respective output amplifiers 10_1 to 10_k; the mirror current pair selected by the active/inactive switching circuit 20B from the mirror current pairs generated by the respective mirror current generation parts is transmitted to the coupling circuit 50 of the detection circuit 40B; and whether the output current has changed with respect to the reference state of the output amplifier is detected to determine whether the current is normal or abnormal. The active/inactive control of the detection circuit 40B and the selection control of the active/inactive switching circuit 20B are controlled through the control signal CNT from the control core part 80. In addition, the determination signal JD of the detection circuit is supplied to the control core part 80. It is noted that, at the time when the output current of an output amplifier is determined as abnormal by the detection circuit 40B, in the case of transmitting an abnormality detection notification to the user or stopping the display apparatus, for example, the control core part 80 may output a signal FB indicating such a fact to an external controller based on the determination signal JD. It is noted that, if the above configuration is applied to the data driver of an organic EL display apparatus, the polarity reversing signal POL and the multiplexer 86 shown in FIG. 9 are omitted. FIG. 10 is a timing chart illustrating an example of the timing for performing abnormal current detection of a data line in the data driver 120_1. In FIG. 10, the timing of one frame period from T0 to T1, corresponding to the re-writing period of one frame, is shown. The one-frame period is defined by the vertical synchronization signal (Vsyn). In a period from T0 to t0 immediately after the one-frame period starts, there is a blanking period for reflecting various setting signals. In the video data active period from t0 to tk after the blanking period, the analog gradation signals corresponding to the video data are output to the data lines in units of one horizontal period (1H).
In addition, according to the scan signal output from the gate driver 110, in the video data active period from t0 to tk, the gate lines are selected in order in association with the timing (Hsyn) of each horizontal period, and the gate lines are set in a non-selected state in periods other than the data active period. The detection circuit 40B mounted in the data driver 120_1 shown in FIG. 8 is activated and performs a detection operation in an abnormality detection period from ta to tb during the blanking period from T0 to t0, for example. In the abnormality detection period from ta to tb, control may be exerted so that the output amplifiers connected to the data lines of the detection target are selected in order by the active/inactive switching circuit 20B. Alternatively, the output amplifiers selected in the abnormality detection period of one frame period may be limited in number, different output amplifiers may be selected in order for each frame period, and whether there is an abnormal current may be detected for all the data lines over multiple frame periods. According to the data driver 120_1 shown in FIG. 8, it is possible to provide the data driver with a function of detecting malfunctioning of a display panel by detecting an abnormal current on a data line. | 70,195
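The frame-by-frame rotation described above can be sketched as a simple schedule: only a limited number of output amplifiers are checked in each blanking period, and the starting channel advances every frame so that all k data lines are covered over several frames. The function and the per-frame count are hypothetical illustrations, not values taken from the embodiment.

def channels_for_frame(frame, k, per_frame):
    # Rotate the detection window by per_frame channels each frame period.
    start = (frame * per_frame) % k
    return [(start + i) % k for i in range(per_frame)]

k, per_frame = 8, 3  # e.g. 8 data lines, 3 detections per blanking period
for frame in range(3):
    picked = channels_for_frame(frame, k, per_frame)
    print(f"frame {frame}: run detection circuit 40B on amplifiers {picked}")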
11862059 | DETAILED DESCRIPTION

Hereinafter, example embodiments will be described clearly and in detail using the drawings, to the extent that those skilled in the art may easily implement the present disclosure. FIG. 1 is a block diagram illustrating a display device 100 according to example embodiments. Referring to FIG. 1, the display device 100 may include a display panel 110, a gate driver 120, a source driver 130, a gamma voltage generator 140, a fast gamma settling circuit (GFS) 145, and a timing controller 150. The display panel 110 may include a plurality of pixels PXs arranged in a matrix form. In an embodiment, the display panel 110 may be implemented to display an image in units of frames. For example, the display panel 110 may be implemented as one of a liquid crystal display (LCD), a light emitting diode (LED) display, an organic LED (OLED) display, an active-matrix OLED (AMOLED) display, an electrochromic display (ECD), a digital mirror device (DMD), an actuated mirror device (AMD), a grating light valve (GLV), a plasma display panel (PDP), an electro-luminescent display (ELD), and a vacuum fluorescent display (VFD), and may also be implemented as another type of flat panel display or flexible display. As shown in FIG. 1, the display panel 110 may include gate lines GL1 to GLm (m is an integer of 2 or greater) arranged in a row direction, source lines SL1 to SLn (n is an integer of 2 or greater) arranged in a column direction, and pixels PX formed at intersections of the gate lines GL1 to GLm and the source lines SL1 to SLn. In an embodiment, some pixels that are connected to the same gate line, have different colors, and are adjacent to each other may be configured as a unit pixel. Here, each of the pixels constituting the unit pixel may be referred to as a sub-pixel. The gate driver 120 is implemented to select the gate lines GL1 to GLm by supplying a scan clock (or a gate-ON signal) to the gate lines GL1 to GLm in response to a first control signal CTRL1 provided from the timing controller 150. In an embodiment, one of the gate lines GL1 to GLm may be selected according to the scan clock output from the gate driver 120. A display operation may be performed by applying pixel signals (or image signals) corresponding to the respective pixels to the pixels of the horizontal line corresponding to the selected gate line through the source lines SL1 to SLn. A source line may also be referred to as a source channel. In an embodiment, the gate lines GL1 to GLm may be selected sequentially or non-sequentially. The source driver 130 may be implemented to convert image data into pixel signals, which are analog signals (e.g., grayscale voltages or currents corresponding to each piece of pixel data), in response to a second control signal CTRL2, and to provide the pixel signals to the source lines SL1 to SLn to drive the source lines SL1 to SLn. For example, the source driver 130 may charge the source lines SL1 to SLn based on the pixel signals. The source driver 130 may provide the pixel signals of one line to the source lines SL1 to SLn during one horizontal driving period. Thereafter, when the scan clock is provided, the source driver 130 may provide the pixel signals to the pixels of the horizontal line corresponding to the selected gate line through the source lines SL1 to SLn. The source driver 130 may include a plurality of amplifiers. In an embodiment, each of the plurality of amplifiers may provide a pixel signal to at least one corresponding source line. Here, such an amplifier may be referred to as a channel amplifier or a source amplifier.
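A line-at-a-time drive of the kind described above can be sketched as a nested loop: for each selected gate line, the source driver converts that row's pixel data into analog voltages and places them on all source lines for one horizontal period. The generator function and the linear code-to-voltage mapping below are illustrative assumptions, not the device's actual decoding.

def drive_frame(frame_pixels, to_voltage):
    # frame_pixels: m rows (gate lines GL1..GLm) x n columns (source lines
    # SL1..SLn); to_voltage: mapping from pixel data to a grayscale voltage.
    for row, line_data in enumerate(frame_pixels):
        voltages = [to_voltage(px) for px in line_data]
        yield row, voltages  # one horizontal driving period

# Two gate lines, two source lines, 8-bit data mapped linearly onto 0..5 V.
for gl, v in drive_frame([[0, 128], [255, 64]], lambda px: px / 255 * 5.0):
    print(f"GL{gl + 1} selected, source-line voltages = {v}")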
In an embodiment, some of the plurality of amplifiers may be turned off and others may be turned on according to the pixel data. Here, some of the turned-on amplifiers may each drive two source lines. The timing controller 150 may be implemented to control an overall operation of the display device 100. For example, the timing controller 150 may receive image data RGB and timing signals (e.g., a horizontal synchronization signal HSYNC, a vertical synchronization signal VSYNC, a clock signal DCLK, and a data enable signal DE) from an external device (e.g., a host device) and generate the first control signal CTRL1 and the second control signal CTRL2 for controlling the gate driver 120 and the source driver 130, respectively, based on the received image data RGB and timing signals. The gamma voltage generator 140 may be implemented to generate and output gamma voltages corresponding to the image data RGB. In an embodiment, the gamma voltage generator 140 may generate the gamma voltages in a voltage division manner. In an embodiment, the gamma voltage generator 140 may output the gamma voltages to a plurality of corresponding gamma lines GML. The fast gamma settling circuit 145 may be implemented to quickly settle the gamma voltage corresponding to each of the gamma lines GML. In addition, the timing controller 150 may convert a format of the image data RGB received from the outside to match the interface specification with the source driver 130 and transmit the converted image data to the source driver 130. For example, the converted image data may include packet data. The display device 100 may further include an interface circuit. The interface circuit may be implemented to communicate with an external device, e.g., a host processor, and receive the image data RGB and the timing signals from the external device. In an embodiment, the interface circuit may include one of an RGB interface, a CPU interface, a serial interface, a mobile display digital interface (MDDI), an inter-integrated circuit (I2C) interface, a serial peripheral interface (SPI), a micro controller unit (MCU) interface, a mobile industry processor interface (MIPI), an embedded display port (eDP) interface, a D-subminiature (D-sub) interface, an optical interface, or a high definition multimedia interface (HDMI). The interface circuit may include various other serial or parallel interfaces in addition. In FIG. 1, the gate driver 120, the source driver 130, the gamma voltage generator 140, the fast gamma settling circuit 145, and the timing controller 150 are illustrated as different functional blocks. In an embodiment, the respective components may be implemented as different semiconductor chips. In another embodiment, at least two of the gate driver 120, the source driver 130, the gamma voltage generator 140, the fast gamma settling circuit 145, and the timing controller 150 may be implemented as one semiconductor chip. For example, the source driver 130 and the timing controller 150 may be integrated into one semiconductor chip. Also, some components may be integrated on the display panel 110. For example, the gate driver 120 may be integrated on the display panel 110. FIG. 2A illustrates the source driver 130 of FIG. 1 configured as side-by-side source drivers 130A and 130B. The gamma voltage generator 140 provides gamma voltages to the source drivers 130A and 130B. The source drivers 130A and 130B are fed DATA and CTRL2 from the TCON (timing controller), similar to FIG. 1. The DATA is latched in buffers referred to as "Data Latch" in FIG. 2A. The output of each Data Latch feeds a Decoder. Each Decoder feeds an amplifier that places a gamma voltage onto a source line.
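The per-channel path of FIG. 2A (Data Latch, then Decoder, then amplifier) can be approximated by a lookup followed by a unity-gain buffer, as in the hedged sketch below; the 3-bit tap ladder values are invented for illustration and do not correspond to any actual gamma curve.

GAMMA_TAPS = [0.0, 1.2, 2.3, 3.1, 3.8, 4.4, 4.8, 5.0]  # hypothetical 3-bit ladder

def channel_output(latched_code):
    # Decoder: the latched code selects one of the generator's gamma voltages.
    v_gamma = GAMMA_TAPS[latched_code]
    # Amplifier: modeled as an ideal unity-gain buffer onto the source line.
    return v_gamma

print(channel_output(0b101))  # code 5 -> 4.4 V driven onto its source line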
Four example source lines are shown in FIG. 2A, namely SL1, SLk, SL(k+1), and SLn. FIG. 2B illustrates the settling time of a gamma line of a general display device. In general, the 1-line pixel charging time of the panel continuously decreases for high-frequency and high-resolution display driving. In addition, a larger number of source channels is required in a DDI to support high resolution. The increase in the number of source channels increases the gamma load, thereby slowing the settling time of the gamma line. This may deteriorate the settling time of the source line, which may cause problems in fast operation. The gamma line of the source channel structurally farthest from the gamma voltage generator is the slowest point in settling due to an RC delay. As a fast driving technique, a fast slew technique for improving the output slewing characteristics of the source amplifier AMP may be used. Even if such a fast slew technique is used, as shown in FIG. 2B, if gamma settling is slow, the gamma settling time becomes a bottleneck of the source settling time due to the input delay of the source amplifier AMP. Therefore, it is necessary to improve the gamma settling time in order to improve the characteristics of the IC output settling time in a situation where support for high frequency and high resolution is continuously required. FIG. 3 is a view illustrating a concept of a fast gamma settling circuit GFS according to example embodiments. Referring to FIG. 3, the fast gamma settling circuit GFS may include switches SW1 and SW2 connected in series between gamma tab point lines TAPk and TAPk+1. A tab voltage is a boost voltage used to speed the arrival of a gamma line at its proper value. Here, the gamma tab point lines TAPk and TAPk+1 may correspond to gamma lines GMLk and GMLk+1. The first switch SW1 may be connected between the gamma tab point line TAPk and a local gamma tab line LTAPk. The second switch SW2 may be connected between the gamma tab point line TAPk+1 and the local gamma tab line LTAPk. In an embodiment, the first and second switches SW1 and SW2 may be turned on in response to a GFS enable signal EN. That is, in response to the GFS enable signal EN, the fast gamma settling circuit GFS may generate a local tab. Here, the generated local tab may create a high-speed AC path. A local tab point line may also be referred to herein as a local boost line. The switches SW1 and SW2 may receive voltages from the gamma tab point lines TAPk and TAPk+1, respectively. At the timing at which each switch is turned on, a local tab voltage may be generated through resistance division using the resistance components of the switches. Also, the k-th tab point line TAPk and the (k+1)-th tab point line TAPk+1 may be resistor-divided in the gamma voltage generator. The resistor-divided local tab point line LTAPk may generate a low-speed DC path. As mentioned above, a local tab point line may also be referred to herein as a local boost line. FIG. 4 is a view illustrating a voltage settling path using a fast gamma settling circuit GFS according to example embodiments. Referring to FIG. 4, the gamma line paths when the settling path and the settling function are used are illustrated. In existing techniques, a unidirectional settling path is formed in the gamma block. In the case of using the settling function, a high-speed AC path may be generated by dividing the value of the DC voltage stored in the parasitic capacitance CLINE of the line using the low-impedance resistance Rt of the switch.
That is, in addition to the existing settling DC path, an additional settling-related AC path may be generated. The settling time characteristics of the gamma line may be improved through the DC path and the AC path. Here, the reason for using a switch rather than a resistor to generate the local tab voltage is to reduce, through timing control, the static current that the use of a resistor would entail. When such a timing control operation is performed, it is possible to prevent an offset from occurring due to mismatch of the switch resistances in the process of voltage division using the switches. Regarding the ON/OFF timing control of the switches, the switches may be set, by a register, to be turned on at the point where gamma fluctuation according to data updating occurs. After the ON-timing operation, an OFF-timing operation may be performed. Here, since the switch ON/OFF operation is performed at the time of a data change, the additional current consumption may be minimized relative to the gamma settling and panel charging currents. In general, when the amount of data change is large, the operation coincides with the dynamic current generated at the time of driving the panel, so the additional current consumption of the GFS may be minimized. However, when data such as a monochromatic pattern is maintained without a change, a phenomenon in which the consumption current increases due to the additional dynamic current caused by the switch ON/OFF operation may occur. Thus, a data comparison method may be applied to prevent an increase in current consumption according to the operation of the GFS in a pattern with a small data change. FIGS. 5A, 5B, and 5C are views illustrating embodiments to which the data comparison method of a fast gamma settling circuit is applied. FIG. 5A is a symbolic representation of a screen display in which same-intensity values span the entire screen and change at specific line points. As shown in FIG. 5A, when the amount of data change of the channels is large (red line point), a significant fluctuation of the gamma occurs. Accordingly, the source output is also slowed by the input delay of the source amplifier. Considering this, the function of the GFS may be activated when the amount of data change is large. FIG. 5A illustrates pixel intensity values "black," "68 gray," "black," and "128 gray." In addition, as shown in FIG. 5B, when there are many data-changing channels (red line point; "128 Gray" does not span all channels), gamma fluctuation increases due to an increase in the load required for settling, compared with when there are few data-changing channels (blue line point; "128 Gray" spans almost all channels). Accordingly, the delay of the gamma settling time is increased. In consideration of the characteristics of FIGS. 5A and 5B, when the amount of data change is large as shown in FIG. 5C, a change in the data of the most-significant 2 bits of each channel may be detected. Accordingly, the amount of data change of each channel may be checked.
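The comparison scheme described above reduces to a few lines of logic: take the most-significant 2 bits of each channel, count the channels whose bits differ between the previous line and the current line, and assert the enable only when the count exceeds a register-set threshold. The sketch below is a hedged software model; the 8-bit data width, the channel count, and the threshold of 720 (taken from the example given later) are assumptions.

def gfs_enable(prev_line, cur_line, threshold=720, bits=8):
    # Compare only the most-significant 2 bits of each channel's data.
    shift = bits - 2
    changed = sum((p >> shift) != (c >> shift)
                  for p, c in zip(prev_line, cur_line))
    return changed > threshold  # COMP_EN goes high above the register value

prev = [0x00] * 1440          # previous line: black on every channel
cur = [0x80] * 1440           # current line: 128-gray, all MSBs change
print(gfs_enable(prev, cur))  # True  -> GFS switches allowed to close
print(gfs_enable(cur, cur))   # False -> monochrome hold, GFS stays off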
Meanwhile, it should be understood that data change detection is not limited to the most-significant 2 bits of each channel.

A data comparison logic 152 may compare previous channel data with current channel data, determine a magnitude (high/low) of the amount of data change according to a comparison result, count, by a counter 153, the number of channels with a large amount of data change, and generate a GFS enable signal COMP_EN when a count value is greater than a reference value.

FIG. 6A is a view illustrating a fast gamma settling circuit 145a of a data comparison method according to example embodiments, and FIG. 6B is a timing diagram of the fast gamma settling circuit 145a according to example embodiments. Referring to FIG. 6A, the fast gamma settling circuit 145a may include a first switch SW1, a second switch SW2, and a logic circuit AND. The logic circuit AND may generate a tab division enable signal DIV_ENH by performing a logical operation on the GFS enable signal COMP_EN and a switch signal FS_SW_EN. The first switch SW1 and the second switch SW2 may be turned on in response to the tab division enable signal DIV_ENH. Data may be updated in response to a data update signal DE, and data may be transmitted to each source channel in response to a horizontal synchronization signal HSYNC. When the count value of the changed number of channels exceeds a reference value, the GFS enable signal COMP_EN has a high level as shown in FIG. 6B. When the GFS enable signal COMP_EN has a high level, the switch signal FS_SW_EN, which determines an ON/OFF timing of the switches SW1/SW2, may turn on the switches SW1/SW2 of the GFS during a high-level timing. When the tab division enable signal DIV_ENH has a high level, a functional operation (a gamma fast settling operation) of the GFS may be performed. The fast gamma settling circuit 145a of the data comparison method described above may control the GFS operation according to data patterns, thereby preventing an occurrence of an unnecessary dynamic current.

FIGS. 7A and 7B are views illustrating an operation of data comparison logic according to example embodiments. In an embodiment, the source drivers may be divided into two groups, left and right, to be applied. In an embodiment, whether there is a change between an (N−1)-th line data CH_DATA_PRE and an N-th line data CH_DATA_CUR in the most-significant 2 bits of each source channel may be checked. When the number of channels in which there is a change in data of the most-significant 2 bits is greater than the number (e.g., 720) of channels set in a register, the GFS function may be operated.

Meanwhile, the switch of the fast gamma settling circuit may be implemented as a transmission gate. FIGS. 8A and 8B are views illustrating a fast gamma settling circuit 145b and a timing diagram thereof according to another embodiment. Referring to FIG. 8A, the fast gamma settling circuit 145b includes a first transmission gate TG1, a second transmission gate TG2, a first logic circuit NAND, and a second logic circuit INV. The first transmission gate TG1 may be connected between a first tab point line TAPk and a local tab point line LTAPk in response to a tab division enable signal DIV_ENH and an inverted tab division enable signal DIV_ENHB. The second transmission gate TG2 may be connected between the second tab point line TAPk+1 and the local tab point line LTAPk in response to the tab division enable signal DIV_ENH and the inverted tab division enable signal DIV_ENHB.
The first logic circuit NAND may perform a first operation on a source output enable signal SD_SOUT_EN, a left most-significant bit comparison signal MSB_COMP_EN_L, and a right most-significant bit comparison signal MSB_COMP_EN_R to generate an inverted tab division enable signal DIV_ENHB. The second logic circuit INV may invert the inverted tab division enable signal DIV_ENHB to generate a tab division enable signal DIV_ENH. As shown in FIG. 8B, when both the left most-significant bit comparison signal MSB_COMP_EN_L and the right most-significant bit comparison signal MSB_COMP_EN_R have high levels, the tab division enable signal DIV_ENH has a low level for a predetermined time in response to the data enable signal DE.

The fast gamma settling circuit 145b according to example embodiments may branch a gamma line (at a gamma tab point) in the routing from the gamma block to the source driver. In the fast gamma settling circuit 145b, a switch may be positioned between adjacent tab voltages, thereby generating a ½ voltage between a first voltage of the first tab point line TAPk and a second voltage of the second tab point line TAPk+1 and providing the ½ voltage to a center gamma line.

FIG. 9 is a view comparing simulation waveforms of the related art display device and a display device according to example embodiments. The x-axis of the graph in FIG. 9 is marked with time units of 10u, 11u, 12u and 13u. Referring to FIG. 9, in the case of the worst gamma pattern of the source output waveform, the settling time may be improved by approximately 554 ns (an improvement rate of 22%). Because 144 Hz is 20% higher than 120 Hz, a correspondingly shorter line time is available per row; the 22% improvement of the settling time may therefore have an effect of satisfying, even at 144 Hz, the same characteristics as the source characteristics of the worst settling pattern at 120 Hz. Since the GFS improves the settling speed by directly generating a voltage using a switch, a peak level of a gamma voltage change according to a data change is lower than that of the related art, exhibiting the best characteristics in the initial settling speed.

The fast gamma settling circuit according to example embodiments may be variously disposed inside the DDI. Hereinafter, the fast gamma settling circuit of some embodiments is described as a tab division switch block in the DDI.

FIG. 10 is a view illustrating a position of a tab division switch block of a display device 200 according to example embodiments. Referring to FIG. 10, two source drivers 231-1 and 231-2 may be disposed on the left of the gamma voltage generator 240 and two source drivers 232-1 and 232-2 may be disposed on the right of the gamma voltage generator 240. A tab division switch block TAB_DIV_SW 245 may be disposed between the two source drivers 231-1 and 231-2, and a tab division switch block TAB_DIV_SW 246 may be disposed between the two source drivers 232-1 and 232-2. In an embodiment, the tab division switch block 245 or 246 may include a first switch block R-SW corresponding to a red gamma R_GAMMA, a second switch block G-SW corresponding to a green gamma G_GAMMA, and a third switch block B-SW corresponding to a blue gamma B_GAMMA. In an embodiment, each of the first switch block R-SW, the second switch block G-SW, and the third switch block B-SW may include a plurality of transmission gates connected in series for tab division.

FIG. 11 is a view illustrating a position of a tab division switch block of a display device 300 according to example embodiments.
Referring to FIG. 11, two source drivers 331-1 and 331-2 may be disposed on the left of a gamma voltage generator 340 and two source drivers 332-1 and 332-2 may be disposed on the right of the gamma voltage generator 340. Tab division switch blocks TAB_DIV_SW 345 and 346 may be disposed on the edges of the left and right source drivers 331-1 and 331-2 and 332-1 and 332-2, compared with the arrangement illustrated in FIG. 10.

In FIGS. 10 and 11, tab division switch blocks are disposed in all of the red gamma R_GAMMA, green gamma G_GAMMA, and blue gamma B_GAMMA. However, embodiments are not limited thereto. In some embodiments, the tab division switch block may be disposed in at least one of the red gamma R_GAMMA, green gamma G_GAMMA, and blue gamma B_GAMMA.

FIG. 12 is a view illustrating a position of a tab division switch block of a display device 400 according to example embodiments. Referring to FIG. 12, two source drivers 431-1 and 431-2 may be disposed on the left of the gamma voltage generator 440 and two source drivers 432-1 and 432-2 may be disposed on the right of the gamma voltage generator 440. Tab division switch blocks TAB_DIV_SW 445 and 446 may be disposed only in the green gamma G_GAMMA, compared to the arrangement shown in FIG. 11.

FIG. 13 is a flowchart illustrating a method of operating a display device according to example embodiments. Referring to FIG. 13, the display device may operate as follows. Previous line data CH_DATA_PRE (see FIG. 7A) of each channel provided to the display panel 110 (see FIG. 1) may be compared with current line data CH_DATA_CUR to be provided to the display panel 110 (S110). A local tab may be generated between the gamma lines by enabling the fast gamma settling circuit according to a data pattern or the number of changed channels indicated by the comparison result (S120).

FIG. 14 is a view illustrating an electronic device 2000 according to example embodiments. Referring to FIG. 14, the electronic device (or a mobile device) 2000 may include a processor AP 2100, a display driving circuit DDI 2200, a panel 2300, and a power circuit PMIC 2400. The processor 2100 may be implemented to control an overall operation of the display device. In an embodiment, the processor 2100 may be implemented as an integrated circuit, a system on a chip, or a mobile application processor (AP). The processor 2100 may transmit data to be displayed (e.g., image data, video data, or still image data) to the display driving circuit 2200. In an embodiment, the data may be classified into source data SD units corresponding to horizontal lines (or vertical lines) of the display panel 2300. The display driving circuit 2200 may change the data transmitted from the processor 2100 into a form that may be transmitted to the display panel 2300, and transmit the changed data to the display panel 2300. The source data SD may be supplied in units of pixels. Also, the display driving circuit 2200 may include the fast gamma settling circuit or the tab division switch block described above with reference to FIGS. 1 to 13.

The processor interface may interface signals or data exchanged between the processor 2100 and the display driving circuit 2200. The processor interface may interface source data SD (line data) transmitted from the processor 2100 and transmit the interfaced source data to the display driving circuit 2200. In an embodiment, the processor interface may be a serial interface such as a mobile industry processor interface (MIPI), a mobile display digital interface (MDDI), a display port, or an embedded display port (eDP).
The display panel 2300 may display the source data SD provided by the display driving circuit 2200 using gate signals GS.

The power circuit 2400 may be implemented to manage power of the display device. In an embodiment, the power circuit 2400 may include a power management integrated circuit (PMIC), a charger integrated circuit (IC), or a battery or fuel gauge. Also, the power circuit 2400 may support a wired and/or wireless charging method. The wireless charging method may include, for example, a resonant magnetic coupling method, an inductive coupling method, or an electromagnetic wave method, and may further include an additional circuit for wireless charging, for example, a coil loop, a resonance circuit, or a rectifier. The power circuit 2400 may receive a command from the processor 2100 and supply power to each part of the display device. The power circuit 2400 may supply power to each of the display driving circuit 2200 and the display panel 2300. For example, the power circuit 2400 may provide an external voltage EV to the display driving circuit 2200. Here, the external voltage EV may be processed and used inside the display driving circuit 2200.

The power interface may interface between the power circuit 2400 and the display driving circuit 2200. For example, the power interface may transmit commands that the display driving circuit 2200 transmits to the power circuit 2400. The power interface may exist separately from the processor interface. The display driving circuit 2200 may be directly connected to the power circuit 2400 without going through the processor 2100.

A dual source driver according to example embodiments may be applied to a foldable smartphone. In general, the foldable smartphone may be implemented in various foldable display types such as C-INFOLD, C+1, G, C-OUTFOLD, S, and the like. In general, the foldable smartphone may be classified into an in-fold structure and an out-fold structure according to a folding method.

As set forth above, the display device and the operating method thereof according to example embodiments may settle a gamma voltage more rapidly by performing tab division according to a data pattern. The display device and the operating method thereof according to example embodiments may improve a settling time of a source output by rapidly settling a gamma voltage. The display device and the operating method thereof according to example embodiments may prevent additional power consumption due to gamma tab division by performing tab division through data comparison. The display device and the operating method thereof according to example embodiments do not cause a static current, because the settling timing of the gamma routing is controlled using timing control.

While example embodiments have been shown and described above, it will be apparent to those skilled in the art that modifications and variations could be made without departing from the scope of the present disclosure as defined by the appended claims. | 27,111
11862060 | DETAILED DESCRIPTION OF THE EMBODIMENTS For making the objectives, technical solutions and advantages of the embodiments of the present disclosure clearer, the technical solutions of the embodiments of the present disclosure will be clearly and completely described below in conjunction with the accompanying drawings of the embodiments of the present disclosure. Apparently, the embodiments described are some rather than all of the embodiments of the present disclosure. The embodiments in the present disclosure and features of the embodiments may be combined with each other without conflict. Based on the embodiments of the present disclosure, all other embodiments acquired by those of ordinary skill in the art without making creative efforts fall within the scope of protection of the present disclosure.

Unless otherwise defined, technical or scientific terms used in the present disclosure should have the ordinary meanings as understood by those of ordinary skill in the art to which the present disclosure belongs. The terms "first", "second" and similar words used in the present disclosure do not indicate any order, amount or importance, but are only used to distinguish different components. "Including", "comprising" or other similar words indicate that the elements or objects before the word include the elements or objects after the word and their equivalents, without excluding other elements or objects. "Connected" and other similar words are not limited to physical or mechanical connections, but can include electrical connections, which may be direct or indirect.

It should be noted that a size and a shape of each figure in the drawings do not reflect a true scale, but are only for illustrating the present disclosure. Throughout the drawings, identical or similar reference numerals denote identical or similar elements or elements having identical or similar functions.

An embodiment of the present disclosure provides a shift register, as shown in FIG. 1 and including: a first input circuit 1, a second input circuit 2, a control circuit 3 and an output circuit 4. The first input circuit 1 is configured to supply a signal of a first reference signal terminal VREF1 to a first node N1 in response to a signal of a first input signal terminal IP1; the second input circuit 2 is configured to supply a signal of a second reference signal terminal VREF2 to the first node N1 in response to a signal of a second input signal terminal IP2; the control circuit 3 is configured to control a signal of the first node N1 and a signal of a second node N2; the output circuit 4 is configured to supply a signal of a clock signal terminal CLK to a drive output terminal GOUT in response to the signal of the first node N1, and to supply a signal of a third reference signal terminal VREF3 to the drive output terminal GOUT in response to the signal of the second node N2; and one of the first input signal terminal IP1 and the second input signal terminal IP2 is loaded with an effective signal at an input phase, and the other of the first input signal terminal and the second input signal terminal is loaded with an effective signal at a reset phase.

An internal structure of the shift register provided in the embodiments of the present disclosure is adjusted, the first input circuit 1 and the second input circuit 2 are designed in a symmetrical structure, and charge and discharge of the first node N1 may be designed symmetrically during forward and reverse scanning, thereby realizing a function of bidirectional scanning.
For example, during forward scanning, the first input circuit 1 may serve as a signal input circuit, and the corresponding first input signal terminal IP1 is loaded with an effective signal at the input phase; that is, the first input signal terminal IP1 receives a signal output by the drive output terminal GOUT of a previous row of shift register, and after the first input circuit 1 is turned on, the signal of the first reference signal terminal VREF1 is supplied to the first node N1 for charging. Correspondingly, the second input circuit 2 may serve as a signal reset circuit, and the corresponding second input signal terminal IP2 is loaded with an effective signal at the reset phase; that is, the second input signal terminal IP2 receives a signal output by the drive output terminal GOUT of a next row of shift register, and when the next row of shift register outputs an effective signal, after the second input circuit 2 is turned on, the second reference signal terminal VREF2 discharges the first node N1.

On the contrary, during reverse scanning, the second input circuit 2 may serve as a signal input circuit, and the corresponding second input signal terminal IP2 is loaded with an effective signal at the input phase; that is, the second input signal terminal IP2 receives a signal output by the drive output terminal GOUT of a next row of shift register, and after the second input circuit 2 is turned on, the signal of the second reference signal terminal VREF2 is supplied to the first node N1 for charging. Correspondingly, the first input circuit 1 may serve as a signal reset circuit, and the corresponding first input signal terminal IP1 is loaded with an effective signal at the reset phase; that is, the first input signal terminal IP1 receives a signal output by the drive output terminal GOUT of a previous row of shift register, and when the previous row of shift register outputs an effective signal, after the first input circuit 1 is turned on, the first reference signal terminal VREF1 discharges the first node N1.

For example, during forward scanning, the first reference signal terminal VREF1 may be loaded with a high level signal, and the second reference signal terminal VREF2 may be loaded with a low level signal; and during reverse scanning, the first reference signal terminal VREF1 may be loaded with a low level signal, and the second reference signal terminal VREF2 may be loaded with a high level signal.

Specifically, the structural design of the shift register provided in the embodiments of the present disclosure ensures symmetry of forward and reverse scanning; and compared with the circuit structure of a traditional one-way scanning shift register, there is no obvious difference in the duty cycles of the thin film transistors (TFTs) inside the circuit structure or in the charge and discharge of the important nodes, thereby ensuring reliability and stability of the circuit structure.

During specific implementation, in an embodiment of the present disclosure, as shown in FIG. 2, the second node N2 may include a first sub-node N21 and a second sub-node N22. The control circuit 3 includes a first sub-control circuit 31 and a second sub-control circuit 32, where the first sub-control circuit 31 is configured to control the signal of the first node N1 and the signal of the first sub-node N21; and the second sub-control circuit 32 is configured to control the signal of the first node N1 and the signal of the second sub-node N22.
The output circuit 4 is configured to supply the signal of the third reference signal terminal VREF3 to the drive output terminal GOUT in response to the signal of the first sub-node N21, and to supply the signal of the third reference signal terminal VREF3 to the drive output terminal GOUT in response to the signal of the second sub-node N22.

During specific implementation, in the embodiment of the present disclosure, as shown in FIG. 3, the first sub-control circuit 31 may include a first transistor M1, a second transistor M2, a third transistor M3, a fourth transistor M4 and a fifth transistor M5. A gate and a first electrode of the first transistor M1 are both electrically connected to a first control terminal VN1, and a second electrode of the first transistor M1 is electrically connected to a gate of the second transistor M2. A first electrode of the second transistor M2 is electrically connected to the first control terminal VN1, and a second electrode of the second transistor M2 is electrically connected to the first sub-node N21. A gate of the third transistor M3 is electrically connected to the first node N1, a first electrode of the third transistor M3 is electrically connected to the third reference signal terminal VREF3, and a second electrode of the third transistor M3 is electrically connected to the first sub-node N21. A gate of the fourth transistor M4 is electrically connected to the first node N1, a first electrode of the fourth transistor M4 is electrically connected to the third reference signal terminal VREF3, and a second electrode of the fourth transistor M4 is electrically connected to the gate of the second transistor M2. A gate of the fifth transistor M5 is electrically connected to the first sub-node N21, a first electrode of the fifth transistor M5 is electrically connected to the third reference signal terminal VREF3, and a second electrode of the fifth transistor M5 is electrically connected to the first node N1.
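Functionally, the first sub-control circuit behaves like a VN1-gated inverter of the first node. The following behavioral sketch abstracts the N-type transistors to logic-level switches (a device conducts when its gate is high); drive-strength ratios, threshold drops, and held charge are deliberately ignored, so it is an illustration rather than a circuit model.

```python
def first_sub_control(n1_high, vn1_high):
    """Logic level of the first sub-node N21 (N-type switches: on when the
    gate is high). Ratioed contention and held charge are ignored."""
    if n1_high:
        # M4 pulls the gate of M2 to VREF3 (so M2 is off) and M3 pulls N21
        # to VREF3: N21 is low whenever N1 is high.
        return False
    if vn1_high:
        # N1 low: M3/M4 are off. Diode-connected M1 passes VN1 to the gate
        # of M2, M2 conducts, and N21 charges toward the high level of VN1.
        return True
    # N1 and VN1 both low: nothing drives N21; treat the held level as low.
    return False

for n1 in (True, False):
    for vn1 in (True, False):
        n21 = first_sub_control(n1, vn1)
        print(f"N1={'H' if n1 else 'L'}, VN1={'H' if vn1 else 'L'} -> "
              f"N21={'H' if n21 else 'L'}")
```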
During specific implementation, in the embodiment of the present disclosure, as shown in FIG. 3, the second sub-control circuit 32 may include an eighth transistor M8, a ninth transistor M9, a tenth transistor M10, an eleventh transistor M11 and a twelfth transistor M12. A gate and a first electrode of the eighth transistor M8 are both electrically connected to a second control terminal VN2, and a second electrode of the eighth transistor M8 is electrically connected to a gate of the ninth transistor M9. A first electrode of the ninth transistor M9 is electrically connected to the second control terminal VN2, and a second electrode of the ninth transistor M9 is electrically connected to the second sub-node N22. A gate of the tenth transistor M10 is electrically connected to the first node N1, a first electrode of the tenth transistor M10 is electrically connected to the third reference signal terminal VREF3, and a second electrode of the tenth transistor M10 is electrically connected to the second sub-node N22. A gate of the eleventh transistor M11 is electrically connected to the first node N1, a first electrode of the eleventh transistor M11 is electrically connected to the third reference signal terminal VREF3, and a second electrode of the eleventh transistor M11 is electrically connected to the gate of the ninth transistor M9. A gate of the twelfth transistor M12 is electrically connected to the second sub-node N22, a first electrode of the twelfth transistor M12 is electrically connected to the third reference signal terminal VREF3, and a second electrode of the twelfth transistor M12 is electrically connected to the first node N1.

During specific implementation, in the embodiment of the present disclosure, as shown in FIG. 3, the output circuit 4 may include a storage capacitor CST, a fifteenth transistor M15, a sixteenth transistor M16 and a seventeenth transistor M17. A gate of the fifteenth transistor M15 is electrically connected to the first node N1, a first electrode of the fifteenth transistor M15 is electrically connected to the clock signal terminal CLK, and a second electrode of the fifteenth transistor M15 is electrically connected to the drive output terminal GOUT. A gate of the sixteenth transistor M16 is electrically connected to the first sub-node N21, a first electrode of the sixteenth transistor M16 is electrically connected to the third reference signal terminal VREF3, and a second electrode of the sixteenth transistor M16 is electrically connected to the drive output terminal GOUT. A gate of the seventeenth transistor M17 is electrically connected to the second sub-node N22, a first electrode of the seventeenth transistor M17 is electrically connected to the third reference signal terminal VREF3, and a second electrode of the seventeenth transistor M17 is electrically connected to the drive output terminal GOUT. A first electrode plate of the storage capacitor CST is electrically connected to the first node N1, and a second electrode plate of the storage capacitor CST is electrically connected to the drive output terminal GOUT.
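The output circuit can likewise be summarized at the logic level: M15 passes CLK to GOUT while N1 is high, and M16/M17 pull GOUT to VREF3 while either sub-node is high. A minimal sketch under the same logic-level abstraction (the bootstrap action of the storage capacitor CST is noted in a comment but not modeled):

```python
def gout_level(n1_high, n21_high, n22_high, clk_high):
    """Logic level of the drive output terminal GOUT."""
    if n21_high or n22_high:
        return False          # M16 or M17 conducts: GOUT = VREF3 (low)
    if n1_high:
        # M15 conducts; CST bootstraps N1 when CLK rises, which is what
        # lets the high level of CLK pass with as little loss as possible.
        return clk_high       # GOUT follows CLK
    return False              # no driver; the held level is treated as low

# The three cases traced in the work process below:
print(gout_level(n1_high=True,  n21_high=False, n22_high=False, clk_high=True))   # output: high
print(gout_level(n1_high=True,  n21_high=False, n22_high=False, clk_high=False))  # input: low
print(gout_level(n1_high=False, n21_high=True,  n22_high=False, clk_high=False))  # reset: low
```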
During specific implementation, in the embodiment of the present disclosure, as shown in FIG. 3, the first input circuit 1 may include an eighteenth transistor M18. A gate of the eighteenth transistor M18 is electrically connected to the first input signal terminal IP1, a first electrode of the eighteenth transistor M18 is electrically connected to the first reference signal terminal VREF1, and a second electrode of the eighteenth transistor M18 is electrically connected to the first node N1.

During specific implementation, in the embodiment of the present disclosure, as shown in FIG. 3, the second input circuit 2 may include a nineteenth transistor M19. A gate of the nineteenth transistor M19 is electrically connected to the second input signal terminal IP2, a first electrode of the nineteenth transistor M19 is electrically connected to the second reference signal terminal VREF2, and a second electrode of the nineteenth transistor M19 is electrically connected to the first node N1.

To simplify the preparation process, all transistors may be N-type transistors during specific implementation as shown in FIG. 3. During forward scanning, the signal of the first reference signal terminal VREF1 may be a high level signal, and the signal of the second reference signal terminal VREF2 may be a low level signal; and during reverse scanning, the signal of the first reference signal terminal VREF1 may be a low level signal, the signal of the second reference signal terminal VREF2 may be a high level signal, and the signal of the third reference signal terminal VREF3 is always a low level signal. Of course, all transistors may also be P-type transistors during specific implementation, which is not limited herein.

During specific implementation, a signal of the first control terminal VN1 may be a pulse signal alternating between a high level and a low level, a signal of the second control terminal VN2 may be a pulse signal alternating between a high level and a low level, and a level of the first control terminal VN1 is opposite to that of the second control terminal VN2. For example, as shown in FIG. 4, at a phase T10, the first control terminal VN1 is configured with a high level signal, and the second control terminal VN2 is configured with a low level signal. At a phase T20, the first control terminal VN1 is configured with a low level signal, and the second control terminal VN2 is configured with a high level signal. For example, a duration of the phase T10 may be consistent with that of the phase T20. For example, the duration of the phase T10 and the duration of the phase T20 are set as a duration of one display frame, a duration of a plurality of display frames, 2 s, 1 h, 24 h, etc. respectively, which are not limited herein.

During specific implementation, the signal of the first control terminal VN1 and the signal of the second control terminal VN2 may also be direct current signals respectively. When the first control terminal VN1 is loaded with a direct current signal with a high level, the second control terminal VN2 is loaded with no signal or a direct current signal with a low level. When the second control terminal VN2 is loaded with a direct current signal with a high level, the first control terminal VN1 is loaded with no signal or a direct current signal with a low level. For example, at the phase T10, the first control terminal VN1 is configured with a direct current signal with a high level, and the second control terminal VN2 is configured with a direct current signal with a low level.
At the phase T20, the first control terminal VN1 is configured with a direct current signal with a low level, and the second control terminal VN2 is configured with a direct current signal with a high level. For example, the duration of the phase T10 may be consistent with that of the phase T20. For example, the duration of the phase T10 and the duration of the phase T20 are set as a duration of one display frame, a duration of a plurality of display frames, 2 s, 1 h, 24 h, etc. respectively, which are not limited herein.

A sequence of the phase T10 and the phase T20 may be determined according to actual application. For example, the work process in the phase T10 may be executed first, and then the work process in the phase T20 may be executed. Alternatively, the work process in the phase T20 may be executed first, and then the work process in the phase T10 may be executed.

Taking the structure of the shift register shown in FIG. 3 as an example, in combination with the signal sequence diagram shown in FIG. 4, and taking forward scanning as an example, the work process of the shift register provided in the embodiments of the present disclosure will be described in detail below. In the following description, 1 represents a high level signal and 0 represents a low level signal, where 1 and 0 represent logic levels of signals, only for better explaining the work process of the shift register provided in the embodiments of the present disclosure, rather than a potential applied to a gate of each transistor during specific implementation.

The phases T10 and T20 in the signal sequence diagram shown in FIG. 4 are selected. An input phase T11, an output phase T12 and a reset phase T13 in the phase T10 are selected. An input phase T21, an output phase T22 and a reset phase T23 in the phase T20 are selected.

At the phase T10, the second control terminal VN2 is configured with a low level signal, so the eighth transistor M8 is cut off.

At the input phase T11, IP1=1, CLK=0 and IP2=0. Since IP2=0, the nineteenth transistor M19 is cut off. Since IP1=1, the eighteenth transistor M18 is turned on, so as to supply the high level signal of the first reference signal terminal VREF1 to the first node N1, and further the first node N1 is configured with a high level signal, so that the third transistor M3, the fourth transistor M4, the tenth transistor M10, the eleventh transistor M11 and the fifteenth transistor M15 are all controlled to be turned on. The turned-on fourth transistor M4 may supply a low level signal of the third reference signal terminal VREF3 to the gate of the second transistor M2, so as to control the second transistor M2 to be cut off. The turned-on third transistor M3 may supply the low level signal of the third reference signal terminal VREF3 to the first sub-node N21, and further the first sub-node N21 is configured with a low level signal, so that the fifth transistor M5 and the sixteenth transistor M16 are both controlled to be cut off. The turned-on eleventh transistor M11 may supply the low level signal of the third reference signal terminal VREF3 to the gate of the ninth transistor M9, so as to control the ninth transistor M9 to be cut off. The turned-on tenth transistor M10 may supply the low level signal of the third reference signal terminal VREF3 to the second sub-node N22, and further the second sub-node N22 is configured with a low level signal, so that the twelfth transistor M12 and the seventeenth transistor M17 are both controlled to be cut off.
The turned-on fifteenth transistor M15 may supply the low level signal of the clock signal terminal CLK to the drive output terminal GOUT, so that the drive output terminal GOUT outputs a low level signal.

At the output phase T12, IP1=0, CLK=1 and IP2=0. Since IP2=0, the nineteenth transistor M19 is cut off. Since IP1=0, the eighteenth transistor M18 is cut off. Therefore, the first node N1 is in a floating state. The storage capacitor may enable the first node N1 to maintain a high level signal. The first node N1 is configured with a high level signal, so the third transistor M3, the fourth transistor M4, the tenth transistor M10, the eleventh transistor M11 and the fifteenth transistor M15 are all controlled to be turned on. The turned-on fourth transistor M4 may supply a low level signal of the third reference signal terminal VREF3 to the gate of the second transistor M2, so as to control the second transistor M2 to be cut off. The turned-on third transistor M3 may supply the low level signal of the third reference signal terminal VREF3 to the first sub-node N21, and further the first sub-node N21 is configured with a low level signal, so that the fifth transistor M5 and the sixteenth transistor M16 are both controlled to be cut off. The turned-on eleventh transistor M11 may supply the low level signal of the third reference signal terminal VREF3 to the gate of the ninth transistor M9, so as to control the ninth transistor M9 to be cut off. The turned-on tenth transistor M10 may supply the low level signal of the third reference signal terminal VREF3 to the second sub-node N22, and further the second sub-node N22 is configured with a low level signal, so that the twelfth transistor M12 and the seventeenth transistor M17 are both controlled to be cut off. The turned-on fifteenth transistor M15 may supply a high level signal of the clock signal terminal CLK to the drive output terminal GOUT. Since the first node N1 is in a floating state, the storage capacitor further pulls up a potential of the first node N1, and further the fifteenth transistor M15 may be turned on as thoroughly as possible, so that the high level signal of the clock signal terminal CLK may be supplied to the drive output terminal GOUT with as little voltage loss as possible, and the drive output terminal GOUT outputs a high level signal.

At the reset phase T13, IP1=0, CLK=0 and IP2=1. Since IP1=0, the eighteenth transistor M18 is cut off. Since IP2=1, the nineteenth transistor M19 is turned on, so as to supply the low level signal of the second reference signal terminal VREF2 to the first node N1, and further the first node N1 is configured with a low level signal, so that the third transistor M3, the fourth transistor M4, the tenth transistor M10, the eleventh transistor M11 and the fifteenth transistor M15 are all controlled to be cut off. The second sub-node N22 maintains a low level signal, so that the twelfth transistor M12 and the seventeenth transistor M17 are both controlled to be cut off. The first transistor M1 is turned on under control of a high level signal of the first control terminal VN1, so as to supply the high level signal of the first control terminal VN1 to the gate of the second transistor M2, and further to control the second transistor M2 to be turned on.
The turned-on second transistor M2 may supply the high level signal of the first control terminal VN1 to the first sub-node N21, and further the first sub-node N21 is configured with a high level signal, so that the fifth transistor M5 and the sixteenth transistor M16 are both controlled to be turned on. The turned-on fifth transistor M5 may supply the low level signal of the third reference signal terminal VREF3 to the first node N1, and further the first node N1 is configured with a low level signal. The turned-on sixteenth transistor M16 may supply the low level signal of the third reference signal terminal VREF3 to the drive output terminal GOUT, so that the drive output terminal GOUT outputs the low level signal.

At the phase T20, the first control terminal VN1 is configured with a low level signal, so the first transistor M1 is cut off.

At the input phase T21, IP1=1, CLK=0 and IP2=0. Since IP2=0, the nineteenth transistor M19 is cut off. Since IP1=1, the eighteenth transistor M18 is turned on, so as to supply the high level signal of the first reference signal terminal VREF1 to the first node N1, and further the first node N1 is configured with a high level signal, so that the third transistor M3, the fourth transistor M4, the tenth transistor M10, the eleventh transistor M11 and the fifteenth transistor M15 are all controlled to be turned on. The turned-on fourth transistor M4 may supply the low level signal of the third reference signal terminal VREF3 to the gate of the second transistor M2, so as to control the second transistor M2 to be cut off. The turned-on third transistor M3 may supply the low level signal of the third reference signal terminal VREF3 to the first sub-node N21, and further the first sub-node N21 is configured with the low level signal, so that the fifth transistor M5 and the sixteenth transistor M16 are both controlled to be cut off. The turned-on eleventh transistor M11 may supply the low level signal of the third reference signal terminal VREF3 to the gate of the ninth transistor M9, so as to control the ninth transistor M9 to be cut off. The turned-on tenth transistor M10 may supply the low level signal of the third reference signal terminal VREF3 to the second sub-node N22, and further the second sub-node N22 is configured with the low level signal, so that the twelfth transistor M12 and the seventeenth transistor M17 are both controlled to be cut off. The turned-on fifteenth transistor M15 may supply the low level signal of the clock signal terminal CLK to the drive output terminal GOUT, so that the drive output terminal GOUT outputs the low level signal.

At the output phase T22, IP1=0, CLK=1 and IP2=0. Since IP2=0, the nineteenth transistor M19 is cut off. Since IP1=0, the eighteenth transistor M18 is cut off. Therefore, the first node N1 is in a floating state. The storage capacitor may enable the first node N1 to maintain the high level signal. The first node N1 is configured with the high level signal, so the third transistor M3, the fourth transistor M4, the tenth transistor M10, the eleventh transistor M11 and the fifteenth transistor M15 are all controlled to be turned on. The turned-on fourth transistor M4 may supply the low level signal of the third reference signal terminal VREF3 to the gate of the second transistor M2, so as to control the second transistor M2 to be cut off.
The turned-on third transistor M3 may supply the low level signal of the third reference signal terminal VREF3 to the first sub-node N21, and further the first sub-node N21 is configured with the low level signal, so that the fifth transistor M5 and the sixteenth transistor M16 are both controlled to be cut off. The turned-on eleventh transistor M11 may supply the low level signal of the third reference signal terminal VREF3 to the gate of the ninth transistor M9, so as to control the ninth transistor M9 to be cut off. The turned-on tenth transistor M10 may supply the low level signal of the third reference signal terminal VREF3 to the second sub-node N22, and further the second sub-node N22 is configured with the low level signal, so that the twelfth transistor M12 and the seventeenth transistor M17 are both controlled to be cut off. The turned-on fifteenth transistor M15 may supply the high level signal of the clock signal terminal CLK to the drive output terminal GOUT. Since the first node N1 is in a floating state, the storage capacitor further pulls up a potential of the first node N1, and further the fifteenth transistor M15 may be turned on as thoroughly as possible, so that the high level signal of the clock signal terminal CLK may be supplied to the drive output terminal GOUT with as little voltage loss as possible, and the drive output terminal GOUT outputs the high level signal.

At the reset phase T23, IP1=0, CLK=0 and IP2=1. Since IP1=0, the eighteenth transistor M18 is cut off. Since IP2=1, the nineteenth transistor M19 is turned on, so as to supply the low level signal of the second reference signal terminal VREF2 to the first node N1, and further the first node N1 is configured with the low level signal, so that the third transistor M3, the fourth transistor M4, the tenth transistor M10, the eleventh transistor M11 and the fifteenth transistor M15 are all controlled to be cut off. The first sub-node N21 maintains a low level signal, so that the fifth transistor M5 and the sixteenth transistor M16 are both controlled to be cut off. The eighth transistor M8 is turned on under control of a high level signal of the second control terminal VN2, so as to supply the high level signal of the second control terminal VN2 to the gate of the ninth transistor M9, and further to control the ninth transistor M9 to be turned on. The turned-on ninth transistor M9 may supply the high level signal of the second control terminal VN2 to the second sub-node N22, and further the second sub-node N22 is configured with a high level signal, so that the twelfth transistor M12 and the seventeenth transistor M17 are both controlled to be turned on. The turned-on twelfth transistor M12 may supply the low level signal of the third reference signal terminal VREF3 to the first node N1, and further the first node N1 is further configured with the low level signal. The turned-on seventeenth transistor M17 may supply the low level signal of the third reference signal terminal VREF3 to the drive output terminal GOUT, so that the drive output terminal GOUT outputs the low level signal.
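The forward-scan sequence traced above can be condensed into a truth table of node levels per phase. The following sketch simply tabulates the levels derived in the description (phase T10, VN1 high and VN2 low); it is a summary of the stated behavior, not a circuit simulation.

```python
# (phase, IP1, CLK, IP2) -> resulting levels (N1, N21, N22, GOUT)
phases = [
    ("T11 input",  1, 0, 0, 1, 0, 0, 0),
    ("T12 output", 0, 1, 0, 1, 0, 0, 1),  # N1 floats; CST bootstraps it higher
    ("T13 reset",  0, 0, 1, 0, 1, 0, 0),  # M1/M2 charge N21 from VN1
]
print(f"{'phase':<11}IP1 CLK IP2 | N1 N21 N22 GOUT")
for name, ip1, clk, ip2, n1, n21, n22, go in phases:
    print(f"{name:<11}{ip1:^4}{clk:^4}{ip2:^4}| {n1:^3}{n21:^4}{n22:^4}{go:^4}")
```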
An embodiment of the present disclosure further provides some structural schematic diagrams of the shift register, and modifies the implementation of the above embodiments as shown in FIG. 5. Only differences between this embodiment and the above embodiments will be described below, and similarities will not be repeated herein.

During specific implementation, in the embodiment of the present disclosure, as shown in FIG. 5, the first sub-control circuit 31 may further include a sixth transistor M6 and a seventh transistor M7. A gate of the sixth transistor M6 is electrically connected to the first input signal terminal IP1, a first electrode of the sixth transistor M6 is electrically connected to the third reference signal terminal VREF3, and a second electrode of the sixth transistor M6 is electrically connected to the first sub-node N21. A gate of the seventh transistor M7 is electrically connected to the second input signal terminal IP2, a first electrode of the seventh transistor M7 is electrically connected to the third reference signal terminal VREF3, and a second electrode of the seventh transistor M7 is electrically connected to the first sub-node N21.

During specific implementation, in the embodiment of the present disclosure, as shown in FIG. 5, the second sub-control circuit 32 may further include a thirteenth transistor M13 and a fourteenth transistor M14. A gate of the thirteenth transistor M13 is electrically connected to the first input signal terminal IP1, a first electrode of the thirteenth transistor M13 is electrically connected to the third reference signal terminal VREF3, and a second electrode of the thirteenth transistor M13 is electrically connected to the second sub-node N22. A gate of the fourteenth transistor M14 is electrically connected to the second input signal terminal IP2, a first electrode of the fourteenth transistor M14 is electrically connected to the third reference signal terminal VREF3, and a second electrode of the fourteenth transistor M14 is electrically connected to the second sub-node N22.

Through simulation analysis, the sixth transistor M6 added in the first sub-control circuit 31 and the thirteenth transistor M13 added in the second sub-control circuit 32 may rapidly pull down a potential of the second node N2 (that is, the first sub-node N21 and the second sub-node N22) during forward scanning, thereby controlling electric leakage of the fifth transistor M5 and the twelfth transistor M12 and improving signal quality of the first node N1. The seventh transistor M7 added in the first sub-control circuit 31 and the fourteenth transistor M14 added in the second sub-control circuit 32 may rapidly pull down the potential of the second node N2 (that is, the first sub-node N21 and the second sub-node N22) during reverse scanning, thereby controlling the electric leakage of the fifth transistor M5 and the twelfth transistor M12 and improving the signal quality of the first node N1.

FIG. 8 shows the signal input by the first input signal terminal IP1 during forward scanning and input by the second input signal terminal IP2 during reverse scanning in the shift register provided in the embodiments of the present disclosure. The signal is a signal {circle around (1)} output by the drive output terminal GOUT of a previous stage of shift register. It may be seen that the signal rises faster and reaches a higher value of Vmax. FIG. 8 further shows the signals {circle around (2)} output from the eighteenth transistor M18 to the first node N1 during forward scanning and output from the nineteenth transistor M19 to the first node N1 during reverse scanning in the shift register provided in the embodiments of the present disclosure. FIG. 9 shows a schematic diagram of potentials of the first node N1 at an input phase {circle around (3)}, an output phase {circle around (4)} and a reset phase {circle around (5)}.
It may be seen that compared with a traditional shift register structure (before optimization), the shift register structure (after optimization) provided in the embodiments of the present disclosure may improve the signal potential quality of the first node N1. FIG. 10 shows a schematic diagram of a potential of the drive output terminal GOUT. It may be seen that compared with the traditional shift register structure (before optimization), the shift register structure provided in the embodiments of the present disclosure (after optimization) may solve a trailing problem of the drive output terminal GOUT at a reset phase {circle around (6)}, so that occurrence of a display defect (horizontal black line) may be prevented, signal quality of the drive output terminal GOUT may be ensured, and service life of the shift register may be prolonged to a certain extent. Through a simulation test, it may be seen that the shift register provided in the embodiments of the present disclosure is capable of supporting a wider reliable operating range (−20° C. to 70° C.), so as to alleviate the problem of high-temperature lifetime.

During specific implementation, in the embodiment of the present disclosure, as shown in FIG. 5, the shift register may further include a twentieth transistor M20, where a gate of the twentieth transistor M20 is electrically connected to a first frame reset signal terminal SRE1, a first electrode of the twentieth transistor M20 is electrically connected to the third reference signal terminal VREF3, and a second electrode of the twentieth transistor M20 is electrically connected to the first node N1.

During specific implementation, in the embodiment of the present disclosure, as shown in FIG. 5, the shift register may further include a twenty-first transistor M21, where a gate of the twenty-first transistor M21 is electrically connected to a second frame reset signal terminal SRE2, a first electrode of the twenty-first transistor M21 is electrically connected to the third reference signal terminal VREF3, and a second electrode of the twenty-first transistor M21 is electrically connected to the drive output terminal GOUT.

Taking the structure of the shift register shown in FIG. 5 as an example, in combination with the signal sequence diagram shown in FIG. 6, and taking forward scanning as an example, the work process of the shift register provided in the embodiments of the present disclosure will be described. The work process corresponding to this embodiment is partially consistent with that of the shift register shown in FIG. 3, and only the differences of the work processes will be described below.

At the phase T10, before the input phase T11, a frame reset phase T01 may further be included. At the frame reset phase T01, the first frame reset signal terminal SRE1 is configured with a high level signal, the twentieth transistor M20 may be controlled to be turned on, and further the low level signal of the third reference signal terminal VREF3 is supplied to the first node N1, so that the first node N1 is pre-reset, and further noise of the drive output terminal GOUT may be reduced. The second frame reset signal terminal SRE2 is configured with a high level signal, the twenty-first transistor M21 may be controlled to be turned on, and further the low level signal of the third reference signal terminal VREF3 is supplied to the drive output terminal GOUT, so that the drive output terminal GOUT is pre-reset, and further the noise of the drive output terminal GOUT may be reduced.
At the input phase T11, the sixth transistor M6 is turned on under control of the high level signal of the first input signal terminal IP1 and further supplies the low level signal of the third reference signal terminal VREF3 to the first sub-node N21, so that the first sub-node N21 may be configured with a low level signal, and further the noise of the drive output terminal GOUT may be reduced. The thirteenth transistor M13 is turned on under control of the high level signal of the first input signal terminal IP1 and further supplies the low level signal of the third reference signal terminal VREF3 to the second sub-node N22, so that the second sub-node N22 may be configured with a low level signal, and further the noise of the drive output terminal GOUT may be reduced.

(During reverse scanning, at the input phase T11, the seventh transistor M7 is turned on under control of the high level signal of the second input signal terminal IP2 and further supplies the low level signal of the third reference signal terminal VREF3 to the first sub-node N21, so that the first sub-node N21 may be configured with a low level signal, and further the noise of the drive output terminal GOUT may be reduced. The fourteenth transistor M14 is turned on under control of the high level signal of the second input signal terminal IP2 and further supplies the low level signal of the third reference signal terminal VREF3 to the second sub-node N22, so that the second sub-node N22 may be configured with a low level signal, and further the noise of the drive output terminal GOUT may be reduced.)

At the phase T20, before the input phase T21, a frame reset phase T02 may further be included. At the frame reset phase T02, the first frame reset signal terminal SRE1 is configured with a high level signal, the twentieth transistor M20 may be controlled to be turned on, and further the low level signal of the third reference signal terminal VREF3 is supplied to the first node N1, so that the first node N1 is pre-reset, and the noise of the drive output terminal GOUT may be further reduced. The second frame reset signal terminal SRE2 is configured with a high level signal, the twenty-first transistor M21 may be controlled to be turned on, and further the low level signal of the third reference signal terminal VREF3 is supplied to the drive output terminal GOUT, so that the drive output terminal GOUT is pre-reset, and further the noise of the drive output terminal GOUT may be reduced.

At the input phase T21, the sixth transistor M6 is turned on under control of the high level signal of the first input signal terminal IP1 and further supplies the low level signal of the third reference signal terminal VREF3 to the first sub-node N21, so that the first sub-node N21 may be configured with a low level signal, and further the noise of the drive output terminal GOUT may be reduced. The thirteenth transistor M13 is turned on under control of the high level signal of the first input signal terminal IP1 and further supplies the low level signal of the third reference signal terminal VREF3 to the second sub-node N22, so that the second sub-node N22 may be configured with a low level signal, and further the noise of the drive output terminal GOUT may be reduced.
(During reverse scanning, at the input phase T21, the seventh transistor M7 is turned on under control of the high level signal of the second input signal terminal IP2 and further supplies the low level signal of the third reference signal terminal VREF3 to the first sub-node N21, so that the first sub-node N21 may be configured with a low level signal, and further the noise of the drive output terminal GOUT may be reduced. The fourteenth transistor M14 is turned on under control of the high level signal of the second input signal terminal IP2 and further supplies the low level signal of the third reference signal terminal VREF3 to the second sub-node N22, so that the second sub-node N22 may be configured with a low level signal, and further the noise of the drive output terminal GOUT may be reduced.)

An embodiment of the present disclosure further provides a gate drive circuit, which includes a plurality of cascaded shift registers provided in the embodiments of the present disclosure: SR(1), SR(2) . . . SR(n−1), SR(n) . . . SR(N−1), SR(N) (N shift registers in total, 1≤n≤N, and n and N are positive integers) as shown in FIG. 7, where a first input signal terminal IP1 of a first stage of shift register SR(1) is electrically connected to a first frame trigger signal terminal STV1, and a second input signal terminal IP2 of a last stage of shift register SR(N) is electrically connected to a second frame trigger signal terminal STV2; and in every two stages of shift registers, a first input signal terminal IP1 of a next stage of shift register SR(n) is electrically connected to a drive output terminal GOUT of a previous stage of shift register SR(n−1), and a second input signal terminal IP2 of the previous stage of shift register SR(n−1) is electrically connected to a drive output terminal GOUT of the next stage of shift register SR(n).

It should be noted that FIG. 7 is illustrated according to the following example: in every adjacent two stages of shift registers, a first input signal terminal IP1 of a next stage of shift register SR(n) is electrically connected to a drive output terminal GOUT of a previous stage of shift register SR(n−1), and a second input signal terminal IP2 of the previous stage of shift register SR(n−1) is electrically connected to a drive output terminal GOUT of the next stage of shift register SR(n). In actual application, every two stages of shift registers may be spaced from each other by one or more shift registers, which is not limited herein.

Specifically, each of the shift registers in the above gate drive circuit is consistent in function and structure with the shift register provided in the embodiments of the present disclosure, which will not be repeated herein.

It should be noted that during forward scanning, the first frame trigger signal terminal STV1 is loaded with a frame start signal, and the gate drive circuit starts to sequentially output effective signals from the drive output terminal GOUT of the first stage of shift register SR(1); and during reverse scanning, the second frame trigger signal terminal STV2 is loaded with a frame start signal, and the gate drive circuit starts to sequentially output effective signals from the drive output terminal GOUT of the last stage of shift register SR(N).
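The cascade wiring can be expressed compactly. The sketch below prints the IP1/IP2 connections for an assumed chain of eight stages (the stage count is an illustrative assumption); it reproduces the hookup rule stated above, including the STV1/STV2 seeding of the first and last stages.

```python
N = 8  # illustrative stage count

def stage_inputs(n):
    """IP1/IP2 sources for stage SR(n), per the cascade rule above."""
    ip1 = "STV1" if n == 1 else f"GOUT({n - 1})"   # previous stage's output
    ip2 = "STV2" if n == N else f"GOUT({n + 1})"   # next stage's output
    return ip1, ip2

for n in range(1, N + 1):
    ip1, ip2 = stage_inputs(n)
    print(f"SR({n}): IP1 <- {ip1:<8} IP2 <- {ip2}")
```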
During specific implementation, in the gate drive circuit provided in the embodiments of the present disclosure, as shown in FIG. 7, clock signal terminals CLK of odd-numbered stages of shift registers are all electrically connected to the same clock line clk1, and clock signal terminals CLK of even-numbered stages of shift registers are all electrically connected to the same clock line clk2.

During specific implementation, in the gate drive circuit provided in the embodiments of the present disclosure, as shown in FIG. 7, a first reference signal terminal VREF1 of each stage of shift register is electrically connected to the same first reference signal line ref1, a second reference signal terminal VREF2 of each stage of shift register is electrically connected to the same second reference signal line ref2, and a third reference signal terminal VREF3 of each stage of shift register is electrically connected to the same third reference signal line ref3. During forward scanning, the first reference signal line ref1 loads a high level signal into the first reference signal terminal VREF1 of each stage of shift register, and the second reference signal line ref2 loads a low level signal into the second reference signal terminal VREF2 of each stage of shift register. During reverse scanning, the first reference signal line ref1 loads a low level signal into the first reference signal terminal VREF1 of each stage of shift register, and the second reference signal line ref2 loads a high level signal into the second reference signal terminal VREF2 of each stage of shift register. During forward and reverse scanning, the third reference signal line ref3 loads a low level signal into the third reference signal terminal VREF3 of each stage of shift register.

During specific implementation, when the shift register includes the twentieth transistor M20, in the gate drive circuit provided in the embodiments of the present disclosure, a first frame reset signal terminal SRE1 of each stage of shift register may be electrically connected to the same first frame reset terminal. In this way, the first node N1 of each stage of shift register may be pre-reset simultaneously.

During specific implementation, when the shift register includes the twenty-first transistor M21, in the gate drive circuit provided in the embodiment of the present disclosure, a second frame reset signal terminal SRE2 of each stage of shift register may be electrically connected to the same second frame reset terminal. In this way, the drive output terminal GOUT of each stage of shift register may be pre-reset simultaneously.

Based on the same inventive concept, an embodiment of the present disclosure further provides a display device, which includes the gate drive circuit provided in the embodiments of the present disclosure. The problem-solving principle of the display device is similar to that of the gate drive circuit, so the implementation of the display device may refer to the implementation of the gate drive circuit, which will not be repeated herein.
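The levels that must be driven onto these shared lines per scan direction can be summarized in a small configuration sketch; the names are taken from the description (ref1/ref2/ref3, STV1/STV2, clk1/clk2), while the dictionary form of the helper is merely illustrative.

```python
def line_levels(direction):
    """Global line levels for a scan direction, as described above."""
    assert direction in ("forward", "reverse")
    forward = direction == "forward"
    return {
        "ref1 (VREF1)": "high" if forward else "low",
        "ref2 (VREF2)": "low" if forward else "high",
        "ref3 (VREF3)": "low",                      # low in both directions
        "frame start":  "STV1" if forward else "STV2",
        "clk1 / clk2":  "odd / even stages",        # alternating clocks
    }

for d in ("forward", "reverse"):
    print(d, line_levels(d))
```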
During specific implementation, in the embodiments of the present disclosure, the display device may further include: a first reference signal line, a second reference signal line and a third reference signal line which are arranged in a mutually spaced manner; a first reference terminal electrically connected to the first reference signal line; a second reference terminal electrically connected to the second reference signal line; and a third reference terminal electrically connected to the third reference signal line; where a first reference signal terminal VREF1 of a shift register in the gate drive circuit is electrically connected to the first reference signal line, a second reference signal terminal VREF2 of a shift register in the gate drive circuit is electrically connected to the second reference signal line, and a third reference signal terminal VREF3 of a shift register in the gate drive circuit is electrically connected to the third reference signal line.

During specific implementation, in the embodiments of the present disclosure, the display device may further include a driver chip, where the driver chip is bonded to the first reference terminal, the second reference terminal and the third reference terminal separately; and the driver chip is configured to load a signal into the first reference signal terminal VREF1 of the shift register in the gate drive circuit through the first reference terminal, load a signal into the second reference signal terminal VREF2 of the shift register in the gate drive circuit through the second reference terminal, and load a signal into the third reference signal terminal VREF3 of the shift register in the gate drive circuit through the third reference terminal.

During specific implementation, in the embodiments of the present disclosure, the display device may be any product or component with a display function, such as a mobile phone, a tablet computer, a television, a display screen, a notebook computer, a digital photo frame and a navigator. Other essential components of the display device should be understood by those of ordinary skill in the art, which will not be repeated herein and should not limit the present disclosure.

According to the shift register, the gate drive circuit and the display device provided in the embodiments of the present disclosure, during forward scanning, the first input circuit may supply the signal of the first reference signal terminal to the first node in response to the signal of the first input signal terminal at the input phase, and the second input circuit may supply the signal of the second reference signal terminal to the first node in response to the signal of the second input signal terminal at the reset phase. During reverse scanning, the second input circuit may supply the signal of the second reference signal terminal to the first node in response to the signal of the second input signal terminal at the input phase, and the first input circuit may supply the signal of the first reference signal terminal to the first node in response to the signal of the first input signal terminal at the reset phase. The control circuit may control the signals of the first node and the second node. The output circuit may supply the signal of the clock signal terminal to the drive output terminal in response to the signal of the first node, and supply the signal of the third reference signal terminal to the drive output terminal in response to the signal of the second node.
The first input circuit and the second input circuit are designed in a symmetrical structure, so that charge and discharge of the first node may be symmetrical during forward and reverse scanning, thereby realizing a function of bidirectional scanning. It will be apparent that those skilled in the art may make various modifications and variations to the present disclosure without departing from the spirit and scope of the present disclosure. In this way, if these modifications and variations of the present disclosure fall within the scope of the claims of the present disclosure and their equivalent technologies, the present disclosure is also intended to include these modifications and variations. | 49,799 |
11862061 | The reference numbers are: 1. a transistor; 10. a U-shaped unit; 100. a channel; 101. a gate electrode; 102. a gate insulating layer; 103. an active layer; 104. a first electrode; 105. a second electrode; 11. a first portion; 12. a second portion; 141. a first comb handle portion; 142. a first comb tooth portion; 143. a second comb tooth portion; 151. a second comb handle portion; 152. a third comb tooth portion; 153. a fourth comb tooth portion; 154. a fifth comb tooth portion; 13. a dummy U-shaped unit; 2. a first combination; 3. a second transistor; 4. a second combination; 106. an opening; 108. a source electrode; 109. a drain electrode; 110. a passivation layer; 5. a display region; 6. a frame region; 7. a pixel unit; 8. a gate line; 9. a data line; 14. a shift register; 15. a thin film transistor; 16. a transistor.

DETAILED DESCRIPTION OF EMBODIMENTS

In order to enable one of ordinary skill in the art to better understand the technical solutions of the embodiments of the present disclosure, a shift register, a gate driving circuit, and a display panel provided in the embodiments of the present disclosure will be described in further detail with reference to the accompanying drawings and the detailed description.

The embodiments of the present disclosure will be described more fully hereinafter with reference to the accompanying drawings, but the embodiments shown may be embodied in different forms and should not be construed as limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the present disclosure to one of ordinary skill in the art. The embodiments of the present disclosure are not limited to the embodiments shown in the drawings, but include modifications of configurations formed based on a manufacturing process. Thus, regions illustrated in the figures have schematic properties, and shapes of the regions shown in the figures illustrate specific shapes of the regions, but are not intended to be limiting.

In the related art, referring to FIG. 1, a display panel generally has a display region 5 and a frame region 6 surrounding the display region 5; a plurality of pixel units 7 arranged in an array are disposed in the display region 5, and a pixel circuit is disposed in each pixel unit 7; the pixel units 7 in a same row are connected to a same gate line 8, and the pixel units 7 in a same column are connected to a same data line 9. A gate driving circuit is disposed in the frame region 6, and includes a plurality of cascaded shift registers (GOA, Gate on Array) 14, where the shift registers 14 are disposed in a one-to-one correspondence with the gate lines 8, that is, each shift register 14 is connected to one gate line 8. When each frame of picture is displayed, the plurality of cascaded shift registers 14 output stage-by-stage a gate scanning signal to the corresponding gate lines 8, so as to complete the row-by-row scanning of the pixel circuits; and each data line 9 writes a data voltage signal into the pixel circuits in a corresponding row while each gate line 8 is scanned, so as to light the pixel units 7 in the row. Referring to FIGS. 2 and 3, the shift register (GOA) is a gate driving circuit formed by a plurality of thin film transistors 15, a capacitor, and the like.
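As a toy illustration of the row-by-row scanning just described (one shift register per gate line, with the data lines writing a voltage into the selected row), here is a minimal Python sketch; the function names and the placeholder pixel write are hypothetical, not from the patent.

```python
# Toy model of GOA row-by-row scanning: shift register n selects gate
# line n, and every data line writes its voltage into the selected row.
# write_pixel is a hypothetical placeholder for charging a pixel circuit.

def write_pixel(row: int, col: int, voltage: float) -> None:
    pass  # placeholder: charge the pixel circuit at (row, col)

def scan_frame(frame: list[list[float]]) -> None:
    for row, row_data in enumerate(frame):     # shift register 'row' fires
        # Gate line 'row' is selected; all other rows stay deselected.
        for col, voltage in enumerate(row_data):
            write_pixel(row, col, voltage)     # each data line writes its voltage

scan_frame([[0.1, 0.2], [0.3, 0.4]])  # 2 rows x 2 columns example
```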
In the shift register, most of the thin film transistors 15 are each formed by a plurality of electrically connected transistors 16 of a smaller size, which respectively share a common gate electrode (the gate electrodes of the transistors 16 are connected together), a common source electrode (the source electrodes 108 of the transistors 16 are connected together), and a common drain electrode (the drain electrodes 109 of the transistors 16 are connected together); the active layers of these smaller transistors 16 are formed by a whole piece of film layer to meet the requirement of the channel length designed for each thin film transistor 15. When a width of the frame region is sufficient, the smaller transistors 16 of the thin film transistor 15 are mostly of one size type; referring to FIG. 3, the sizes of the smaller transistors 16 of the thin film transistor 15 are uniform (the same).

However, in recent years, in order to improve the performance of the display product, most customers require that the display product (such as an MNT display product, i.e., a display product having a size of 18.5 to 34 inches) has an ultra-narrow frame, that is, a display region of the display product is as large as possible to satisfy the visual enjoyment of the customers in the viewing. When the display product is designed with an ultra-narrow frame, in order to satisfy the requirement that the display product is properly driven, the thin film transistors 15 in the shift register must be provided within a frame with a limited width, so that some thin film transistors 15 are each usually designed to include smaller transistors 16 having different sizes, as shown in FIG. 4.

Referring to FIGS. 5 and 6, in a thin film transistor 15, consider the transition region G where the transistors 16 of different sizes are provided. On one hand, a size of the transition region G where the transistors 16 of different sizes are distributed is different from both a size of a larger region where the transistors having a larger size are distributed and a size of a smaller region where the transistors having a smaller size are distributed; i.e., the transition region G is an irregular region compared with the larger region and the smaller region. On the other hand, the channel length h (i.e., a width of a gap between the source electrode 108 and the drain electrode 109 of the transistor directly facing each other) of each of all the transistors 16 of different sizes forming the thin film transistor 15 is constant, about 3.5 μm. Therefore, when patterns of the source and drain electrodes of the transistors 16 of different sizes are formed on a pattern of the active layer 103 by a conventional patterning process, a photoresist (such as a PR photoresist) easily accumulates in the channels of the transistors 16 of different sizes in the transition region G, so that parts of the source and drain electrode film layer within the channels, which should not be protected by the photoresist, are covered and protected by the photoresist and remain after the etching process. This causes a short circuit (also called a channel short circuit) between the source electrode 108 and the drain electrode 109 of each of the transistors 16 of different sizes in the transition region G, which in turn causes an abnormal signal output of the shift register and defects such as horizontal striations when the display product is displaying.
In order to solve the problems such as the above channel short circuit of the shift register, the embodiments of the present disclosure provide the following technical solutions.

In a first aspect, an embodiment of the present disclosure provides a shift register, referring to FIGS. 7 to 9, including a transistor 1. The transistor 1 includes a gate electrode 101, a gate insulating layer 102, an active layer 103, a first electrode 104, and a second electrode 105; the first electrode 104 and the second electrode 105 are comb-shaped. The first electrode 104 includes first comb tooth portions 142 and second comb tooth portions 143 arranged at intervals, and a first comb handle portion 141 connecting the first comb tooth portions 142 and the second comb tooth portions 143, where comb tooth electrodes of the first comb tooth portion 142 and the second comb tooth portion 143 are different in length. The second electrode 105 includes third comb tooth portions 152 and fourth comb tooth portions 153 arranged at intervals, and a second comb handle portion 151 connecting the third comb tooth portions 152 and the fourth comb tooth portions 153. The first comb tooth portions 142 and the third comb tooth portions 152 form an inter-digital structure, the second comb tooth portions 143 and the fourth comb tooth portions 153 form an inter-digital structure, and orthographic projections of the first comb tooth portions 142, the third comb tooth portions 152, the second comb tooth portions 143 and the fourth comb tooth portions 153 on the active layer 103 do not overlap with each other. Orthographic projections, on the active layer 103, of one first comb tooth portion 142 of the first comb tooth portions 142, one second comb tooth portion 143 of the second comb tooth portions 143, the first comb handle portion 141 and the second comb handle portion 151 enclose a short circuit prevention region H for preventing a short circuit between the first electrode 104 and the second electrode 105.

A part of the active layer 103 in a gap between the first comb tooth portion 142 and the third comb tooth portion 152 directly facing each other, and a part of the active layer 103 in a gap between the second comb tooth portion 143 and the fourth comb tooth portion 153 directly facing each other, form a part of the channel 100 of the transistor 1. When a turn-on voltage is input to the gate electrode 101 of the transistor 1, conduction between the first electrode 104 and the second electrode 105 of the transistor 1 is enabled, and current flows from the first electrode 104 to the second electrode 105 through the channel 100 of the transistor 1, or from the second electrode 105 to the first electrode 104 through the channel 100 of the transistor 1, so that the transistor 1 is turned on.

In some embodiments, the gate electrode 101 is located below the active layer 103, i.e., the transistor 1 is a bottom gate type transistor. In some embodiments, the gate electrode 101 is located above the active layer 103, i.e., the transistor 1 is a top gate type transistor. In this embodiment, taking a bottom gate type transistor as an example, the transistor 1 further includes a passivation layer 110 disposed on a side of the first electrode 104 and the second electrode 105 away from the active layer 103.
Forming the transistor 1 includes sequentially forming patterns of the gate electrode 101, the gate insulating layer 102, and the active layer 103, then forming patterns of the first electrode 104 and the second electrode 105 on the pattern of the active layer 103, and finally forming the passivation layer 110 on the first electrode 104 and the second electrode 105. The patterns of the gate electrode 101, the active layer 103, the first electrode 104 and the second electrode 105 are formed by conventional patterning processes (including film formation, photoresist coating, exposure, development, etching, etc.).

The short circuit prevention region is defined by the orthographic projections, on the active layer 103, of the one first comb tooth portion 142 and the one second comb tooth portion 143 of different lengths together with the first comb handle portion 141 and the second comb handle portion 151. As a result, when patterns of the first and second comb tooth portions 142 and 143 of the first electrode 104 having different lengths and the third and fourth comb tooth portions 152 and 153 of the second electrode 105 having different lengths are formed on the active layer 103 through a conventional patterning process, the photoresist is prevented from accumulating in the channel 100 formed between the comb tooth portions of the transistor 1 having different lengths, and parts of the film layer forming the patterns of the first and second electrodes 104 and 105 within the channel 100 are prevented from remaining during the etching process. A short circuit between the first electrode 104 and the second electrode 105 in the channel 100 (that is, a channel short circuit) is thereby avoided; finally, it is ensured that the signal output of the shift register is normal, and the defects of horizontal striations and the like of a display product adopting the shift register during displaying are eliminated.

In some embodiments, the length of the comb tooth electrode of the first comb tooth portion 142 is greater than that of the second comb tooth portion 143, and the length of the comb tooth electrode of the third comb tooth portion 152 is greater than that of the fourth comb tooth portion 153.

In some embodiments, the first comb tooth portions 142 include a plurality of comb tooth electrodes arranged in parallel and at equal intervals; likewise, the second comb tooth portions 143, the third comb tooth portions 152 and the fourth comb tooth portions 153 each include a plurality of comb tooth electrodes arranged in parallel and at equal intervals.

In some embodiments, the first comb tooth portions 142 and the third comb tooth portions 152 are staggered to form a first portion 11 having an inter-digital structure, and the second comb tooth portions 143 and the fourth comb tooth portions 153 are staggered to form a second portion 12 having an inter-digital structure.

In some embodiments, in the first portion 11, a gap between the comb tooth electrodes of the first comb tooth portion 142 and the third comb tooth portion 152 adjacent to each other is a first gap h1; in the second portion 12, a gap between the comb tooth electrodes of the second comb tooth portion 143 and the fourth comb tooth portion 153 adjacent to each other is a second gap h2.
The first gap h1 is the length of the channel 100 formed between the first comb tooth portion 142 and the third comb tooth portion 152 each having a greater length; the second gap h2 is the length of the channel 100 formed between the second comb tooth portion 143 and the fourth comb tooth portion 153 each having a smaller length.

In some embodiments, the first gap h1 is equal to the second gap h2, i.e., the length of the channel 100 formed between the comb tooth portions each having a greater length is the same as that of the channel 100 formed between the comb tooth portions each having a smaller length. In some embodiments, the first gap h1 and the second gap h2 are both 3.5 μm. In some embodiments, the first gap h1 may not be equal to the second gap h2; that is, the length of the channel 100 formed between the comb tooth portions each having a greater length is different from that of the channel 100 formed between the comb tooth portions each having a smaller length.

In some embodiments, in the short circuit prevention region H, the first comb tooth portions 142 and the second comb tooth portions 143 are arranged in parallel, and a gap between the first comb tooth portion 142 and the second comb tooth portion 143 is a third gap a; the third gap a is greater than the first gap h1, and the third gap a is greater than the second gap h2.

By providing, in the short circuit prevention region H, the third gap a with a width greater than the channel length, when patterns of the first and second comb tooth portions 142 and 143 and the third and fourth comb tooth portions 152 and 153 having different comb tooth lengths are formed on the active layer 103 through a conventional patterning process, the photoresist is prevented from accumulating and remaining in the third gap a, parts of the film layer forming the patterns of the first and second electrodes 104 and 105 within the third gap a are prevented from remaining during the etching process, and a short circuit between the first electrode 104 and the second electrode 105 in the third gap a (that is, a channel short circuit) is prevented. Finally, it is ensured that the signal output of the shift register is normal, and the defects of horizontal striations and the like of a display product having the shift register during displaying are avoided.

In some embodiments, the third gap a has a width greater than or equal to 6 μm. In some embodiments, a pattern of the third gap a on a mask forming the pattern of the third gap a has a width of 6 μm. In this way, on one hand, the minimum manufacturing process precision requirement for the third gap a may be met; on the other hand, the third gap a does not greatly increase a space occupied by the transistor 1, so that the display panel adopting the shift register may still realize a narrow frame; further, it may be ensured that the first electrode 104 and the second electrode 105 will not generate a channel short circuit in the short circuit prevention region H, thereby ensuring that the signal output of the shift register is normal and avoiding the defects of horizontal striations and the like of a display product adopting the shift register during displaying.

In some embodiments, the pattern of the third gap a has a width of 6 μm or more under current manufacturing process conditions. In some embodiments, the third gap a has a width of 10 μm.
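Restating the dimensional rules above as a predicate may help; the following is a minimal Python sketch under the assumptions stated in the text (a > h1, a > h2, and a at least the roughly 6 μm minimum process width), with an illustrative function name not taken from the patent.

```python
# Hypothetical check of the geometric rules described above; the name and
# the idea of encoding them as a predicate are illustrative only.

def short_circuit_prevention_ok(h1_um: float, h2_um: float, a_um: float,
                                min_a_um: float = 6.0) -> bool:
    """True if the third gap 'a' satisfies: a > h1, a > h2, and
    a >= the minimum manufacturable width (about 6 um per the text)."""
    return a_um > h1_um and a_um > h2_um and a_um >= min_a_um

# With the figures quoted in the text: h1 = h2 = 3.5 um, a = 10 um passes,
# while a gap equal to the channel length (3.5 um) does not.
assert short_circuit_prevention_ok(3.5, 3.5, 10.0)
assert not short_circuit_prevention_ok(3.5, 3.5, 3.5)
```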
In some embodiments, the orthographic projection of a part of the first comb handle portion 141 in the short circuit prevention region H on the active layer 103 is a diagonal line, which may be formed by using a conventional patterning process.

In some embodiments, the active layer 103 is broken at the third gap a. Since the width of the third gap a may be realized under the current manufacturing process conditions, when the active layer 103 is manufactured, the active layer 103 may be disconnected at the third gap a by using a conventional patterning process (including film formation, photoresist coating, exposure, development, etching, and the like), so that the active layer 103 may be patterned; further, disconnecting the active layer 103 at the third gap a additionally avoids a short circuit between the first electrode 104 and the second electrode 105 near the third gap a.

In some embodiments, the active layers 103 of the first portion 11 of the inter-digital structure are integrally connected (have a one-piece structure), and the active layers 103 of the second portion 12 are integrally connected. In some embodiments, the gate electrodes 101 of the first and second portions 11 and 12 of the inter-digital structure are integrally connected, i.e., the gate electrode 101 of the transistor 1 is designed as a whole piece of conductive film layer. With this arrangement, not only may the transistor 1 with the parameters for the set performances be better realized, but a narrow frame or an ultra-narrow frame of a display product adopting the shift register may also be better realized.

In some embodiments, the first comb handle portion 141 is connected to any two adjacent comb tooth electrodes in the first comb tooth portion 142 to form a U-shaped unit 10, the first comb handle portion 141 is connected to any two adjacent comb tooth electrodes in the second comb tooth portion 143 to form a U-shaped unit 10, and the first electrode 104 includes a plurality of U-shaped units 10 connected in series. In some embodiments, the second comb handle portion 151 is connected to any two adjacent comb tooth electrodes in the third comb tooth portion 152 to form a U-shaped unit 10, the second comb handle portion 151 is connected to any two adjacent comb tooth electrodes in the fourth comb tooth portion 153 to form a U-shaped unit 10, and the second electrode 105 includes a plurality of U-shaped units 10 connected in series. In this way, an area occupied by the transistor 1 is reduced, thereby reducing an area occupied by the shift register, further reducing the frame width of the display product adopting the shift register, and thus realizing a narrow frame or an ultra-narrow frame.

In some embodiments, the first electrode 104 is a source electrode and the second electrode 105 is a drain electrode. In some embodiments, the first electrode may be a drain electrode, and the second electrode may be a source electrode.

In some embodiments, there is at least one transistor 1 in the shift register, i.e., one or a plurality of transistors 1. There is at least one first combination 2 composed of two transistors 1 of the plurality of transistors 1; in the first combination 2, the gate electrodes 101 of the transistors 1 are connected with each other, and the first electrodes 104 of the transistors 1 are connected with each other or the second electrodes 105 of the transistors 1 are connected with each other; the two transistors 1 in the first combination 2 are arranged to be axisymmetric to each other. In the present embodiment, the at least one transistor 1 includes a plurality of transistors 1.
In this way, an area occupied by the transistors 1 is reduced, thereby reducing an area occupied by the shift register, further reducing the frame width of the display product adopting the shift register, and thus realizing a narrow frame or an ultra-narrow frame.

In some embodiments, the shift register further includes one or more second transistors 3. The second transistor 3 includes a gate electrode, a gate insulating layer, an active layer, a source electrode and a drain electrode; the source electrode and the drain electrode are comb-shaped, have the same size and shape, and form an inter-digital structure. With this arrangement, not only may the second transistor 3 with the parameters for the set performances be better realized, but a narrow frame or an ultra-narrow frame of a display product adopting the shift register may also be better realized.

In some embodiments, there is at least one second combination 4 composed of two second transistors 3 of the plurality of second transistors 3; in the second combination 4, the gate electrodes of the second transistors 3 are connected with each other, and the source electrodes of the second transistors 3 are connected with each other or the drain electrodes of the second transistors 3 are connected with each other; the two second transistors 3 in the second combination 4 are arranged to be axisymmetric to each other. In the present embodiment, the one or more second transistors 3 include a plurality of second transistors 3. In this way, an area occupied by the second transistors 3 is reduced, thereby reducing an area occupied by the shift register, further reducing the frame width of the display product with the shift register, and realizing a narrow frame or an ultra-narrow frame.

Referring to FIG. 2, the shift register is designed in a 21T1C circuit configuration. Transistors M6 and M6′ each have the structure of the transistor 1 in this embodiment and form the first combination 2, and the transistors M6 and M6′ are arranged to be axisymmetric to each other. The transistor M6 has the function of controlling (pulling down) a potential at a point PD1 in the shift register; the transistor M6′ has the function of controlling (pulling down) a potential at a point PD2 in the shift register. In the 21T1C circuit of the shift register, transistors M5, M6, M6′, and M5′ are sequentially arranged along a width direction of a frame of a display screen (i.e., a direction of the frame of the display screen away from the display region); the transistors M5 and M5′ have the structure of the second transistor 3 in this embodiment. If the lengths of the comb tooth portions of the first electrodes in the transistors M6 and M6′ were all the same, and the lengths of the comb tooth portions of the second electrodes in the transistors M6 and M6′ were all the same, the width of the frame of the display screen at the positions where the transistors M5, M6, M6′, and M5′ are arranged would be greater. However, with the transistors M6 and M6′ designed with the structure of the transistor 1 in this embodiment, the width of the frame of the display screen at the positions where the transistors M5, M6, M6′, and M5′ are arranged is greatly reduced, thereby realizing a narrow frame of the display screen.

Referring to FIG. 2, the channel widths of the transistors M6 and M6′ are the same, 800 μm each, and the channel widths of the transistors M5 and M5′ are the same, 100 μm each.
In this embodiment, in order to realize a narrow frame or an ultra-narrow frame of the display screen using the shift register, a transistor having a channel width of 600 micrometers or more may be designed with the structure of the transistor 1. The channel width of a transistor refers to the length of the part of the active layer, whose orthographic projection is located in the gap between the source electrode and the drain electrode directly facing each other, extending along an extending direction of the gap. The manufacturing of the shift register adopts a conventional manufacturing process, such as a patterning process, which will not be described herein again.

The embodiments of the present disclosure also provide a shift register which differs from the above embodiment in that, referring to FIGS. 10 and 11, in the short circuit prevention region H, the first comb tooth portion 142 is adjacent to the second comb tooth portion 143, and the first gap h1 or the second gap h2 is provided between the first comb tooth portion 142 and the second comb tooth portion 143; that is, the value of the third gap a is the channel length of the first portion 11 or the channel length of the second portion 12. In the short circuit prevention region H, no comb tooth electrode of the second electrode 105 is provided between the first comb tooth portion 142 and the second comb tooth portion 143, which is equivalent to providing, in the short circuit prevention region H, a dummy U-shaped unit 13 not including the second electrode; the dummy U-shaped unit 13 does not function as a normal transistor because no channel is formed in the dummy U-shaped unit 13. With this arrangement, even if the photoresist accumulates at the dummy U-shaped unit 13 in the short circuit prevention region H during the etching of the patterns of the first and second electrodes 104 and 105, a short circuit does not occur between the first and second electrodes 104 and 105; that is, a channel short circuit does not occur in the transistor 1.

In some embodiments, in the short circuit prevention region H, the orthographic projection of the first comb handle portion 141 on the active layer 103 is an arc, and the gap between the first comb tooth portion 142 and the second comb tooth portion 143 is the first gap h1 or the second gap h2, which may be formed by using a conventional patterning process. Other structural arrangements of the shift register in this embodiment are the same as those in the above embodiments, and are not described herein again.

The embodiments of the present disclosure further provide a shift register which differs from the above embodiment in that, referring to FIGS. 12 and 13, the second electrode 105 further includes a fifth comb tooth portion 154 connected to the second comb handle portion 151. The fifth comb tooth portion 154 is located in the short circuit prevention region H, on a central line, parallel to the first comb tooth portion 142 and the second comb tooth portion 143, of the spacing region therebetween; the fifth comb tooth portion 154 is spaced from the first comb handle portion 141 by a fourth gap b; the fourth gap b is greater than the first gap h1, and the fourth gap b is greater than the second gap h2. When the first and second electrodes 104 and 105 are patterned using a conventional etching process, the photoresist easily accumulates at a side close to the first comb handle portion 141 in the short circuit prevention region H.
Since the length of the comb tooth electrode of the fifth comb tooth portion 154 is smaller than that of the third comb tooth portion 152, an opening 106 is formed in the region of the active layer 103 where the orthographic projection of the fourth gap b overlaps the orthographic projection of the active layer 103, and the active layer 103 in the region where the opening 106 is located loses the function of the channel, so that a channel short circuit is not formed. On the other hand, since the width of the fourth gap b is large, the photoresist does not remain at the fourth gap b, further avoiding a short circuit between the first electrode 104 and the second electrode 105. Other structural arrangements of the shift register in this embodiment are the same as those in the above embodiments, and are not described herein again.

In the shift register provided in the above embodiments, the short circuit prevention region is defined by the orthographic projections, on the active layer, of the first comb tooth portion and the second comb tooth portion of different comb tooth lengths together with the first comb handle portion and the second comb handle portion. When patterns of the first and second comb tooth portions of the first electrode having different lengths and the third and fourth comb tooth portions of the second electrode having different lengths are formed on the active layer through a conventional patterning process, the photoresist is thereby prevented from accumulating in the channel formed between the comb tooth portions of the transistor having different lengths, parts of the film layer forming the patterns of the first and second electrodes within the channel are prevented from remaining during the etching process, and a short circuit between the first electrode and the second electrode in the channel (that is, a channel short circuit) is avoided. Finally, it is ensured that the signal output of the shift register is normal, and the defects of horizontal striations and the like of a display product adopting the shift register during displaying are avoided.

In a second aspect, an embodiment of the present disclosure provides a gate driving circuit including a plurality of the shift registers of any of the above embodiments; the plurality of shift registers are cascaded. By adopting the shift register of any of the above embodiments, the area occupied by the gate driving circuit may be reduced, so that a narrow frame or an ultra-narrow frame of a display product adopting the gate driving circuit may be realized, a channel short circuit of the gate driving circuit may be avoided, and normal display of the display product adopting the gate driving circuit may be ensured.

In a third aspect, an embodiment of the present disclosure provides a display panel including the gate driving circuit of the foregoing embodiments. In some embodiments, the display panel further includes an array substrate; the array substrate includes a display region and a frame region, the frame region surrounding the periphery of the display region; the gate driving circuits are arranged on the array substrate and respectively positioned in the frame region at two opposite sides of the display region. The gate driving circuits respectively positioned in the frame regions at the two opposite sides of the display region may realize dual-side driving of each row of pixels in the display region, so that the display brightness of the display panel is more uniform and the display effect is better.
Alternatively, the gate driving circuits respectively located in the frame region at the two opposite sides of the display region may also achieve single-side driving of each row of pixels in the display region. In some embodiments, the gate driving circuits are disposed on the array substrate and located in the frame region at a side of the display region. The gate driving circuits positioned in a frame region at a side of the display region may realize single-side driving of each row of pixels in the display region. By adopting the gate driving circuit in the embodiment, the frame width of the display panel may be reduced, so that the narrow frame or the ultra-narrow frame of the display panel may be realized, poor display of the display panel caused by the channel short circuit in the gate driving circuit may be avoided, and normal display of the display panel may be ensured. The display panel provided by the embodiment of the present disclosure may be any product or component with a display function, such as an LCD panel, an LCD television, a monitor, a mobile phone, a navigator and the like. It should be understood that the above embodiments are merely exemplary embodiments adopted to explain the principles of the present disclosure, and the present disclosure is not limited thereto. It will be apparent to one of ordinary skill in the art that various changes and modifications may be made therein without departing from the spirit and scope of the present disclosure, and such changes and modifications also fall within the scope of the present disclosure. | 32,445 |
11862062 | DETAILED DESCRIPTION

Hereinafter, various example embodiments will be described in greater detail with reference to the accompanying drawings. When describing the example embodiments with reference to the accompanying drawings, like reference numerals refer to like elements, and a repeated description thereof may not be provided.

When a display device operates even in case no user is present around the display device, unnecessary power consumption may occur. According to various example embodiments, a display device may minimize and/or reduce power consumption by controlling a power mode of the display device based on a motion around the display device.

FIG. 1 is a diagram illustrating an example of a communication environment between a display device 110 and a wireless router 120 according to various embodiments. Referring to FIG. 1, a wireless router 120 may be located near a display device 110, and wireless communication may be performed between the display device 110 and the wireless router 120 through a wireless signal 101. The wireless router 120 may include a wired Internet connection, and may connect the display device 110 to the Internet using the wired Internet connection and a wireless communication connection. The wireless communication may include wireless fidelity (Wi-Fi), and the wireless signal 101 may be a Wi-Fi signal.

The display device 110 may retrieve multipath channel characteristic data based on the wireless signal 101, and may control the display device 110 using the multipath channel characteristic data. The multipath channel characteristic data may represent a channel status characteristic of a multipath. For example, the multipath channel characteristic data may include channel status information (CSI) data. The display device 110 may measure a motion around the display device 110 using the multipath channel characteristic data, and may control the display device 110 based on the measured motion. For example, when there is no motion around the display device 110, the display device 110 may set a power mode of the display device 110 to a power saving mode.

In case a user 102 uses the display device 110, for example, when the user 102 is watching video content on the display device 110, a motion may be detected around the display device 110. On the other hand, in case the user 102 does not use the display device 110, a motion around the display device 110 may not be detected. In this case, unnecessary power consumption may be reduced by operating the display device 110 in the power saving mode or shutting off the power of the display device 110. For example, the power saving mode may include at least one of reducing brightness of a screen of the display device 110, reducing a volume level of the display device 110, turning off the screen of the display device 110, muting the sound of the display device 110, and shutting off the power of the display device 110.

FIG. 2 is a diagram illustrating an example operation related to measuring a motion using multipath channel characteristic data, according to various embodiments. Referring to FIG. 2, in operation 210, a display device (for example, the display device 110, a display device 1100, and a display device 1200) may retrieve multipath channel characteristic data. The display device may retrieve the multipath channel characteristic data based on a wireless signal transmitted by a wireless router (for example, the wireless router 120). The wireless signal may be a Wi-Fi signal.
The multipath channel characteristic data may represent a channel frequency response for each orthogonal frequency division multiplexing (OFDM) subcarrier. The display device may retrieve the multipath channel characteristic data from a response signal of the wireless router to a response request signal of the display device, and may extract an amplitude for the frequency of each subcarrier from the multipath channel characteristic data.

In operation 220, the display device may perform preprocessing on the multipath channel characteristic data. For example, the preprocessing may include at least one of removing a data gap of a frequency which does not have an amplitude value among the frequencies of the subcarriers, and removing an outlier from the amplitude data.

In operation 230, the display device may calculate a similarity for each time period of the multipath channel characteristic data based on the amplitude data for each frequency of a subcarrier. The multipath channel characteristic data may be divided by time periods, and the similarity by the time periods may represent a similarity between multipath channel characteristic data of adjacent periods. For example, the similarity by the time periods may include an autocorrelation function (ACF). In case the preprocessing is performed through operation 220, the display device may calculate the similarity based on a result of the preprocessing.

The display device may determine representative similarity values for each reference time through the similarity calculation. Here, the reference time may correspond to one of the sampling time points (for example, a time point when a response signal is received) of the multipath channel characteristic data. For example, the display device may determine a first similarity value of a first frequency at a W+1-th time point by calculating a similarity between first amplitude data of the first frequency at a first time point to a W-th time point within a first window and second amplitude data of the first frequency at a second time point to the W+1-th time point within a second window. The display device may determine similarity values of other frequencies, such as a second frequency, in a similar manner. The display device may determine a representative similarity value at the W+1-th time point based on the first similarity value of the first frequency at the W+1-th time point and a second similarity value of the second frequency at the W+1-th time point. For example, the representative similarity value may correspond to a statistical value (for example, an average value) of the similarity values. The display device may determine representative similarity values at other time points, such as a W+2-th time point, in a similar manner.

In operation 240, the display device may measure a motion around the display device. The display device may measure the motion based on a comparison result between the representative similarity values and a threshold. For example, the display device may determine that there is a motion in a period in which the representative similarity value is greater than the threshold, and may determine that there is no motion in a period in which the representative similarity value is less than the threshold. The display device may adaptively adjust the threshold based on a distribution of the representative similarity values. The display device may control the display device based on the measured motion.
FIG. 3 is a diagram illustrating an example operation of retrieving multipath channel characteristic data, according to various embodiments. Referring to FIG. 3, a display device 320 may transmit a response request signal to a wireless router 310, and the wireless router 310 may transmit a response signal to the display device 320 in response to the response request signal. The display device 320 may correspond to the display device 110, the display device 1100, and the display device 1200, and the wireless router 310 may correspond to the wireless router 120. The display device 320 may retrieve multipath channel characteristic data based on the response signal.

FIG. 4 is a diagram illustrating an example of amplitude data based on multipath channel characteristic data according to various embodiments. The amplitude data may represent an amplitude for each frequency at each reference time. A display device (for example, the display device 110, the display device 1100, and the display device 1200) may retrieve the multipath channel characteristic data at each reference time, and may generate the amplitude data by extracting an amplitude for each frequency from the multipath channel characteristic data. The amplitude data may have a data structure as shown in Table 410. For example, fi may denote a frequency of a subcarrier having an index i, n may denote a size of a window for calculating a similarity, and a may denote an amplitude. For example, i may have a value between 1 and F, where F denotes a total number of subcarriers, and t may denote a reference time. For example, a1 may denote an amplitude of multipath channel characteristic data retrieved at a reference time t=0. The amplitude data may be represented by graph 420. In graph 420, the horizontal axis may represent time and the vertical axis may represent amplitude, and each frequency may be discriminated from another by color.

FIG. 5 is a diagram including graphs illustrating an example result of preprocessing on multipath channel characteristic data according to various embodiments. Referring to FIG. 5, graphs 510, 520 and 530 (which may be referred to as graphs 510 to 530) may represent amplitude data according to the preprocessing. In graphs 510 to 530, the horizontal axis may represent time and the vertical axis may represent amplitude. Amplitude data shown in graph 520 may be derived by removing a data gap 511 from amplitude data shown in graph 510, and amplitude data shown in graph 530 may be derived by removing an outlier 521 from the amplitude data shown in graph 520. Only one of removing the data gap 511 and removing the outlier 521 may be performed.

Graph 510 may represent raw amplitude data of a frequency domain. Graph 420 of FIG. 4 may represent raw amplitude data of a time domain view, and graph 510 may be obtained by converting the amplitude data shown in graph 420 into a frequency domain. The amplitude data may include the data gap 511, as shown in graph 510. The data gap 511 may represent a phenomenon in which there is no amplitude value in a predetermined frequency band. The data gap 511 may occur in a predetermined frequency band based on a characteristic (for example, a modulation method) of a wireless router (for example, the wireless router 120). The data gap 511 may decrease the accuracy of measuring a motion.
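To make the amplitude table of FIG. 4 / Table 410 concrete, here is a minimal Python sketch that builds an F x T amplitude matrix from complex CSI vectors; the CSI source, the synthetic data, and the function name are hypothetical stand-ins, not part of the patent.

```python
# A sketch of building the per-subcarrier amplitude matrix described
# above from sampled CSI vectors; names and data are illustrative.
import numpy as np

def amplitude_matrix(csi_samples: list[np.ndarray]) -> np.ndarray:
    """csi_samples: one complex vector of length F (subcarriers) per
    reference time. Returns an F x T real matrix of amplitudes, where
    column t holds the amplitudes of the sample retrieved at time t."""
    # |H(f)| per subcarrier and sample; stacking over time gives Table 410.
    return np.abs(np.stack(csi_samples, axis=1))

# Example with synthetic data: F = 256 subcarriers, T = 100 samples.
rng = np.random.default_rng(0)
samples = [rng.normal(size=256) + 1j * rng.normal(size=256)
           for _ in range(100)]
A = amplitude_matrix(samples)   # shape (256, 100)
```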
The display device (for example, the display device 110, the display device 1100, and the display device 1200) may remove the data gap 511 through a preprocessing operation, and may improve the accuracy of motion measurement by calculating a similarity using amplitude data from which the data gap 511 has been removed. The display device may detect the data gap 511 by scanning the frequency band of the total subcarriers, and may remove the data gap 511 by adjusting a disposition of the amplitude values. For example, FIG. 5 illustrates an example in which 256 OFDM subcarriers are used, and the data gap 511 has occurred in a frequency band of which the frequency index is 120 to 130. The display device may detect the data gap 511 in the frequency band, and may remove the data gap 511 using an amplitude value of another frequency band. For example, the display device may remove the data gap 511 by shifting the amplitude values of the frequency band of which the frequency index is 130 to 256. Accordingly, the data gap 511 may be replaced with the amplitude values of an adjacent frequency band (for example, a frequency band of which the frequency index is 130 to 140).

Graph 520 may represent a state in which the data gap 511 has been removed. The number of subcarriers in which the amplitude data is distributed may decrease as the data gap 511 is removed. In other words, the number of subcarriers of a result of the preprocessing (for example, graph 520 or graph 530) may be less than the number of subcarriers of the multipath channel characteristic data (for example, graph 510) before the preprocessing. As shown in graph 520, the amplitude data may include the outlier(s) 521. The outlier 521 may be removed during the preprocessing process. For example, the outlier 521 may be removed through a Hampel filter. However, the Hampel filter is an example, and other filters may be used. Graph 530 may represent a state in which the outlier 521 has been removed.

FIG. 6 is a diagram illustrating an example operation of deriving representative similarity values, according to various embodiments. Referring to FIG. 6, similarity values 630 may be derived by operation 620 of calculating a similarity based on amplitude data 610. In the amplitude data 610, A may denote an amplitude and k may denote a total number of subcarriers. The amplitude data 610 may correspond to a result of the preprocessing, and in case a data gap is removed through the preprocessing, k may be less than F. As described above with reference to FIG. 4, F may denote the total number of subcarriers before the preprocessing.

The similarity values 630 of each frequency with respect to a W+1-th reference time may be derived by performing operation 620 on each of the frequencies of the amplitude data 610 at the W+1-th reference time. Each frequency may be represented by fi; for example, i may have a value between 1 and k. For the i-th amplitude data of fi, a first window W1 and a second window W2 may be defined, and an i-th similarity value Si may be determined through operation 620 between amplitude data A1 to AW of the first window W1 and amplitude data A2 to AW+1 of the second window W2. As described with reference to FIG. 4, n may denote a size of a window for operation 620 of calculating a similarity. When n+1 data retrievals have been performed before operation 620, amplitude data A1 to An of fi may constitute the amplitude data A1 to AW of the first window W1, and amplitude data A2 to An+1 may constitute the amplitude data A2 to AW+1 of the second window W2.
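The two preprocessing steps described above can be sketched as follows. This assumes the data gap is removed by dropping the empty subcarrier indices so the upper band shifts down next to the lower band (as in the 256-subcarrier example), and it uses a basic Hampel filter; it is an interpretation of the text, not the patent's exact implementation.

```python
# Sketch of the preprocessing described above; parameter values and the
# exact gap-removal strategy are assumptions drawn from the example.
import numpy as np

def remove_data_gap(A: np.ndarray, gap_start: int, gap_end: int) -> np.ndarray:
    """Drop the empty subcarrier band [gap_start, gap_end) so the upper
    band shifts down next to the lower band; the result has fewer rows."""
    keep = np.r_[0:gap_start, gap_end:A.shape[0]]
    return A[keep, :]

def hampel(x: np.ndarray, half_window: int = 5, n_sigma: float = 3.0) -> np.ndarray:
    """Replace outliers with the local median (a basic Hampel filter)."""
    y = x.copy()
    for t in range(len(x)):
        lo, hi = max(0, t - half_window), min(len(x), t + half_window + 1)
        med = np.median(x[lo:hi])
        # 1.4826 scales the median absolute deviation to a sigma estimate.
        mad = 1.4826 * np.median(np.abs(x[lo:hi] - med))
        if mad > 0 and abs(x[t] - med) > n_sigma * mad:
            y[t] = med
    return y

# Example: remove the gap at subcarrier indices 120-130, then filter each
# remaining subcarrier's amplitude series over time.
# A = remove_data_gap(A, 120, 130)
# A = np.apply_along_axis(hampel, 1, A)
```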
By performing operation 620 between the amplitude data A1 to AW of fi and the amplitude data A2 to AW+1 of fi, the i-th similarity value Si at the W+1-th reference time may be determined. As the first similarity value S1 at the W+1-th reference time to the k-th similarity value Sk at the W+1-th reference time are determined, operation 640 of calculating a representative value may be performed based on a statistical value of the similarity values S1 to Sk. For example, based on an average value of the similarity values S1 to Sk at the W+1-th reference time, a representative similarity value at the W+1-th reference time may be determined. Through these operations, a representative similarity value for each reference time may be determined.

FIG. 7 is a graph illustrating an example operation of measuring a motion, according to various embodiments. Referring to FIG. 7, representative similarity values 710 and a threshold 720 are shown on graph 700. In graph 700, the horizontal axis may represent time and the vertical axis may represent a similarity value. The representative similarity values 710 may vary over time. A motion around a display device (for example, the display device 110, the display device 1100, and the display device 1200) may change a pattern of the multipath channel characteristic data, and the representative similarity values 710 may increase thereby. The display device may measure the motion based on a comparison result between the representative similarity values 710 and the threshold 720. For example, the display device may determine that there is a motion in a period in which the representative similarity values 710 are greater than the threshold 720, and may determine that there is no motion in a period in which the representative similarity values 710 are less than the threshold 720. The threshold 720 may be adaptively adjusted depending on an installation environment or a surrounding condition.

FIG. 8 is a graph illustrating an example of a threshold that is adaptively adjusted, according to various embodiments. Referring to FIG. 8, representative similarity values 810, a threshold 820, ground truth (GT) data 830, and a measurement result 840 may be displayed on graph 800. In graph 800, the horizontal axis may represent time and the vertical axis may represent a similarity value. The multipath channel characteristic data may represent a different characteristic based on a use environment and a situation of a display device (for example, the display device 110, the display device 1100 (refer to FIG. 11), and the display device 1200 (refer to FIG. 12)) and a wireless router (for example, the wireless router 120), and thus, the display device may adaptively adjust the threshold 820 based on the use environment and the situation. The threshold 820 may initially have a preset initial value, such as 1.0, and thereafter, the threshold 820 may be adjusted according to a distribution of the representative similarity values 810. For example, the display device may calculate an average value and a maximum value of the representative similarity values 810 of a corresponding time period at an adjustment interval (for example, one minute), and when the threshold 820 is greater than the average value, the display device may adjust the threshold 820 based on the maximum value. For example, the threshold 820 may be set to a value which is greater than the maximum value by 10% of the maximum value.
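Operations 620 and 640 (per-frequency windowed similarity, then an average as the representative value) can be sketched as follows. Pearson correlation between the two length-W windows is used here as a stand-in for the per-subcarrier similarity; since the text names an autocorrelation function (ACF), the exact statistic in the patent may differ.

```python
# Sketch of the windowed similarity of FIG. 6; the choice of Pearson
# correlation as the similarity statistic is an assumption.
import numpy as np

def representative_similarities(A: np.ndarray, W: int) -> np.ndarray:
    """A: k x T preprocessed amplitude matrix. For each reference time
    t >= W, compare window [t-W, t) with window [t-W+1, t+1) per
    subcarrier (operation 620), then average over the k subcarriers
    (operation 640). Returns one representative value per time point."""
    k, T = A.shape
    reps = []
    for t in range(W, T):
        vals = []
        for i in range(k):                 # operation 620 per frequency f_i
            w1 = A[i, t - W:t]             # amplitudes A1..AW (window W1)
            w2 = A[i, t - W + 1:t + 1]     # amplitudes A2..A(W+1) (window W2)
            if np.std(w1) > 0 and np.std(w2) > 0:
                vals.append(np.corrcoef(w1, w2)[0, 1])
        reps.append(np.mean(vals) if vals else 0.0)
    return np.asarray(reps)
```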
In the example shown in FIG. 8, the threshold 820 may be gradually adjusted to a smaller value according to the distribution of the representative similarity values 810 at an initial stage. Accordingly, the measurement result 840 corresponding to the GT data 830 may be derived.

FIGS. 9A, 9B, 9C and 9D (which may be referred to as FIGS. 9A to 9D) are diagrams illustrating examples of motion measuring ranges according to various embodiments. Referring to FIGS. 9A to 9D, motion measuring ranges 901, 902, 903 and 904 (which may be referred to as ranges 901 to 904) may be illustrated based on a location of a display device 910 and a wireless router 920. The display device 910 may correspond to the display device 110, the display device 1100, and the display device 1200, and the wireless router 920 may correspond to the wireless router 120. FIG. 9A may represent an environment in which the wireless router 920 is installed on a side of the display device 910, FIG. 9B may represent an environment in which the wireless router 920 is installed in front of the display device 910, FIG. 9C may represent an environment in which the wireless router 920 is installed in a room in front of the display device 910, and FIG. 9D may represent an environment in which the wireless router 920 is installed in a room behind the display device 910. The motion measuring ranges 901 to 904 may be distributed in a space around the display device 910, a space around the wireless router 920, and a space between the display device 910 and the wireless router 920.

FIG. 10 is a flowchart illustrating an example method of controlling a display device, according to various embodiments. Operations 1010 to 1050 of FIG. 10 may be performed sequentially or non-sequentially. For example, the order of operations 1010 to 1050 may be changed, and/or at least two of operations 1010 to 1050 may be performed in parallel. Operations 1010 to 1050 may be performed by at least one component (for example, a processor 1130 and a processor 1230) of a display device (for example, the display device 110, the display device 1100, and the display device 1200).

Referring to FIG. 10, in operation 1010, the display device may retrieve multipath channel characteristic data based on a wireless signal transmitted by a wireless router. In operation 1020, the display device may perform preprocessing on the multipath channel characteristic data of the retrieved wireless signal. Operation 1020 may include an operation of detecting a data gap in a first frequency band of the multipath channel characteristic data, and an operation of removing the data gap by replacing the data gap with an amplitude value of a second frequency band that is adjacent to the first frequency band. The number of subcarriers of the result data corresponding to the preprocessing result may be less than the number of subcarriers of the multipath channel characteristic data before the preprocessing. Operation 1020 may include an operation of removing an outlier from the amplitude data. In operation 1030, the display device may determine representative values at each reference time by calculating a similarity for each time period of the result data corresponding to the preprocessing result.
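The adaptive threshold behavior recapped above (FIG. 8) can be sketched as follows, assuming the rule stated earlier: start from a preset value such as 1.0 and, once per adjustment interval, if the current threshold exceeds the interval's average representative value, re-set it to 10% above the interval's maximum. The function name and exact update cadence are illustrative.

```python
# Sketch of the adaptive threshold rule described above; the cadence and
# names are assumptions consistent with the text, not the exact method.
import numpy as np

def adapt_threshold(reps: np.ndarray, interval: int,
                    initial: float = 1.0) -> np.ndarray:
    """Return the threshold in effect at each reference time."""
    thr = initial
    out = np.empty_like(reps)
    for start in range(0, len(reps), interval):
        out[start:start + interval] = thr
        chunk = reps[start:start + interval]
        if thr > chunk.mean():       # threshold sits above typical values
            thr = 1.1 * chunk.max()  # 10% above the interval maximum
    return out
```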
Operation 1030 may include an operation of determining a first similarity value of a first frequency at a W+1-th time point by calculating a similarity between first amplitude data of the first frequency at a first time point to a W-th time point within a first window of the multipath channel characteristic data and second amplitude data of the first frequency at a second time point to the W+1-th time point within a second window. Operation 1030 may include an operation of determining a representative value at the W+1-th time point based on the first similarity value of the first frequency at the W+1-th time point and a second similarity value of a second frequency at the W+1-th time point. The representative value at the W+1-th time point may be an average value based on the first similarity value and the second similarity value.

In operation 1040, the display device may determine a motion around the display device based on a change in the representative values over time. Operation 1040 may include an operation of determining a motion based on a comparison result between the representative values and a threshold. The threshold may be adaptively adjusted depending on a distribution of the representative values.

In operation 1050, the display device may control the display device based on the determined motion around the display device. Operation 1050 may include an operation of operating the display device in a power saving mode when there is no motion around the display device for a predetermined time.

FIGS. 11 and 12 are block diagrams illustrating example configurations of display devices according to various embodiments. As shown in FIG. 11, the display device 1100 may include a memory 1120, a processor (e.g., including processing circuitry) 1130, a communicator (e.g., including communication circuitry) 1150, and a sensing unit (e.g., including sensing circuitry and/or a sensor) 1191. The communicator 1150 may include various communication circuitry and receive a wireless signal transmitted by a wireless router.

According to various example embodiments, the processor 1130 may include various processing circuitry and may continuously retrieve multipath channel characteristic data based on a wireless signal, perform preprocessing on the multipath channel characteristic data of the retrieved wireless signal, determine representative values for each reference time by calculating a similarity for each time period of the result data corresponding to the preprocessing result, determine a motion around the display device based on a change in the representative values over time, and control the display device based on the determined motion around the display device.

The processor 1130 may detect a data gap in a first frequency band of the multipath channel characteristic data, as a form of performing the preprocessing, and may remove the data gap by replacing the data gap with an amplitude value of a second frequency band that is adjacent to the first frequency band. The number of subcarriers of the result data may be less than the number of subcarriers of the multipath channel characteristic data before the preprocessing. As a form of performing the preprocessing, the processor 1130 may remove an outlier from the multipath channel characteristic data.
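Operations 1040 and 1050 can be tied together in a short sketch. The motion rule (representative value above the threshold means motion) follows the text; the power-mode strings and the notion of a fixed quiet period are hypothetical placeholders rather than a real device API.

```python
# Sketch of the motion decision and power-mode control described above;
# the power-mode representation is an illustrative placeholder.
import numpy as np

def detect_motion(reps: np.ndarray, thresholds: np.ndarray) -> np.ndarray:
    """Operation 1040: motion where the representative value exceeds
    the threshold in effect at that reference time."""
    return reps > thresholds

def control_power(motion: np.ndarray, quiet_samples: int) -> str:
    """Operation 1050: enter a power saving mode when no motion was
    detected for the last 'quiet_samples' reference times."""
    if len(motion) >= quiet_samples and not motion[-quiet_samples:].any():
        return "power_saving"   # e.g. dim screen, mute, or power off
    return "normal"
```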
The processor 1130 may determine a first similarity value of a first frequency at a W+1-th time point by calculating a similarity between first amplitude data of the first frequency at a first time point to a W-th time point within a first window of the multipath channel characteristic data and second amplitude data of the first frequency at a second time point to the W+1-th time point within a second window, and may determine a representative value at the W+1-th time point based on the first similarity value of the first frequency at the W+1-th time point and a second similarity value of a second frequency at the W+1-th time point. The representative value at the W+1-th time point may be an average value based on the first similarity value and the second similarity value. The processor 1130 may measure a motion based on a comparison result between the representative values and the threshold. The threshold may be adaptively adjusted depending on a distribution of the representative values. The processor 1130 may operate the display device in a power saving mode when there is no motion around the display device for a predetermined time.

According to various example embodiments, the processor 1130 may continuously retrieve multipath channel characteristic data based on a wireless signal that is transmitted by a wireless router, perform preprocessing on the multipath channel characteristic data of the retrieved wireless signal, determine representative values for each reference time by calculating a similarity for each time period of result data corresponding to the preprocessing result, determine a motion around the display device based on a comparison result between the representative values and an adaptive threshold, and control a power mode of the display device based on the determined motion around the display device.

Not all components shown in FIG. 11 are essential. The display device 1100 may be implemented with more components than the illustrated components, or with fewer components. For example, as shown in FIG. 12, the display device 1200 may include a display 1210, a tuner 1240, a detector (e.g., including detecting circuitry) 1260, an input/output (I/O) unit (e.g., including input/output circuitry) 1270, a video processor (e.g., including video processing circuitry) 1280, an audio processor (e.g., including audio processing circuitry) 1215, an audio output unit (e.g., including audio output circuitry) 1226, and a power supply unit (e.g., including a power supply) 1290, as well as a memory 1220, the processor (e.g., including processing circuitry) 1230, a communicator (e.g., including communication circuitry) 1250, and a sensing unit (e.g., including various sensors) 1291. Hereinafter, the above-mentioned components are described in greater detail.

The processor 1230 may include various processing circuitry, may control overall operations of the display device 1200 and a flow of a signal between internal components of the display device 1200, and may process data. The processor 1230 may execute various applications and an operating system (OS) stored in the memory 1220, in response to a user input or when a preset and stored condition is satisfied.
The processor 1230 may include random access memory (RAM) configured to store data or a signal input from outside the display device 1200 or to be used as a storage corresponding to various tasks performed by the display device 1200, read-only memory (ROM) that stores a control program to control the display device 1200, and a processor. The processor 1230 may include a graphics processing unit (GPU) (not shown) to process a graphic corresponding to a video. The processor 1230 may be implemented as a system on chip (SoC) that integrates a core (not shown) and the GPU (not shown). The processor 1230 may include a single core, a dual core, a triple core, a quad core, or a multi core. The processor 1230 may include a plurality of processors. For example, the processor may be implemented as a main processor (not shown) and a sub-processor (not shown) that operates in a sleep mode. The processor 1230 may detect at least one sensed value corresponding to at least one sensor through the sensing unit 1291 including the at least one sensor, by executing one or more instructions stored in the memory 1220.

The memory 1220 may store various pieces of data, a program, or an application for driving and controlling the display device 1200 under control by the processor 1230. The memory 1220 may store data or input/output signals corresponding to driving of the video processor 1280, the display 1210, the audio processor 1215, the audio output unit 1226, the power supply unit 1290, the tuner 1240, the communicator 1250, the detector 1260, and the I/O unit 1270. The memory 1220 may store an operating system 1221 for controlling the display device 1200 and the processor 1230, an application 1222 initially provided by a manufacturer or externally downloaded, a graphical user interface (GUI) related to the application, an object (for example, an image, text, an icon, a button, and the like) for providing the GUI, user information, a document, a database, and related data. In addition, the memory 1220 may include a TV viewer module 1223 including one or more instructions to receive an input signal from a remote control device (not shown) and thereby perform channel control corresponding to the input signal, or to enter a channel scroll user interface mode when the input signal corresponds to a preset input, a text recognition module 1224 including one or more instructions to recognize information from content received from an external device (not shown), and an MBR module 1225 including one or more instructions to control a channel from an external device (not shown).

The memory 1220 may include ROM, RAM, or a memory card (for example, a micro secure digital (SD) card or a universal serial bus (USB) memory, which are not shown) mounted to the display device 1200. In addition, the memory 1220 may include non-volatile memory, volatile memory, a hard disk drive (HDD), or a solid state drive (SSD). The memory 1220 may include at least one type of storage media among a flash memory type, a hard disk type, a multimedia card micro type, a card memory type (for example, SD or extreme digital (xD) memory), RAM, static RAM, ROM, electrically erasable programmable ROM (EEPROM), programmable ROM (PROM), magnetic memory, a magnetic disk, and an optical disk.

The display 1210 may display a video included in a broadcast signal received through the tuner 1240 on a screen under control by the processor 1230. In addition, the display 1210 may display content (for example, a moving image) input through the communicator 1250 or the I/O unit 1270.
The display 1210 may output an image stored in the memory 1220 under control by the processor 1230. The display 1210 may generate a driving signal by converting an image signal, a data signal, an on-screen display (OSD) signal, and a control signal processed by the processor 1230. The display 1210 may be implemented as a plasma display panel (PDP), a liquid crystal display (LCD), an organic light-emitting diode (OLED) display, a cathode ray tube (CRT), or a flexible display, and in addition, the display 1210 may be implemented as a 3D display. In addition, the display 1210 may be used as an input device as well as an output device by being configured as a touchscreen.

The tuner 1240 may tune and select a frequency of a channel desired to be received by the display device 1200 among various radio wave elements by performing amplification, mixing, and resonance on a broadcast signal that is received by wire or wirelessly. The broadcast signal may include audio, a video, and additional information (for example, an electronic program guide (EPG)). The tuner 1240 may receive the broadcast signal from a frequency band corresponding to a channel number according to a user input (for example, a control signal received from a remote control device (not shown), that is, a channel number input, an up-down input of a channel, or a channel input on an EPG screen). The tuner 1240 may receive broadcast signals from various sources, such as terrestrial broadcast, cable broadcast, satellite broadcast, and Internet broadcast. The tuner 1240 may receive the broadcast signal from a source such as analog broadcast or digital broadcast. The broadcast signal received by the tuner 1240 may be separated into audio, video, and/or additional information by decoding (for example, audio decoding, video decoding, or additional information decoding). The separated audio, video, and/or additional information may be stored in the memory 1220 under control by the processor 1230. One or a plurality of tuners 1240 of the display device 1200 may be provided. The tuner 1240 may be implemented as all-in-one with the display device 1200, or implemented as a separate device (for example, a set-top box, which is not shown, or a tuner, which is not shown, connected to the I/O unit 1270) that includes a tuner electrically connected to the display device 1200.

The communicator 1250 may include various communication circuitry and connect the display device 1200 to an external device (for example, an audio device) (not shown) under control by the processor 1230. The processor 1230 may transmit/receive content to/from the external device (not shown) connected through the communicator 1250, may download an application from the external device (not shown), or may perform web browsing. The communicator 1250 may include one of a wireless local area network (LAN) 1251, Bluetooth 1252, and wired Ethernet 1253, corresponding to the performance and the structure of the display device 1200. In addition, the communicator 1250 may include a combination of the wireless LAN 1251, Bluetooth 1252, and the wired Ethernet 1253. In addition, the communicator 1250 may receive a control signal of a remote control device (not shown) under control by the processor 1230. The control signal may be implemented as a Bluetooth type, a radio frequency (RF) signal type, or a Wi-Fi type. In addition, the communicator 1250 may further include another form of local area communication (for example, near field communication (NFC), which is not shown, or Bluetooth low energy (BLE), which is not shown) other than Bluetooth.
The detector 1260 may include various detection circuitry to detect a voice, an image, or an interaction of a user, and may include a microphone 1261, a camera part 1262, and an optical receiver 1263. The microphone 1261 may receive an uttered voice of the user. The microphone 1261 may convert the received voice into an electrical signal and may output the electrical signal to the processor 1230. The user voice may include, for example, a voice corresponding to a menu or a function of the display device 1200.

The camera part 1262 may obtain an image frame, such as a still image or a moving image. An image captured by an image sensor may be processed by the processor 1230 or a separate image processor (not shown). The image frame processed by the camera part 1262 may be stored in the memory 1220 or may be transmitted to the outside through the communicator 1250. Two or more camera parts 1262 may be provided based on the configuration of the display device 1200.

The optical receiver 1263 may receive an optical signal (including a control signal) received from an external remote control device (not shown). The optical receiver 1263 may receive an optical signal corresponding to a user input (for example, a touch, a press, a touch gesture, a voice, or a motion) from a remote control device (not shown). A control signal may be extracted from the received optical signal under control by the processor 1230. For example, the optical receiver 1263 may receive a control signal corresponding to a channel up/down button for changing a channel from the remote control device (not shown).

The I/O unit 1270 may include various input/output circuitry and receive video (for example, a moving image), audio (for example, voice or music), and additional information (for example, an EPG) from the outside of the display device 1200 under control by the processor 1230. The I/O unit 1270 may include at least one of a high-definition multimedia interface (HDMI) port 1271, a component jack 1272, a PC port 1273, and a USB port 1274. The I/O unit 1270 may include any combination of the HDMI port 1271, the component jack 1272, the PC port 1273, and the USB port 1274. An external image providing device (not shown) may be connected through the HDMI port 1271.

The video processor 1280 may include various video processing circuitry and process video data received by the display device 1200. In the video processor 1280, various image processing, such as decoding, scaling, noise filtering, frame rate conversion, and resolution conversion, may be performed on the video data.

A graphic processor 1281 may include various graphics processing circuitry and generate a screen including various objects, such as an icon, an image, and text, using an arithmetic unit (not shown) and a renderer (not shown). The arithmetic unit (not shown) may calculate an attribute value, such as a color, a size, a shape, or a coordinate value, to display each object based on a layout of a screen using a user input that is detected by the detector 1260. The renderer (not shown) may generate screens in various layouts including an object based on the attribute value calculated by the arithmetic unit (not shown). The screen generated by the renderer (not shown) may be displayed on a display area of the display 1210.

The audio processor 1215 may include various audio processing circuitry and process audio data. The audio processor 1215 may perform various processing on the audio data, such as decoding, amplification, and noise filtering.
In addition, the audio processor 1215 may include a plurality of audio processing modules to process audio corresponding to a plurality of contents.

The audio output unit 1226 may include various audio output circuitry and output audio included in the broadcast signal received through the tuner 1240 under control by the processor 1230. The audio output unit 1226 may output audio (for example, voice or sound) input through the communicator 1250 or the I/O unit 1270. In addition, the audio output unit 1226 may output audio stored in the memory 1220 under control by the processor 1230. The audio output unit 1226 may include at least one of a speaker 1227, a headphone output terminal 1228, and a Sony/Philips digital interface (S/PDIF) output terminal 1229. The audio output unit 1226 may include any combination of the speaker 1227, the headphone output terminal 1228, and the S/PDIF output terminal 1229.

The power supply unit 1290 may include a power supply and supply power input from an external power source to the components inside the display device 1200 under control by the processor 1230. In addition, the power supply unit 1290 may supply power output from one or more batteries (not shown) placed inside the display device 1200 to the components inside the display device 1200 under control by the processor 1230.

The sensing unit 1291 may include various sensors to sense a state of the display device 1200 or a state around the display device 1200, and may provide the information obtained by sensing to the processor 1230. The sensing unit 1291 may include at least one of a magnetic sensor 1292, an acceleration sensor 1293, a temperature/humidity sensor 1294, an IR sensor 1295, a gyroscope sensor 1296, a position sensor (for example, global positioning system (GPS)) 1297, an atmospheric pressure sensor 1298, a proximity sensor 1299, and an RGB sensor 1301 (for example, an illuminance sensor); however, the example is not limited thereto. Since one skilled in the art may intuitively infer a function of each sensor from its name, a detailed description thereof is not provided here. The sensing unit 1291 may sense an external impact applied to the display device 1200.

In addition, a separate external device (for example, a set-top box, which is not shown) including the tuner 1240 may be electrically connected to the display device 1200 including the display 1210. In addition, the display device 1200 may be implemented as an analog TV, a digital TV, a 3D-TV, a smart TV, an LED TV, an OLED TV, a plasma TV, or a monitor; however, one skilled in the art will understand that the example is not limited thereto.

Moreover, the illustrated block diagram of the display device 1200 is a block diagram of an example embodiment. Each component of the block diagram may be integrated, added, or omitted based on actually implemented specifications of the display device 1200. That is, two or more components may be combined into one component, or one component may be divided into two or more components, as necessary. In addition, a function performed by each block is for describing example embodiments, and a detailed operation thereof or a device does not limit the scope of the present disclosure.

It should be understood that various example embodiments of the present disclosure and the terms used therein are not intended to limit the technological features set forth herein to particular embodiments and include various changes, equivalents, or replacements for a corresponding embodiment.
In connection with the description of the drawings, like reference numerals may be used for similar or related components. It is to be understood that a singular form of a noun corresponding to an item may include one or more of the things, unless the relevant context clearly indicates otherwise. As used herein, each of the phrases "A or B", "at least one of A and B", "at least one of A or B", "A, B or C", "at least one of A, B and C", and "A, B, or C" may include any one of the items listed together in the corresponding one of the phrases, or all possible combinations thereof. Terms such as "first" and "second" may simply be used to distinguish a component from other components in question, and do not limit the components in other aspects (e.g., importance or order). It is to be understood that if an element (e.g., a first element) is referred to, with or without the term "operatively" or "communicatively", as "coupled with," "coupled to," "connected with," or "connected to" another element (e.g., a second element), the element may be coupled with the other element directly (e.g., wiredly), wirelessly, or via a third element.

As used in connection with various example embodiments of the disclosure, the term "module" may include a unit implemented in hardware, software, or firmware, or any combination thereof, and may interchangeably be used with other terms, for example, "logic," "logic block," "part," or "circuitry". A module may be a single integral component, or a minimum unit or part thereof, adapted to perform one or more functions. For example, according to an example embodiment, the module may be implemented in a form of an application-specific integrated circuit (ASIC).

Various example embodiments as set forth herein may be implemented as software (for example, the OS 1221 or the application 1222) including one or more instructions that are stored in a storage medium (for example, the memory 1120 or the memory 1220) that is readable by a machine (for example, the display device 110, the display device 1100, or the display device 1200). For example, a processor (for example, the processor 1130 or the processor 1230) of the machine (for example, the display device 110, the display device 1100, or the display device 1200) may invoke at least one of the one or more instructions stored in the storage medium, and execute it. This allows the machine to be operated to perform at least one function according to the at least one instruction invoked. The one or more instructions may include code generated by a compiler or code executable by an interpreter. The machine-readable storage medium may be provided in the form of a non-transitory storage medium. The "non-transitory" storage medium is a tangible device and may not include a signal (e.g., an electromagnetic wave), but this term does not differentiate between where data is semi-permanently stored in the storage medium and where the data is temporarily stored in the storage medium.

According to an example embodiment, a method according to various example embodiments of the disclosure may be included and provided in a computer program product. The computer program product may be traded as a product between a seller and a buyer.
The computer program product may be distributed in the form of a machine-readable storage medium (e.g., compact disc read only memory (CD-ROM)), or be distributed (e.g., downloaded or uploaded) online via an application store (e.g., PlayStore™), or between two user devices (e.g., smartphones) directly. If distributed online, at least part of the computer program product may be temporarily generated or at least temporarily stored in the machine-readable storage medium, such as memory of the manufacturer's server, a server of the application store, or a relay server.

According to various example embodiments, each component (e.g., a module or a program) of the above-described components may include a single entity or multiple entities, and some of the multiple entities may be separately disposed in different components. According to various example embodiments, one or more of the above-described components or operations may be omitted, or one or more other components or operations may be added. Alternatively or additionally, a plurality of components (e.g., modules or programs) may be integrated into a single component. In such a case, according to various example embodiments, the integrated component may still perform one or more functions of each of the plurality of components in the same or similar manner as they are performed by a corresponding one of the plurality of components before the integration. According to various example embodiments, operations performed by the module, the program, or another component may be carried out sequentially, in parallel, repeatedly, or heuristically, or one or more of the operations may be executed in a different order or omitted, or one or more other operations may be added.

While the disclosure has been illustrated and described with reference to various example embodiments, it will be understood that the various example embodiments are intended to be illustrative, not limiting. It will be further understood by those skilled in the art that various changes in form and detail may be made without departing from the true spirit and full scope of the disclosure, including the appended claims and their equivalents. It will also be understood that any of the embodiment(s) described herein may be used in conjunction with any other embodiment(s) described herein. | 46,345 |
11862063 | DETAILED DESCRIPTION OF CERTAIN INVENTIVE EMBODIMENTS

Recently, demand for a display device having a circular substrate has increased. The circular display device has a pixel arrangement different from a typical display device, such that a new layout of a peripheral area is required. In typical displays, the arrangement of circuitry of the driving portion is not efficient.

Hereinafter, the described technology will be explained in detail with reference to the accompanying drawings. In this disclosure, the term "substantially" includes the meanings of completely, almost completely, or to any significant degree under some applications and in accordance with those skilled in the art. Moreover, "formed on" can also mean "formed over." The term "connected" can include an electrical connection.

FIG. 1 is a plan view illustrating a circular display substrate according to an exemplary embodiment. FIG. 2 is an enlarged view illustrating area A of FIG. 1. Referring to FIGS. 1 and 2, a circular display substrate includes a pixel area PA and a peripheral area SA. The circular display substrate can display an image in the pixel area PA. For example, the circular display substrate is a liquid crystal display substrate, an OLED display substrate, or the like.

The pixel area PA has a substantially circular shape. The pixel area PA includes a plurality of pixels P to display an image. The pixels P can be arranged in a matrix form along a first direction D1 and a second direction D2 in the pixel area PA. The second direction D2 crosses the first direction D1. The pixel P is electrically connected to a scan line SL and a data line DL. The scan line SL extends in the first direction D1. The data line DL extends in the second direction D2 to cross the first direction D1.

The peripheral area SA is adjacent to the pixel area PA. The peripheral area SA can surround the pixel area PA, so that it can form a ring. A driving portion to drive the pixels P is formed in the peripheral area SA. The driving portion (or driving circuit) includes a scan driving portion (or scan driving circuit) SDR and a data driving portion (or data driving circuit) DDR. The scan driving portion SDR sequentially provides scan signals to the pixels P. The data driving portion DDR provides data signals to the pixels P. The scan driving portion SDR includes a plurality of scan circuits 100. The data driving portion DDR includes a plurality of data circuits 200.

The scan circuit 100 is formed in the peripheral area SA. The scan circuit 100 is electrically connected to the scan line SL in the pixel area PA through a scan connecting line 110 which is formed in the peripheral area SA. The data circuit 200 is formed in the peripheral area SA. The data circuit 200 is electrically connected to the data line DL in the pixel area PA through a data connecting line 210 which is formed in the peripheral area SA.

Referring again to FIG. 2, a boundary between the pixel area PA and the peripheral area SA is substantially circular. Thus, a portion of the boundary has an arc shape. The boundary is formed along an arc direction D3, and the scan circuits 100 and the data circuits 200 are formed along the arc direction D3. For example, the scan circuits 100 and the data circuits 200 are alternately formed along the arc direction D3. The scan circuit 100 extends in a fourth direction (or peripheral direction) D4 which is substantially perpendicular to or crossing the arc direction D3.
The fourth direction D4 is substantially perpendicular to the arc direction D3, so that the fourth direction D4 can vary according to a position of the scan circuit 100. For example, the scan circuit 100 overall has a width in a fifth direction D5 which is substantially perpendicular to the fourth direction D4, and extends in the fourth direction D4, so that the scan circuit 100 is substantially rectangular.

The scan connecting line 110 is formed in the peripheral area SA. The scan connecting line 110 electrically connects the scan circuit 100 to the scan line SL in the pixel area PA. The scan connecting line 110 extends in the fourth direction D4. Thus, the scan connecting line 110 extends in a direction which is substantially perpendicular to the boundary between the pixel area PA and the peripheral area SA.

The data circuit 200 extends in the fourth direction D4 which is substantially perpendicular to the arc direction D3. The fourth direction D4 is substantially perpendicular to the arc direction D3, so that the fourth direction D4 can vary according to a position of the data circuit 200. For example, the data circuit 200 has a width in the fifth direction D5 and extends in the fourth direction D4, so that the data circuit 200 has a substantially rectangular shape.

The data connecting line 210 is formed in the peripheral area SA. The data connecting line 210 electrically connects the data circuit 200 to the data line DL in the pixel area PA. The data connecting line 210 extends in the fourth direction D4. Thus, the data connecting line 210 extends in a direction which is perpendicular to the boundary of the pixel area PA and the peripheral area SA.

Accordingly, the scan circuits 100 and the data circuits 200 are formed along the boundary of the pixel area PA and the peripheral area SA, and each of the scan circuits 100 and the data circuits 200 extends in a substantially perpendicular direction to the boundary. Thus, the efficiency of the circuit layout in the peripheral area SA can be improved. In addition, the scan connecting line 110 extends in the perpendicular direction to the boundary, so that the scan lines SL in the pixel area PA and the scan circuits 100 can be connected to each other substantially uniformly. Thus, resistive load due to a wiring length difference can be reduced. In addition, the data connecting line 210 extends in the perpendicular direction to the boundary, so that the data lines DL in the pixel area PA and the data circuits 200 can be connected to each other substantially uniformly. Thus, resistive load due to a wiring length difference can be reduced.

FIG. 3A is a partially enlarged view illustrating a scan circuit and a data circuit of FIG. 2. FIG. 3B is a cross-sectional view taken along line I-I′ of FIG. 3A. Referring to FIGS. 3A and 3B, the scan circuit 100 includes a scan peripheral transistor STR and a first pattern 112. The data circuit 200 includes a data peripheral transistor DTR and a second pattern 212. The circular display substrate includes a base substrate 10, an active pattern, a first insulation layer 20, a gate metal pattern, a second insulation layer 30, a data metal pattern, and a third insulation layer 40. The base substrate 10 can include a transparent insulation substrate. For example, the base substrate 10 includes a glass substrate, a quartz substrate, a transparent resin substrate, etc.
Examples of the transparent resin substrate for the base substrate 10 include polyimide-based resin, acryl-based resin, polyacrylate-based resin, polycarbonate-based resin, polyether-based resin, sulfonic acid containing resin, polyethyleneterephthalate-based resin, etc.

Although not shown in the figures, at least one buffer layer can be formed on the base substrate 10. For example, the buffer layer prevents diffusion of metal atoms and/or impurities from the base substrate 10. Additionally, the buffer layer can adjust the heat transfer rate of a successive crystallization process for the active pattern, to thereby obtain a substantially uniform active pattern. When the base substrate 10 has a relatively irregular surface, the buffer layer can improve the flatness of the surface of the base substrate 10. The buffer layer can be formed of a silicon compound. For example, the buffer layer includes silicon oxide (SiOx), silicon nitride (SiNx), silicon oxynitride (SiOxNy), silicon oxycarbide (SiOxCy), silicon carbon nitride (SiCxNy), etc. These can be used alone or in a mixture thereof.

The active pattern is formed on the base substrate 10. In one example embodiment, the active pattern is formed of silicon (Si). In another example embodiment, the active pattern includes a semiconductor oxide including a binary compound (ABx), a ternary compound (ABxCy) and/or a quaternary compound (ABxCyDz). For example, the active pattern is formed of indium (In), zinc (Zn), gallium (Ga), tin (Sn), titanium (Ti), aluminum (Al), hafnium (Hf), zirconium (Zr) and/or magnesium (Mg). The active pattern can include a scan peripheral active area SA, a scan peripheral source area SS and a scan peripheral drain area SD of the scan peripheral transistor STR. In addition, the active pattern can include a data peripheral active area DA, a data peripheral source area DS and a data peripheral drain area DD of the data peripheral transistor DTR.

The first insulation layer 20 can be formed on and cover the active pattern. The first insulation layer 20 can be formed by a chemical vapor deposition (CVD) process, a spin coating process, a plasma enhanced chemical vapor deposition (PECVD) process, a sputtering process, a vacuum deposition process, a high density plasma-chemical vapor deposition (HDP-CVD) process, a printing process, etc. For example, the first insulation layer 20 is formed of silicon oxide (SiOx), silicon nitride (SiNx), silicon oxynitride (SiOxNy), aluminum oxide (AlOx), tantalum oxide (TaOx), hafnium oxide (HfOx), zirconium oxide (ZrOx), titanium oxide (TiOx), etc. These can be used alone or in a combination thereof. In addition, the first insulation layer 20 can have a single layer structure or a multi layer structure formed of silicon oxide and/or silicon nitride.

In example embodiments, the first insulation layer 20 is substantially uniformly formed on the base substrate 10 along a profile of the active pattern. Here, the first insulation layer 20 can have a substantially small thickness, such that a stepped portion can be generated at a portion of the first insulation layer 20 adjacent to the active pattern. In some example embodiments, the first insulation layer 20 has a relatively large thickness sufficient to cover the active pattern, so that the first insulation layer 20 has a substantially level surface.
The gate metal pattern includes the scan line SL in the pixel area (referring to PA of FIG. 2), the scan connecting line 110 in the peripheral area SA, the first pattern 112 of the scan circuit 100, a scan peripheral gate electrode SG of the scan peripheral transistor STR, and a data peripheral gate electrode DG of the data peripheral transistor DTR. The gate metal pattern can be formed on the first insulation layer 20. In some example embodiments, a conductive layer (not illustrated) is formed on the first insulation layer 20, and then the conductive layer is partially etched by a photolithography process or an etching process using an additional etching mask. Hence, the gate metal pattern can be provided on the first insulation layer 20. The conductive layer can be formed by a printing process, a sputtering process, a CVD process, a pulsed laser deposition (PLD) process, a vacuum evaporation process, an atomic layer deposition (ALD) process, etc.

The gate metal pattern can be formed of metal, alloy, conductive metal oxide, a transparent conductive material, etc. For example, the gate metal pattern is formed of aluminum (Al), alloy containing aluminum, aluminum nitride (AlNx), silver (Ag), alloy containing silver, tungsten (W), tungsten nitride (WNx), copper (Cu), alloy containing copper, nickel (Ni), alloy containing nickel, chrome (Cr), chrome nitride (CrNx), molybdenum (Mo), alloy containing molybdenum, titanium (Ti), titanium nitride (TiNx), platinum (Pt), tantalum (Ta), tantalum nitride (TaNx), neodymium (Nd), scandium (Sc), strontium ruthenium oxide (SRO), zinc oxide (ZnOx), indium tin oxide (ITO), tin oxide (SnOx), indium oxide (InOx), gallium oxide (GaOx), indium zinc oxide (IZO), etc. These can be used alone or in a combination thereof. In example embodiments, the gate metal layer has a single layer structure or a multi layer structure, which can include a metal film, an alloy film, a metal nitride film, a conductive metal oxide film and/or a transparent conductive film.

The scan line SL is electrically connected to the pixels (refer to P of FIG. 2) in the pixel area. The first pattern 112 is formed in the peripheral area, and forms a portion of the scan circuit 100. For example, the first pattern 112 is electrically connected to the scan peripheral drain area SD of the scan peripheral transistor STR through a contact hole formed through the first insulation layer 20. The first pattern 112 includes a first side 112a which extends in the fourth direction D4 and a second side 112b which extends in the fifth direction D5 which is substantially perpendicular to the fourth direction D4. The fourth direction D4 is substantially perpendicular to a boundary of the pixel area and the peripheral area. Thus, the scan circuit 100 includes a circuit pattern having sides which extend along the fourth direction D4 and the fifth direction D5. Accordingly, the scan circuit 100 can be substantially rectangular.

The scan connecting line 110 is formed in the peripheral area, and electrically connects the first pattern 112 of the scan circuit 100 to the scan line SL. The scan connecting line 110 extends in the fourth direction D4 which is substantially perpendicular to the boundary. The scan peripheral gate electrode SG overlaps the scan peripheral active area SA. The scan peripheral active area SA, the scan peripheral source area SS and the scan peripheral drain area SD are included in the scan peripheral transistor STR. The data peripheral gate electrode DG overlaps the data peripheral active area DA.
The data peripheral active area DA, the data peripheral source area DS and the data peripheral drain area DD are included in the data peripheral transistor DTR. Thus, the scan line SL in the pixel area, the first pattern 112 in the peripheral area, the scan peripheral gate electrode SG of the scan peripheral transistor STR, and the data peripheral gate electrode DG of the data peripheral transistor DTR can be formed from the same metal layer by patterning the metal layer.

The second insulation layer 30 is formed on the first insulation layer 20 on which the gate metal pattern is formed. The second insulation layer 30 having a substantially uniform thickness can be formed on the first insulation layer 20 along a profile of the gate metal pattern. Thus, a stepped portion can be generated at a portion of the second insulation layer 30 adjacent to the gate metal pattern. The second insulation layer 30 can be formed using a silicon compound. For example, the second insulation layer 30 is formed of silicon oxide, silicon nitride, silicon oxynitride, silicon oxycarbide and/or silicon carbon nitride. These can be used alone or in a mixture thereof. The second insulation layer 30 can be obtained by a spin coating process, a CVD process, a PECVD process, an HDP-CVD process, an LPCVD process, etc. In example embodiments, the second insulation layer 30 has a single layer structure or a multi layer structure, which includes a silicon oxide film, a silicon nitride film, a silicon oxynitride film, a silicon oxycarbide film and/or a silicon carbon nitride film.

The data metal pattern is formed on the second insulation layer 30. The data metal pattern includes the data line DL in the pixel area, the data connecting line 210 in the peripheral area, and the second pattern 212 of the data circuit 200. In one example embodiment, a conductive layer (not illustrated) is formed on the second insulation layer 30, and then the conductive layer is partially etched by a photolithography process or an etching process using an additional etching mask. Hence, the data metal pattern can be provided on the second insulation layer 30. The conductive layer can be formed by a printing process, a sputtering process, a CVD process, a pulsed laser deposition (PLD) process, a vacuum evaporation process, an atomic layer deposition (ALD) process, etc.

The data metal pattern can be formed of metal, alloy, conductive metal oxide, a transparent conductive material, etc. For example, the data metal pattern is formed of aluminum (Al), alloy containing aluminum, aluminum nitride (AlNx), silver (Ag), alloy containing silver, tungsten (W), tungsten nitride (WNx), copper (Cu), alloy containing copper, nickel (Ni), alloy containing nickel, chrome (Cr), chrome nitride (CrNx), molybdenum (Mo), alloy containing molybdenum, titanium (Ti), titanium nitride (TiNx), platinum (Pt), tantalum (Ta), tantalum nitride (TaNx), neodymium (Nd), scandium (Sc), strontium ruthenium oxide (SRO), zinc oxide (ZnOx), indium tin oxide (ITO), tin oxide (SnOx), indium oxide (InOx), gallium oxide (GaOx), indium zinc oxide (IZO), etc. These can be used alone or in a combination thereof. In example embodiments, the data metal layer has a single layer structure or a multi layer structure, which includes a metal film, an alloy film, a metal nitride film, a conductive metal oxide film and/or a transparent conductive film.

The data line DL is electrically connected to the pixel in the pixel area. The second pattern 212 is formed in the peripheral area and is included in a portion of the data circuit 200.
For example, the second pattern 212 is electrically connected to the data peripheral drain area DD of the data peripheral transistor DTR through a contact hole formed through the first and second insulation layers 20 and 30. The second pattern 212 includes a first side 212a which extends along the fourth direction D4 and a second side 212b which extends in the fifth direction D5 which is substantially perpendicular to the fourth direction D4. The fourth direction D4 is substantially perpendicular to a boundary between the pixel area and the peripheral area. Thus, the data circuit 200 includes a circuit pattern having sides which extend along the fourth direction D4 and the fifth direction D5. Accordingly, the data circuit 200 can be substantially rectangular.

The data connecting line 210 is formed in the peripheral area, and electrically connects the second pattern 212 of the data circuit 200 to the data line DL. The data connecting line 210 extends in the fourth direction D4 which is substantially perpendicular to the boundary. The data connecting line 210 can have substantially the same length as the scan connecting line 110, so that the data circuit 200 and the scan circuit 100 can be located at substantially the same distance from the boundary between the pixel area and the peripheral area. The distance between the data circuit 200 and the scan circuit 100 adjacent to the data circuit 200 becomes smaller nearer the pixel area. Thus, a first distance L1 between the data circuit 200 and the scan circuit 100, measured close to the pixel area, is less than a second distance L2 between the data circuit 200 and the scan circuit 100, measured far from the pixel area. The data line DL and the second pattern 212 can be formed from the same metal layer by patterning the same metal layer.

The third insulation layer 40 is formed on the second insulation layer 30 on which the data metal pattern is formed. The third insulation layer 40 can have a single-layered structure or a multi-layered structure including at least two insulation films. In example embodiments, a planarization process is executed on the third insulation layer 40 to enhance the flatness of the third insulation layer 40. For example, the third insulation layer 40 has a substantially level surface by a chemical mechanical polishing (CMP) process, an etch-back process, etc. The third insulation layer 40 can be formed using an organic material. For example, the third insulation layer 40 is formed of photoresist, acryl-based resin, polyimide-based resin, polyamide-based resin, siloxane-based resin, etc. These can be used alone or in a combination thereof. Alternatively, the third insulation layer 40 can be formed of an inorganic material. For example, the third insulation layer 40 is formed of silicon oxide, silicon nitride, silicon oxynitride, silicon oxycarbide, aluminum, magnesium, zinc, hafnium, zirconium, titanium, tantalum, aluminum oxide, titanium oxide, tantalum oxide, magnesium oxide, zinc oxide, hafnium oxide, zirconium oxide, etc. These can be used alone or in a mixture thereof. The third insulation layer 40 can be formed by a spin coating process, a printing process, a sputtering process, a CVD process, an ALD process, a PECVD process, an HDP-CVD process or a vacuum evaporation process, in accordance with ingredients included in the third insulation layer 40.

The circular display device according to the present example embodiment includes a plurality of circuit patterns formed in a peripheral area and along a boundary of a pixel area and the peripheral area.
Each circuit pattern has a first side extending along a fourth direction and a second side extending along a fifth direction which is substantially perpendicular to the fourth direction, so that the circuit pattern can be efficiently located along the peripheral area. Thus, the size of the peripheral area can be reduced. In addition, because the circuit patterns can be formed at substantially the same distance from the boundary between the pixel area and the peripheral area, differences in the resistive load of the scan or data lines caused by differing pixel locations can be reduced, so that degradation of the displaying quality can be reduced.

In addition, the circular display substrate according to the present example embodiment includes a plurality of pixels, including a circular pixel formed in a pixel area, and a driving portion configured to drive the pixels, formed in a peripheral area adjacent to the pixel area. The driving portion can include a plurality of unit circuits repeatedly formed along the peripheral area. A layout of each of the unit circuits can extend toward a center of the circular display substrate. A distance between each of the unit circuits and a center of the pixel area can be substantially uniform.

FIG. 4A is a partially enlarged view illustrating a pixel of FIG. 2. FIG. 4B is a cross-sectional view taken along line II-II′ of FIG. 4A. Referring to FIGS. 4A and 4B, the pixel P includes a data line DL, a scan line SL and a switching transistor SWTR. A base substrate 10 can include a transparent insulation substrate. A buffer layer (not shown) can be further formed on the base substrate 10. An active pattern including a source area S, a drain area D and an active area A of the switching transistor SWTR can be formed on the base substrate 10. The first insulation layer 20 can be formed on and cover the active pattern. The scan line SL is formed on the base substrate 10 and can extend in a first direction D1. A second insulation layer 30 is formed on the first insulation layer 20 on which the scan line SL is formed. The data line DL is formed on the second insulation layer 30. The data line DL can be electrically connected to the source area S of the switching transistor SWTR through a contact hole formed through the first and second insulation layers 20 and 30. The third insulation layer 40 can be formed on the second insulation layer 30 on which the data line DL is formed.

Although not shown in the figures, the pixel P can further include a pixel electrode electrically connected to the drain area D of the switching transistor SWTR, an organic light-emitting layer formed on the pixel electrode, and a common electrode formed on the organic light-emitting layer. Alternatively, although not shown in the figures, the pixel P can further include a pixel electrode electrically connected to the drain area D of the switching transistor SWTR, a liquid crystal layer formed on the pixel electrode, and a common electrode formed on the liquid crystal layer.

Referring again to FIGS. 3A and 4B, the scan line SL and the gate electrode G of the switching transistor SWTR in the pixel area, and the first pattern 112, the scan peripheral gate electrode SG of the scan peripheral transistor STR and the data peripheral gate electrode DG of the data peripheral transistor DTR in the peripheral area, can be formed from the same metal layer by patterning the metal layer. In addition, the data line DL in the pixel area and the second pattern 212 in the peripheral area can be formed from the same metal layer by patterning the metal layer.
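Before turning to the further embodiments of FIGS. 5 to 8, the radial layout described above lends itself to a simple numerical illustration. The following is a minimal sketch in Python under assumed values for the pixel-area radius, circuit count, and radial circuit length (none of which are specified by the embodiments): it places scan and data circuits alternately along the arc direction, orients each one perpendicular to the boundary (toward the center), and shows why the gap L1 between adjacent circuits near the pixel area is smaller than the gap L2 at their outer ends.

```python
import math

R = 20.0      # assumed radius of the circular pixel-area boundary (mm)
LENGTH = 3.0  # assumed radial length of each scan/data circuit (mm)
N = 48        # assumed number of circuits placed along the arc direction

circuits = []
for i in range(N):
    theta = 2 * math.pi * i / N               # position along arc direction D3
    kind = "scan" if i % 2 == 0 else "data"   # circuits alternate along the arc
    inner = (R * math.cos(theta), R * math.sin(theta))               # at boundary
    outer = ((R + LENGTH) * math.cos(theta), (R + LENGTH) * math.sin(theta))
    circuits.append((kind, inner, outer))     # each circuit extends radially (D4)

# Center-to-center gap between adjacent circuits at the boundary (L1) versus
# at the outer ends (L2): the chord between the same pair of circuits is
# longer at the larger radius, so L1 < L2, as in the embodiment above.
dtheta = 2 * math.pi / N
L1 = 2 * R * math.sin(dtheta / 2)
L2 = 2 * (R + LENGTH) * math.sin(dtheta / 2)
print(f"L1 = {L1:.3f} mm, L2 = {L2:.3f} mm")  # e.g. L1 ≈ 2.616, L2 ≈ 3.009
```

The same computation makes the uniform-distance property plain: every circuit's inner end lies exactly R from the center, so the unit circuits sit at a substantially uniform distance from the center of the pixel area.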
FIG. 5 is a partially enlarged view illustrating a circular display substrate according to an exemplary embodiment. Referring to FIG. 5, a boundary between a pixel area PA and a peripheral area SA is substantially circular. Thus, a portion of the boundary can have an arc shape. The boundary is formed along an arc direction D3. A plurality of scan circuits 100 and data circuits 200 are formed along the arc direction D3. For example, the scan circuits 100 and the data circuits 200 are alternately formed along the arc direction D3. The scan circuits 100 and the data circuits 200 can be arranged at a substantially uniform distance. Thus, the distance between the scan circuit 100 and another scan circuit 100 which is adjacent to the scan circuit 100 can be substantially uniform. In addition, the distance between the data circuit 200 and another data circuit 200 which is adjacent to the data circuit 200 can be substantially uniform.

The scan circuit 100 extends in a fourth direction D4 which is substantially perpendicular to the arc direction D3. The fourth direction D4 is substantially perpendicular to the arc direction D3, so that the fourth direction D4 can vary according to a position of the scan circuit 100. For example, the scan circuit 100 has a width in a fifth direction D5 which is substantially perpendicular to the fourth direction D4, and extends in the fourth direction D4, so that the scan circuit 100 is substantially rectangular.

The scan connecting line 110 is formed in the peripheral area SA. The scan connecting line 110 electrically connects the scan circuit 100 to the scan line SL in the pixel area PA. The scan connecting line 110 extends in the fourth direction D4. Thus, the scan connecting line 110 extends in a direction which is substantially perpendicular to the boundary between the pixel area PA and the peripheral area SA.

The data circuit 200 extends in the fourth direction D4 which is substantially perpendicular to the arc direction D3. The fourth direction D4 is substantially perpendicular to the arc direction D3, so that the fourth direction D4 can vary according to a position of the data circuit 200. For example, the data circuit 200 has a width in the fifth direction D5 and extends in the fourth direction D4, so that the data circuit 200 is substantially rectangular.

The data connecting line 210 is formed in the peripheral area SA. The data connecting line 210 electrically connects the data circuit 200 to the data line DL in the pixel area PA. The data connecting line 210 extends in the fourth direction D4. Thus, the data connecting line 210 extends in a direction which is substantially perpendicular to the boundary of the pixel area PA and the peripheral area SA. The data line DL and the scan line SL can be bent near the peripheral area SA.

Accordingly, the scan circuits 100 and the data circuits 200 are formed along the boundary between the pixel area PA and the peripheral area SA, and each of the scan circuits 100 and the data circuits 200 extends in a substantially perpendicular direction with respect to the boundary. Thus, the efficiency of the circuit layout in the peripheral area SA can be improved. In addition, the scan connecting line 110 extends in the substantially perpendicular direction with respect to the boundary, so that the scan lines SL in the pixel area PA and the scan circuits 100 can be connected to each other substantially uniformly. Thus, resistive load due to a wiring length difference can be reduced.
In addition, the data connecting line 210 extends in the substantially perpendicular direction with respect to the boundary, so that the data lines DL in the pixel area PA and the data circuits 200 can be connected to each other substantially uniformly. Thus, resistive load due to a wiring length difference can be reduced.

FIG. 6 is a partially enlarged view illustrating a circular display substrate according to an exemplary embodiment. Referring to FIG. 6, a boundary between a pixel area PA and a peripheral area SA is substantially circular. Thus, a portion of the boundary can have an arc shape. The boundary is formed along an arc direction D3. A plurality of scan circuits 100 and data circuits 200 are formed along the arc direction D3. For example, the scan circuits 100 and the data circuits 200 are alternately formed along the arc direction D3.

The scan circuit 100 extends in a fourth direction D4 which is substantially perpendicular to the arc direction D3. The fourth direction D4 is substantially perpendicular to the arc direction D3, so that the fourth direction D4 can vary according to a position of the scan circuit 100. For example, the scan circuit 100 can overall have a width in a fifth direction D5 which is substantially perpendicular to the fourth direction D4, and extend in the fourth direction D4, so that the scan circuit 100 has a rectangular shape.

The scan connecting line 110 is formed in the peripheral area SA. The scan connecting line 110 electrically connects the scan circuit 100 to the scan line SL in the pixel area PA. The scan connecting line 110 extends in the fourth direction D4. Thus, the scan connecting line 110 extends in a direction which is perpendicular to the boundary of the pixel area PA and the peripheral area SA. In addition, the scan line SL in the pixel area PA extends from the scan connecting line 110 in the fourth direction D4 in which the scan connecting line 110 extends. Thus, the scan line SL can be bent near the pixel P.

The data circuit 200 extends in the fourth direction D4 which is substantially perpendicular to the arc direction D3. The fourth direction D4 is substantially perpendicular to the arc direction D3, so that the fourth direction D4 can vary according to a position of the data circuit 200. For example, the data circuit 200 has a width in the fifth direction D5 and extends in the fourth direction D4, so that the data circuit 200 has a rectangular shape.

The data connecting line 210 is formed in the peripheral area SA. The data connecting line 210 electrically connects the data circuit 200 to the data line DL in the pixel area PA. The data connecting line 210 extends in the fourth direction D4. Thus, the data connecting line 210 extends in a direction which is substantially perpendicular with respect to the boundary of the pixel area PA and the peripheral area SA. In addition, the data line DL in the pixel area PA extends from the data connecting line 210 in the fourth direction D4 in which the data connecting line 210 extends. Thus, the data line DL can be bent near the pixel P.

Accordingly, the scan circuits 100 and the data circuits 200 are formed along the boundary between the pixel area PA and the peripheral area SA, and each of the scan circuits 100 and the data circuits 200 extends in a substantially perpendicular direction with respect to the boundary. Thus, the efficiency of the circuit layout in the peripheral area SA can be improved.
In addition, the scan connecting line 110 and the data connecting line 210 can be connected to the pixel P in a shortest path, so that resistive load due to a wiring length difference can be reduced.

FIG. 7 is a partially enlarged view illustrating a circular display substrate according to an exemplary embodiment. Referring to FIG. 7, a boundary of a pixel area PA and a peripheral area SA has a circular shape. Thus, a portion of the boundary can have an arc shape. The boundary is formed along an arc direction D3. A plurality of scan circuits 100 and data circuits 200 are formed along the arc direction D3. For example, the scan circuits 100 and the data circuits 200 are alternately formed along the arc direction D3. The scan circuits 100 and the data circuits 200 can be arranged at a substantially uniform distance. Thus, the distance between the scan circuit 100 and another scan circuit 100 which is adjacent to the scan circuit 100 can be substantially uniform. In addition, the distance between the data circuit 200 and another data circuit 200 which is adjacent to the data circuit 200 can be substantially uniform.

The scan circuit 100 extends in a fourth direction D4 which is substantially perpendicular to the arc direction D3. The fourth direction D4 is substantially perpendicular to the arc direction D3, so that the fourth direction D4 can vary according to a position of the scan circuit 100. For example, the scan circuit 100 has a width in a fifth direction D5 which is substantially perpendicular to the fourth direction D4, and extends in the fourth direction D4, so that the scan circuit 100 is substantially rectangular.

The scan connecting line 110 is formed in the peripheral area SA. The scan connecting line 110 electrically connects the scan circuit 100 to the scan line SL in the pixel area PA. A portion of the scan line SL adjacent to the peripheral area SA and the scan connecting line 110 extend in a straight line from the scan circuit 100 to the pixel P. Thus, the portion of the scan line SL and the scan connecting line 110 extend in a shortest path from the scan circuit 100 to the pixel P which is adjacent to the peripheral area SA.

The data circuit 200 extends in the fourth direction D4 which is substantially perpendicular to the arc direction D3. The fourth direction D4 is substantially perpendicular to the arc direction D3, so that the fourth direction D4 can vary according to a position of the data circuit 200. For example, the data circuit 200 has a width in the fifth direction D5 and extends in the fourth direction D4, so that the data circuit 200 is substantially rectangular.

The data connecting line 210 is formed in the peripheral area SA. The data connecting line 210 electrically connects the data circuit 200 to the data line DL in the pixel area PA. A portion of the data line DL adjacent to the peripheral area SA and the data connecting line 210 extend in a substantially straight line from the data circuit 200 to the pixel P. Thus, the portion of the data line DL and the data connecting line 210 extend in a shortest path from the data circuit 200 to the pixel P which is adjacent to the peripheral area SA. Accordingly, the data line DL and the scan line SL can be bent near the pixel P.

Accordingly, the scan circuits 100 and the data circuits 200 are formed along the boundary between the pixel area PA and the peripheral area SA, and each of the scan circuits 100 and the data circuits 200 extends in a substantially perpendicular direction with respect to the boundary. Thus, the efficiency of the circuit layout in the peripheral area SA can be improved.
In addition, the scan circuit 100 and the data circuit 200 can be connected to the pixel P by the shortest path, so that resistive load due to a wiring length difference can be reduced.

FIG. 8 is a partially enlarged view illustrating a circular display substrate according to an exemplary embodiment.

Referring to FIG. 8, the circular display substrate is substantially the same as the circular display substrates of FIGS. 2 and 5 to 7, except that the scan and data circuits are arranged in two rows in a plan view. A scan circuit 100 is formed spaced apart from a boundary between a pixel area PA and a peripheral area SA by a third distance L3, and a data circuit 200 is formed spaced apart from the boundary by a fourth distance L4. The third distance L3 can be less than the fourth distance L4. Accordingly, efficiency of the circuit layout in the peripheral area SA can be improved, so that the size of the peripheral area SA can be reduced.

FIG. 9 is an exploded perspective view briefly illustrating a circular display device according to an exemplary embodiment.

Referring to FIG. 9, a circular display device includes a lower receiving container 1100, a circular display panel 1000, and an upper receiving container 1200. The lower receiving container 1100 and the upper receiving container 1200 receive the circular display panel 1000. The circular display panel 1000 is received in the lower and upper receiving containers 1100 and 1200, and displays an image. The circular display panel 1000 can include a circular display substrate of FIGS. 1 and 5 to 8. The circular display panel 1000 includes a pixel area PA in which the image is formed, and a peripheral area SA surrounding the pixel area PA. For example, the circular display panel 1000 is an OLED display panel. The upper receiving container 1200 and the lower receiving container 1100 receive the circular display panel 1000. The upper receiving container 1200 covers the peripheral area SA of the circular display panel 1000, so that the peripheral area SA is not seen from the outside.

FIG. 10 is an exploded perspective view briefly illustrating a circular display device according to an exemplary embodiment.

Referring to FIG. 10, a circular display device includes a lower receiving container 1100, a circular display panel 1000, a backlight assembly 1300 and an upper receiving container 1200. The lower receiving container 1100 and the upper receiving container 1200 receive the circular display panel 1000 and the backlight assembly 1300. The circular display panel 1000 is received in the lower and upper receiving containers 1100 and 1200, and displays an image. The circular display panel 1000 can include a circular display substrate of FIGS. 1 and 5 to 8. The circular display panel 1000 includes a pixel area PA in which the image is formed, and a peripheral area SA surrounding the pixel area PA. For example, the circular display panel 1000 is a liquid crystal display panel. The backlight assembly 1300 is formed under the circular display panel 1000, and provides light to the circular display panel 1000. The upper receiving container 1200 and the lower receiving container 1100 receive the circular display panel 1000. The upper receiving container 1200 covers the peripheral area SA of the circular display panel 1000, so that the peripheral area SA is not seen from the outside. Thus, the upper receiving container 1200 overlaps the peripheral area SA.

The described technology can be applied to an OLED display and an electronic device having the OLED display.
For example, the described technology is applied to computer monitors, televisions, laptop computers, digital cameras, cellular phones, smartphones, smart pads, personal digital assistants (PDAs), portable multimedia players (PMPs), MP3 players, navigation systems, video phones, etc. The foregoing is illustrative of example embodiments and is not to be construed as limiting thereof. Although a few example embodiments have been described, those skilled in the art will readily appreciate that many modifications are possible in the example embodiments without materially departing from the novel teachings and advantages of the inventive technology. Accordingly, all such modifications are intended to be included within the scope of the present inventive concept as defined in the claims. Therefore, it is to be understood that the foregoing is illustrative of various example embodiments and is not to be construed as limited to the specific example embodiments disclosed, and that modifications to the disclosed example embodiments, as well as other example embodiments, are intended to be included within the scope of the appended claims. | 39,139 |
11862064 | DESCRIPTION OF SPECIFIC EMBODIMENTS OF THE INVENTION

The embodiments of the present disclosure are described in detail below. Examples of the embodiments are shown in the drawings, in which the same or similar reference numerals indicate the same or similar components or components having the same or similar functions.

The terms “first”, “second”, “third”, etc. (if any) in the description and claims of the present disclosure and the drawings are used to distinguish similar objects, and do not have to be used to describe a specific order or sequence. It should be understood that the objects so described are interchangeable under appropriate circumstances. In the description of the present disclosure, the meaning of “plurality” is two or more, unless otherwise specifically defined. In addition, the terms “including” and “having” and any variations thereof are intended to cover non-exclusive inclusions. Directional terms mentioned in the present disclosure, such as up, down, left, right, front, back, inside, outside, side, etc., are only directions with reference to the drawings.

In the description of the present disclosure, it should be noted that the terms “installation”, “connection” and “coupling” should be understood in a broad sense, unless otherwise clearly specified and defined. For example, a connection can be a fixed connection, a detachable connection, or an integrated connection; it can be a mechanical connection, an electrical connection, or a communicative connection; it can be a direct connection or an indirect connection through an intermediary; it can also be a connection between two elements or an interaction between two elements. Those of ordinary skill in the art can understand the specific meanings of the above terms in the present disclosure according to specific situations.

The present disclosure proposes an array substrate in which a GOA circuit is designed in the display area (AA), which can realize a nearly bezel-free display panel design and improve product competitiveness. Meanwhile, the GOA circuit is modularly designed: according to the connection states of the three electrodes of the thin film transistors (TFTs) in the GOA circuit, the TFT types are divided into modules to form independent layout models, so that the layout of the GOA circuit becomes a sequential combination of the layout models of the corresponding TFTs, which improves design efficiency.

Please refer to FIG. 1, which is a schematic structural diagram of an embodiment of an array substrate of the present disclosure. The array substrate includes a display area 101 and a non-display area 102 surrounding the display area 101. The array substrate has a plurality of scan lines 103 extending in a horizontal direction (the row direction) and a plurality of data lines 104 extending in a vertical direction (the column direction). The scan lines 103 and the data lines 104 intersect as an array. A plurality of pixel areas 11 are defined in the display area 101, and each pixel area 11 is provided with a pixel unit 111. At least one gate driver on array (GOA) circuit is arranged in the pixel areas 11 of a same row. All GOA circuits 12 in the same row are connected to the same scan line 103 for driving the scan line 103 of that row. Each GOA circuit 12 is connected to a driving IC 109 through corresponding driving signal lines 121 to receive a GOA driving signal, wherein the driving signal lines 121 extend in the vertical direction, which is the same as the extending direction of the data lines 104.
The driving IC 109 may be disposed at an outer lead bonding (OLB) area 190 of the display panel where the array substrate is located. In the present embodiment, the GOA driving signal provided by the driving IC 109 is input to the GOA circuit 12 along the vertical direction, and in the same row, two GOA circuits 12 are provided to drive a single row scan line, which improves the GOA driving ability, reduces the signal delay of the scan line, and prevents the signal from being output incorrectly. It should be noted that one or more than two GOA circuits can be set in the same row, and the design can be made according to the GOA driving capability requirements and panel layout space limitations.

In a further embodiment, the GOA circuit 12 is located at a gap between the pixel units 111 in two adjacent rows. That is, the placement of the GOA circuit reduces the occupation of the area for displaying images, and reduces the influence on the aperture ratio of the pixel. In a further embodiment, the GOA circuit 12 includes a plurality of thin film transistors to drive the corresponding scan lines in response to the GOA driving signals.

In a further embodiment, in the pixel areas 11 of two adjacent rows, the GOA circuit 12 in the pixel area 11 of the first row and the GOA circuit 12 in the pixel area 11 of the second row are staggered by at least one pixel unit 111. That is, the GOA circuits in odd and even rows are staggered, so that the positions of the signal lines transmitting opposite-phase driving signals to the GOA circuits of odd and even rows are staggered. Therefore, only one data line and one driving signal line need to be provided between the pixel units 111 of two adjacent rows, without having to set up a data line and two signal lines transmitting opposite driving signals simultaneously, which reduces the number of wires and improves layout utilization efficiency.

In the present embodiment, by setting the GOA circuit in the display area, an ultra-narrow bezel display panel design can be realized. The GOA driving signal is provided through the driving IC, and multiple GOA circuits can be used to drive a single row scan line, which improves the GOA driving capability. By setting the GOA circuit at the gap between pixel units in two adjacent rows, the influence on the aperture ratio of the pixel is reduced. The GOA circuits in odd and even rows are staggered, thus reducing the number of wires and improving layout utilization efficiency.

Please refer to FIG. 2 to FIG. 6 together. FIG. 2 is a connection scheme of the layout models of the present disclosure. FIG. 3 is a schematic diagram of the layout of an embodiment of the array substrate of the present disclosure. FIG. 4 is an enlarged schematic diagram of part A in FIG. 3. FIG. 5 is an equivalent circuit diagram of the GOA circuit in FIG. 3, and FIG. 6 is a driving timing diagram of the GOA circuit shown in FIG. 5.

As shown in FIG. 2, the connection state of each of the three electrodes (the gate electrode G, the source electrode S, and the drain electrode D of the thin film transistor) can be one of three types: an input terminal, an intermediate node, and an output terminal. The intermediate node is a node where the thin film transistor is connected to other thin film transistors in the same GOA circuit.
Except for the connection states where the three electrodes are all connected to the input terminal or all connected to the output terminal simultaneously, a corresponding independent layout model can be established for each of the remaining connection states, such as: a layout model in which the gate G and the source S both serve as input terminals and the drain D serves as an intermediate node; a layout model in which the gate G serves as an input terminal and the source S and the drain D both serve as intermediate nodes; a layout model in which the gate G serves as an intermediate node, the source S serves as an input terminal, and the drain D serves as an output terminal; and a layout model in which the gate G and the source S both serve as input terminals and the drain D serves as an output terminal.

By separately designing an independent layout model for each connection state, the layout of the GOA circuit is designed as a sequential combination of the layout models of the corresponding thin film transistors. The layout models can cover all the structures in general circuits, so that the layout design of the GOA circuit can be completed through the sequential combination of the layout models, and the design efficiency is improved.
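As an illustration of this modular scheme, the following sketch (Python; the model naming is our own and purely illustrative) enumerates the connection states of the three electrodes and excludes the two states ruled out above, yielding one independent layout model per remaining combination:

```
# Enumerate (G, S, D) connection states; exclude all-input and all-output;
# map each remaining state to an independent layout model.
from itertools import product

STATES = ("input", "intermediate", "output")

def layout_models():
    models = {}
    for idx, (g, s, d) in enumerate(product(STATES, repeat=3)):
        if (g == s == d == "input") or (g == s == d == "output"):
            continue  # the two excluded connection states
        models[(g, s, d)] = f"model_{idx}"
    return models

models = layout_models()
print(len(models))  # 25 usable layout models out of 3**3 = 27 combinations
# e.g., gate and source as inputs, drain as intermediate node:
print(models[("input", "input", "intermediate")])
```

A GOA layout then reduces to looking up the model of each thin film transistor and placing the models in sequence, which is the design-efficiency gain described above.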
In a further embodiment, an electrode used as an input terminal is connected to the corresponding driving signal line, an electrode used as an output terminal is connected to the corresponding scan line, and an electrode used as an intermediate node is connected to a corresponding GOA internal wire, wherein the GOA internal wire extends along the horizontal direction.

As shown in FIG. 3, in the present embodiment, the GOA circuit 12 includes four thin film transistors (NT1-NT4) and a capacitor C1, and the four thin film transistors are sequentially arranged in the horizontal direction. The driving signal lines 121 include first clock signal lines XCK, an initialization signal line STV, a first level signal line VGH, a second clock signal line CK and a second level signal line VGL.

Specifically, the layout model of the first thin film transistor NT1 (indicated by a dashed frame) is that a gate of the first thin film transistor NT1 serves as an input terminal and is connected to one of the first clock signal lines XCK, a first electrode of the first thin film transistor NT1 serves as an input terminal and is connected to the initialization signal line STV, and a second electrode of the first thin film transistor NT1 serves as an intermediate node and is connected to a first GOA internal wire 31. An enlarged schematic diagram is shown in FIG. 4.

Specifically, the layout model of the second thin film transistor NT2 is that a gate of the second thin film transistor NT2 serves as an input terminal and is connected to the first level signal line VGH, a first electrode of the second thin film transistor NT2 serves as an intermediate node and is connected to the first GOA internal wire 31, and a second electrode of the second thin film transistor NT2 serves as an intermediate node and is connected to a second GOA internal wire 32.

Specifically, the layout model of the third thin film transistor NT3 is that a gate of the third thin film transistor NT3 serves as an intermediate node and is connected to the second GOA internal wire 32, a first electrode of the third thin film transistor NT3 serves as an input terminal and is connected to the second clock signal line CK, and a second electrode of the third thin film transistor NT3 serves as an output terminal and is connected to the scan line 103 corresponding to the row where the GOA circuit is located.

Specifically, the layout model of the fourth thin film transistor NT4 is that a gate of the fourth thin film transistor NT4 serves as an input terminal and is connected to another first clock signal line XCK, a first electrode of the fourth thin film transistor NT4 serves as an input terminal and is connected to the second level signal line VGL, and a second electrode of the fourth thin film transistor NT4 serves as an output terminal and is connected to the scan line 103 corresponding to the row where the GOA circuit is located.

A first plate C1-1 of the capacitor C1 is connected to the second GOA internal wire 32 (that is, connected between the second thin film transistor NT2 and the third thin film transistor NT3), and a second plate C1-2 is connected to the scan line 103 corresponding to the row where the GOA circuit is located (that is, connected between the third thin film transistor NT3 and the fourth thin film transistor NT4). A phase of a second clock signal provided by the second clock signal line CK is opposite to a phase of a first clock signal provided by the first clock signal lines XCK, and a first level signal provided by the first level signal line VGH is greater than a second level signal provided by the second level signal line VGL.

In a further embodiment, the GOA circuit further includes a voltage stabilizing capacitor C2; a first plate of the voltage stabilizing capacitor C2 is connected to the first GOA internal wire 31, and a second plate is connected to a fixed voltage signal line (not shown in the figure). The fixed voltage signal line is, for example, a common voltage signal line COM, and the common voltage signal line COM provides a stable common voltage. By adding the voltage stabilizing capacitor at the junction of the internal nodes, the node stability of the GOA circuit is improved.

In a further embodiment, the first GOA internal wire 31 and the second GOA internal wire 32 are formed by patterning a same GOA internal wire. For example, multiple GOA internal wires are formed by etching one GOA internal wire. By applying the above-mentioned sequential-combination layout model settings to each pixel area, and only adjusting the transmission method of the clock signal, the GOA circuit design can be quickly completed, and the design efficiency is improved.

In a further embodiment, the GOA circuits in odd and even rows are staggered by at least one pixel unit. For example, as shown in the figures, the first thin film transistor NT1 in the GOA circuit in the pixel area of the first row and the first thin film transistor NT1 in the GOA circuit in the pixel area of the second row are staggered by at least one pixel unit in the horizontal direction. Since the pixel areas are driven line by line, the phases of the clock signals received by the pixel areas in two adjacent rows are opposite.

For example, at a given time, the first thin film transistor NT1 in the GOA circuit in the pixel area of the first row receives the second clock signal CK, while the first thin film transistor NT1 in the GOA circuit in the pixel area of the second row receives the first clock signal XCK. Because the GOA circuits in odd and even rows are staggered by at least one pixel unit, the positions of the signal lines transmitting opposite-phase driving signals to the GOA circuits of odd and even rows are staggered, so that only one data line and one driving signal line need to be provided between the pixel units 111 of two adjacent rows, without setting up a data line and two signal lines with opposite driving signals simultaneously; thus the number of wires can be reduced and layout utilization efficiency can be improved.
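The staggered arrangement can be illustrated with a small placement sketch (Python; the column indices are hypothetical). Because odd and even rows host their GOA circuits in different column gaps, the signal lines carrying opposite-phase clocks never share a gap:

```
# Sketch: stagger GOA circuits of odd/even rows by one pixel unit so that
# each column gap needs only one data line plus one driving signal line.
def goa_columns(n_rows, base_col=0, stagger=1):
    """Pixel column whose gap hosts each row's GOA circuit."""
    return [base_col + (row % 2) * stagger for row in range(n_rows)]

print(goa_columns(n_rows=6))  # [0, 1, 0, 1, 0, 1]: odd/even rows never collide
```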
The equivalent circuit diagram of the above GOA circuit is shown in FIG. 5. The working principle of the GOA circuit of the present disclosure will be described below with reference to FIGS. 5-6. The working sequence of the GOA circuit shown in FIG. 5 is mainly divided into the following three stages:

Stage t1: the first clock signal XCK and the initialization signal STV are high level signals (High), and the second clock signal CK is a low level signal (Low). At this time, the first thin film transistor NT1, the second thin film transistor NT2 and the fourth thin film transistor NT4 are turned on. The first thin film transistor NT1 and the second thin film transistor NT2 being turned on causes a first intermediate node N1 and a second intermediate node N2 to receive high level signals, and thus the third thin film transistor NT3 is turned on. The third thin film transistor NT3 and the fourth thin film transistor NT4 are turned on, and thus an output terminal Gn outputs a low level signal.

Stage t2: the second clock signal CK changes to a high level signal, and the first clock signal XCK and the initialization signal STV change to low level signals. At this time, the second thin film transistor NT2 remains on, the first thin film transistor NT1 and the fourth thin film transistor NT4 are turned off, and the first intermediate node N1 remains at a high level. The third thin film transistor NT3 remains turned on, and the output terminal Gn outputs a high level signal. Simultaneously, the voltage of the second intermediate node N2 rises due to the coupling of the capacitor C1.

Stage t3: the first clock signal XCK changes to a high level signal, the second clock signal CK changes to a low level signal, and the initialization signal STV remains a low level signal. At this time, the second thin film transistor NT2 remains turned on, and the first thin film transistor NT1 and the fourth thin film transistor NT4 are turned on, so that the first intermediate node N1 and the second intermediate node N2 receive low level signals, the third thin film transistor NT3 is turned off, and the output terminal Gn outputs a low level signal.
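The three stages can be checked against a simplified behavioral model. The sketch below (Python) treats each thin film transistor as an ideal switch and the high/low levels as booleans, and approximates the bootstrap coupling of the capacitor C1 by simply holding N2 at whatever NT2 passes through; it reproduces the low-high-low sequence of the output terminal Gn described for stages t1 to t3:

```
# Simplified behavioral model of the 4-TFT GOA stage (ideal switches).
def goa_stage(xck, ck, stv, n1_prev, n2_prev):
    VGH, VGL = True, False
    nt1_on = xck                      # NT1 gated by the first clock XCK
    nt2_on = VGH                      # NT2 gate tied to VGH: always on
    n1 = stv if nt1_on else n1_prev   # N1 follows STV while NT1 conducts
    n2 = n1 if nt2_on else n2_prev    # NT2 passes N1 to N2
    nt3_on, nt4_on = n2, xck
    if nt4_on:
        gn = VGL                      # NT4 pulls the output to VGL
    elif nt3_on:
        gn = ck                       # NT3 passes the second clock CK to Gn
    else:
        gn = False
    return n1, n2, gn

n1 = n2 = False
for stage, xck, ck, stv in [("t1", True, False, True),
                            ("t2", False, True, False),
                            ("t3", True, False, False)]:
    n1, n2, gn = goa_stage(xck, ck, stv, n1, n2)
    print(stage, "N1:", n1, "N2:", n2, "Gn:", gn)  # Gn: False, True, False
```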
Based on the same inventive concept, the present disclosure also provides a display panel. Please refer to FIG. 7, which is a schematic diagram of a display panel architecture of the present disclosure. The display panel 70 includes an array substrate 71, and the array substrate 71 is the array substrate described in the present disclosure. For a display panel using the array substrate of the present disclosure, the GOA circuit is designed in the display area, which realizes a nearly bezel-less design and improves product competitiveness. Meanwhile, the GOA circuit is modularly designed: according to the connection states of the three electrodes of the thin film transistors in the GOA circuit, the TFT types are divided into modules to form independent layout models. The layout of the GOA circuit becomes a sequential combination of the layout models of the corresponding TFTs, which improves the design efficiency.

It can be understood that, for those of ordinary skill in the art, equivalent replacements or changes can be made according to the technical solutions and inventive concepts of the present disclosure, and all such changes or replacements should fall within the protection scope of the claims appended to the present disclosure. | 17,414 |
11862065 | DESCRIPTION OF EMBODIMENTS

The term “couple (or connect)” throughout the specification (including the claims) of this application is used broadly and encompasses direct and indirect connection or coupling means. For instance, if the disclosure describes a first apparatus being coupled (or connected) to a second apparatus, then it should be interpreted that the first apparatus can be directly connected to the second apparatus, or the first apparatus can be indirectly connected to the second apparatus through other devices or by a certain coupling means. In addition, terms such as “first” and “second” mentioned throughout the specification (including the claims) of this application are only for naming the elements or distinguishing different embodiments or scopes, and are not intended to limit the upper or lower bound of the number of the elements nor to limit the sequence of the elements. Moreover, elements/components/steps with the same reference numerals represent the same or similar parts in the drawings and embodiments. Elements/components/notations with the same reference numerals in different embodiments may be referenced to the related description.

Please refer to FIG. 1, which illustrates a block diagram of a timing control device, and also refer to FIG. 2, which illustrates waveform plots of a timing control device operated at different display refresh rates, according to an embodiment of the present disclosure. The timing control device 110 includes a control circuit 111. The control circuit 111 is coupled to a display panel 120. The control circuit 111 is configured to generate a plurality of gate scanning control signals CLK1˜CLKN and a data transmission control signal TP, and transports the gate scanning control signals CLK1˜CLKN and the data transmission control signal TP to the display panel 120 for driving the display panel 120.

In detail, the display panel 120 receives the gate scanning control signals CLK1˜CLKN and generates a plurality of gate driving signals, such as gate driving signals G1 to G1920 if the display panel 120 has 1920 gate lines and 1920 display lines (i.e., pixel rows, also called horizontal lines), wherein the gate driving signals G1917 to G1920 are depicted in FIG. 2 as an example. The gate scanning control signals CLK1˜CLKN are periodical signals having the same period and different phases, and a gate on array circuit in the display panel 120 may generate the gate driving signals G1 to G1920, respectively output to the gate lines of the display panel 120, according to the gate scanning control signals CLK1˜CLK4. In every frame period, each gate driving signal has only one enable period (or pulse), and the time position of the enable period of each gate driving signal is the same as one of a plurality of pulses of the gate scanning control signals CLK1˜CLK4. The timing control device 110 may be a timing controller integrated circuit (IC), or the timing control device 110 and a data driving circuit may be implemented as a single-chip display driver IC.

The data transmission control signal TP is a periodical signal and has a plurality of pulses. Each pulse of the data transmission control signal TP is used to indicate a write time at which display data of a corresponding display line of the display panel 120 is to be written into that display line.
In this embodiment, a data charging time of one display line of the plurality of display lines may be determined by a time difference between a trailing edge (a falling edge in FIG. 2) of a pulse of the data transmission control signal TP and a trailing edge of the pulse (i.e., enable period) of a gate driving signal, wherein the pulse of the gate driving signal is aligned with one of the plurality of pulses of the gate scanning control signals CLK1˜CLK4. By adjusting the data charging time, the timing control device 110 can operate the display panel 120 under a variable refresh rate while maintaining the consistency of the displayed brightness.

It should be noted that when the display refresh rate of the display panel 120 is changed from a higher first frequency to a lower second frequency, the vertical blanking time is increased and the displayed brightness is reduced. In response, the timing control device 110 may adjust the gate scanning control signals CLK1˜CLKN to generate a plurality of adjusted gate scanning control signals, or adjust the data transmission control signal TP to generate an adjusted data transmission control signal, to increase the data charging time of the display panel 120. In this way, the data charging time is increased correspondingly, and the displayed brightness of the display panel 120 can be maintained.

Please refer to FIG. 1 and FIG. 2 together. A data enable signal DE can be transported to the timing control device 110 from a front-end circuit. The data enable signal DE is used to define a display time for one display line of the display panel 120. When the data enable signal DE is kept at a low voltage level, the display panel 120 is in a vertical blanking time period. When it is detected that the display panel 120 is operated at a first display refresh rate DFR1, the control circuit 111 can generate a plurality of gate scanning control signals CLK1˜CLK4, a data transmission control signal TP and transmission data TX_DATA (which represent display data for one driving channel). The data transmission control signal TP is used to define time points for accessing the transmission data TX_DATA. In this embodiment, the transmission data TX_DATA is transported to the display panel 120 at the trailing edges of the pulses of the data transmission control signal TP.

Here, the plurality of pulses of the data transmission control signal TP correspond to a plurality of transmission data (denoted as TX_DATA) output from a driving channel. For example, the right-most data relates to the data of the 1920th display line, and a pulse of the data transmission control signal TP indicates the write time of the 1920th display line; the second-right data relates to the data of the 1919th display line, and another pulse of the data transmission control signal TP indicates the write time of the 1919th display line, and so on.

In FIG. 2, a data charging time CT1 with respect to the data of the 1917th display line can be determined from a trailing edge of a pulse PS1, which is the pulse of the data transmission control signal TP indicating the write time of the 1917th display line, to a trailing edge of the pulse (i.e., enable period) of the gate driving signal G1917, which is generated based on the gate scanning control signal CLK1 such that the pulse (i.e., enable period) of the gate driving signal G1917 is the same as the corresponding pulse of the gate scanning control signal CLK1.
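The bookkeeping above amounts to a simple subtraction of edge times. The following hedged sketch (Python; the microsecond timestamps are illustrative, not taken from FIG. 2) computes a data charging time from the two trailing edges, and shows that delaying the gate trailing edge, as the adjusted gate scanning control signals described above do, lengthens the charging time:

```
# Data charging time = gate-pulse trailing edge - TP-pulse trailing edge.
def data_charging_time(tp_trailing_edge_us, gate_trailing_edge_us):
    return gate_trailing_edge_us - tp_trailing_edge_us

ct1 = data_charging_time(tp_trailing_edge_us=100.0, gate_trailing_edge_us=112.0)
# Delaying the gate scanning signal's phase pushes its trailing edge later,
# lengthening the charging time without changing duty cycle or frequency.
ct2 = data_charging_time(tp_trailing_edge_us=100.0, gate_trailing_edge_us=118.0)
print(ct1, ct2)  # 12.0 -> 18.0 microseconds
```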
When it is detected that the display panel 120 has changed to be operated at a second display refresh rate DFR2, where a second frequency of the second display refresh rate DFR2 is lower than a first frequency of the first display refresh rate DFR1, the control circuit 111 can delay the phases of the gate scanning control signals CLK1˜CLK4 to generate the plurality of adjusted gate scanning control signals CLK1′˜CLK4′, so that the data charging time of the display panel 120 is increased. Taking the gate scanning control signal CLK1 as an example, the control circuit 111 can shift a leading edge EG1 and a trailing edge EG2 of the gate scanning control signal CLK1 to respectively obtain an adjusted leading edge EG1′ and an adjusted trailing edge EG2′ of the adjusted gate scanning control signal CLK1′. With the adjusted trailing edge EG2′ of the adjusted gate scanning control signal CLK1′, an adjusted data charging time CT2, which is determined by the time difference between the trailing edge of the pulse PS1 of the data transmission control signal TP indicating the write time of the 1917th display line and the adjusted trailing edge of the pulse (i.e., enable period) of the gate driving signal G1917, which is aligned with the adjusted trailing edge EG2′ of the adjusted gate scanning control signal CLK1′, can be increased. In the present disclosure, the duty cycle and the frequency of the gate scanning control signal CLK1 may be the same as those of the adjusted gate scanning control signal CLK1′; only the phase of each gate scanning control signal is changed.

In the presented embodiment, if the display panel 120 changes to be operated at the first display refresh rate DFR1 from the second display refresh rate DFR2, i.e., the display refresh rate increases, the control circuit 111 can shift the phases of the gate scanning control signals CLK1′˜CLK4′ to be earlier to generate the adjusted gate scanning control signals CLK1˜CLK4, to reduce the time difference between the trailing edge of the pulse (such as PS1, corresponding to the 1917th display line) of the data transmission control signal TP indicating the write time of the display line and the adjusted trailing edge (such as EG2′) of the adjusted gate scanning control signal (such as CLK1′, which is aligned with the adjusted trailing edge of the gate driving signal G1917). In this way, the displayed brightness of the display panel 120 can be balanced between the first display refresh rate DFR1 and the second display refresh rate DFR2, whether going from a higher display refresh rate to a lower one, or from a lower display refresh rate to a higher one.

It should be noted that a variation of the display refresh rate of the display panel 120 can be detected by the timing control device 110. When the display refresh rate is varied from the higher first display refresh rate DFR1 to the lower second display refresh rate DFR2, the timing control device 110 sets the control circuit 111 to generate the adjusted gate scanning control signals CLK1′˜CLK4′ by delaying the phases of the gate scanning control signals CLK1˜CLK4.

Please refer to FIG. 1 and FIG. 3 together, wherein FIG. 3 illustrates waveform plots of a timing control device operated at different display refresh rates according to another embodiment of the present disclosure. In FIG. 3, a data enable signal DE can be transported to the timing control device 110 from a front-end circuit. The data enable signal DE is used to define a display time for one display line of the display panel 120.
When the data enable signal DE is kept at a low voltage level, the display panel 120 is in a vertical blanking time period. When it is detected that the display panel 120 is operated at a first display refresh rate DFR1, the control circuit 111 can generate a plurality of gate scanning control signals CLK1˜CLK4, a data transmission control signal TP and transmission data TX_DATA. The gate scanning control signals CLK1˜CLK4 are sequentially enabled. The display panel 120 can generate a plurality of gate driving signals according to the gate scanning control signals CLK1˜CLK4. The data transmission control signal TP is used to define time points for accessing the transmission data TX_DATA. In this embodiment, the transmission data TX_DATA is transported to the display panel 120 at the trailing edges of the pulses of the data transmission control signal TP. Taking the gate scanning control signal CLK1 as an example, a data charging time CT1 of the 1917th display line can be determined by a time difference between a trailing edge of the pulse PS1 of the data transmission control signal TP with respect to the 1917th display line and a trailing edge of the pulse (i.e., enable period) of the gate driving signal G1917, which is aligned with the trailing edge of the gate scanning control signal CLK1.

When it is detected that the display panel 120 is operated at a second display refresh rate DFR2, where a second frequency of the second display refresh rate DFR2 is lower than a first frequency of the first display refresh rate DFR1, the control circuit 111 can reduce a duty cycle of the data transmission control signal TP to generate an adjusted data transmission control signal TP′, such that the data charging time is increased. In the presented embodiment, the control circuit 111 narrows the width of each pulse of the data transmission control signal TP, in other words, reduces the duty cycle, to generate the adjusted data transmission control signal TP′. Taking the gate scanning control signal CLK1 as an example, an adjusted data charging time CT3 of one display line can be determined by a time difference between a trailing edge of the narrowed pulse PS1′ of the adjusted data transmission control signal TP′ and the trailing edge of the pulse (i.e., enable period) of the gate driving signal G1917, which is aligned with the trailing edge of the gate scanning control signal CLK1.

Of course, if the display panel 120 changes to be operated at the first display refresh rate DFR1 from the second display refresh rate DFR2, the control circuit 111 can increase the duty cycle of the adjusted data transmission control signal TP′ to restore the data transmission control signal TP. In this way, the displayed brightness of the display panel 120 can be maintained in a variable refresh rate (VRR) application.
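The pulse-narrowing adjustment can be sketched in the same way (Python; the numbers are illustrative). The TP frequency is unchanged and only the pulse width shrinks, so each trailing edge lands earlier and the interval to the fixed gate trailing edge grows:

```
# Narrowing the TP pulse (lower duty, same frequency) moves its trailing
# edge earlier, which lengthens the data charging time.
def tp_trailing_edge(pulse_start_us, period_us, duty):
    return pulse_start_us + duty * period_us  # pulse width = duty * period

gate_edge_us = 130.0  # trailing edge of the aligned gate pulse (held fixed)
for duty in (0.5, 0.3):
    edge = tp_trailing_edge(pulse_start_us=100.0, period_us=40.0, duty=duty)
    print(f"duty {duty:.0%}: TP trailing edge {edge:.1f} us, "
          f"charging time {gate_edge_us - edge:.1f} us")
# duty 50%: edge 120.0 us, CT 10.0 us; duty 30%: edge 112.0 us, CT 18.0 us
```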
Please refer to FIG. 4, which illustrates a schematic plot of a displayed brightness compensation scheme according to an embodiment of the present disclosure. A curve 410 is the displayed brightness when a display panel is operated at a first display refresh rate with a higher first frequency. A curve 420 is the displayed brightness when the display panel is operated at a second display refresh rate with a lower second frequency. In the present embodiment, when the display refresh rate of the display panel is changed between the first display refresh rate and the second display refresh rate, the timing control device can dynamically adjust the data charging time between a data charging time CT1 and a data charging time CT2.

In detail, when the display refresh rate of the display panel is the higher first frequency, the display panel may have the shorter data charging time CT1, and when the display refresh rate of the display panel is the lower second frequency, the display panel may have the longer data charging time CT2. In this way, an average displayed brightness LAVG1 of the display panel at the first display refresh rate can be the same as an average displayed brightness LAVG2 of the display panel at the second display refresh rate, and the display performance of the display panel can be improved.

Please refer to FIG. 5, which illustrates a block diagram of a timing control device according to another embodiment of the present disclosure. The timing control device 510 is coupled to a gate on array (GOA) 521 and a source driver (S-IC) 522. The GOA 521 and the S-IC 522 are used to respectively provide gate driving signals and source driving signals to drive a display panel. The timing control device 510 includes a control circuit 511. The timing control device 510 receives a data enable signal DE and generates a frame start signal STV, a plurality of gate scanning control signals CLKX and a data transmission control signal TP. The timing control device 510 transports the frame start signal STV and the gate scanning control signals CLKX to the GOA 521, and transports the data transmission control signal TP to the S-IC 522. The GOA 521 can generate the gate driving signals according to the frame start signal STV and the gate scanning control signals CLKX. The S-IC 522 can generate the source driving signals according to the data transmission control signal TP and the transmission data. In the present embodiment, the GOA 521 can be implemented by any gate on array circuit well known by a person skilled in the art, and the S-IC 522 can also be implemented by any source driving circuit well known by a person skilled in the art; there are no further special limitations here.

In this embodiment, the data enable signal DE can be provided by a front-end circuit, such as a television chip. The control circuit 511 can detect the display refresh rate according to the data enable signal DE. In detail, the data enable signal DE is kept at a low voltage level for a certain time period, and that time period is the vertical blanking time period. The control circuit 511 can detect the display refresh rate by identifying the vertical blanking time period according to the data enable signal DE. If the length of the vertical blanking time period gets longer, the control circuit 511 can determine that the display refresh rate has decreased, and if the length of the vertical blanking time period gets shorter, the control circuit 511 can determine that the display refresh rate has increased.

Furthermore, the control circuit 511 can generate the frame start signal STV according to the data enable signal DE. The control circuit 511 can also obtain the display refresh rate according to the frame start signal STV: the frame start signal STV provides a plurality of vertical start pulses, and the control circuit 511 can obtain the display refresh rate by calculating the interval between two neighboring start pulses. The control circuit 511 can then adjust the phases of the gate scanning control signals CLK1-CLKx or adjust the duty cycle of the data transmission control signal TP according to the display refresh rate detected from the frame start signal STV.
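A minimal sketch of this detection (Python; the timestamps are hypothetical) takes the reciprocal of the spacing between neighboring vertical start pulses; a longer vertical blanking time shows up as a wider pulse spacing, i.e., a lower detected refresh rate:

```
# Detect the display refresh rate from the spacing of STV start pulses.
def refresh_rates_hz(stv_pulse_times_s):
    intervals = [b - a for a, b in zip(stv_pulse_times_s, stv_pulse_times_s[1:])]
    return [1.0 / dt for dt in intervals]

print(refresh_rates_hz([0.0, 1 / 60, 2 / 60]))           # ~[60.0, 60.0] Hz
print(refresh_rates_hz([0.0, 1 / 60, 1 / 60 + 1 / 48]))  # rate drops to ~48 Hz
```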
If the display refresh rate is varied from a higher first frequency to a lower second frequency, the control circuit 511 can delay the phases of the gate scanning control signals CLK1-CLKx, or reduce the duty cycle of the data transmission control signal TP. On the contrary, if the display refresh rate is varied from the lower second frequency to the higher first frequency, the control circuit 511 can shift the phases of the gate scanning control signals CLK1-CLKx to be earlier, or increase the duty cycle of the data transmission control signal TP. The hardware structure of the control circuit 511 can be implemented as a digital circuit, or the control circuit 511 can be implemented by any processor or controller chip having operation capability and well known by a person skilled in the art.

Please refer to FIG. 6, which illustrates a block diagram of a timing control device according to another embodiment of the present disclosure. The timing control device 610 is coupled to a gate on array (GOA) 621 and a source driver (S-IC) 622. The GOA 621 and the S-IC 622 are used to respectively provide gate driving signals and source driving signals to drive a display panel. The timing control device 610 includes a control circuit 611 and a frame rate detection circuit 612. The control circuit 611 is coupled to the frame rate detection circuit 612. The frame rate detection circuit 612 may receive a vertical synchronization signal Vsync, and determine the display refresh rate according to the vertical synchronization signal Vsync. The frame rate detection circuit 612 may generate a digital value DRV according to the detected display refresh rate, and transport the digital value DRV to the control circuit 611. The control circuit 611 receives the digital value DRV and obtains the display refresh rate by decoding the digital value DRV. The control circuit 611 can adjust the phases of the gate scanning control signals CLK1-CLKx or adjust the duty cycle of the data transmission control signal TP according to the detected display refresh rate. If the display refresh rate is varied from a higher first frequency to a lower second frequency, the control circuit 611 can delay the phases of the gate scanning control signals CLK1-CLKx, or reduce the duty cycle of the data transmission control signal TP. On the contrary, if the display refresh rate is varied from the lower second frequency to the higher first frequency, the control circuit 611 can shift the phases of the gate scanning control signals CLK1-CLKx to be earlier, or increase the duty cycle of the data transmission control signal TP.

Please refer to FIG. 7, which illustrates a block diagram of a timing control device according to another embodiment of the present disclosure. The timing control device 710 is coupled to a gate driver (G-IC) 721 and a source driver (S-IC) 722. Different from the timing control device 510 in FIG. 5, the timing control device 710 is coupled to the G-IC 721, which is not disposed on the display panel; both the G-IC 721 and the S-IC 722 may be off-panel chips. In this embodiment, the timing control device 710 includes a control circuit 711. The control circuit 711 receives a data enable signal DE and generates a frame start signal STV, a plurality of gate scanning control signals CLK1-CLKx, an output enable signal OE, a shading signal KB and a data transmission control signal TP according to the data enable signal DE.
The control circuit 711 transports the frame start signal STV, the gate scanning control signals CLK1-CLKx, the output enable signal OE and the shading signal KB to the G-IC 721, and transports the data transmission control signal TP to the S-IC 722. The control circuit 711 can obtain the display refresh rate according to the data enable signal DE. The control circuit 711 can further adjust the phases of the gate scanning control signals CLK1-CLKx according to the detected display refresh rate. In the present embodiment, the G-IC 721 can be implemented on an integrated circuit, and the hardware structure of the G-IC 721 can be implemented by any gate driving circuit well known by a person skilled in the art. The S-IC 722 can also be implemented by any source driving circuit well known by a person skilled in the art. There are no special limitations here.

FIG. 8 illustrates waveform plots of a timing control device operated at different display refresh rates according to another embodiment of the present disclosure. Referring to FIG. 7 and FIG. 8 together, in this embodiment, for adjusting the phases of the gate scanning control signals CLK1-CLKx, the control circuit 711 can adjust the gate scanning control signals CLK1-CLKx by adjusting the duty cycle of the output enable signal OE and adjusting the phase of the shading control signal KB. In detail, when the display refresh rate is getting lower (the display refresh rate is varied from a first display refresh rate DFR1 to a second display refresh rate DFR2), the control circuit 711 may delay the trailing edge of each of the gate scanning control signals CLK1-CLKx by increasing the duty cycle of the output enable signal OE and delaying the phase of the shading control signal KB, so that the data charging time for a display line is increased. On the contrary, when the display refresh rate is getting higher (the display refresh rate is varied from the second display refresh rate DFR2 to the first display refresh rate DFR1), the control circuit 711 may shift the trailing edge of each of the gate scanning control signals CLK1-CLKx to be earlier by reducing the duty cycle of the output enable signal OE and shifting the phase of the shading control signal KB to be earlier, so that the data charging time for a display line is reduced.

Alternatively, in another embodiment, in response to the variation of the display refresh rate, the control circuit 711 can adjust the duty cycle of the data transmission control signal TP according to the detected display refresh rate. Please refer to FIG. 9, which illustrates waveform plots of a timing control device operated at different display refresh rates according to another embodiment of the present disclosure. When the display refresh rate is getting lower (the display refresh rate is varied from a first display refresh rate DFR1 to a second display refresh rate DFR2), the control circuit 711 can reduce the duty cycle of the data transmission control signal TP without changing the frequency of the data transmission control signal TP, to increase the data charging time for a display line. In other words, the control circuit 711 narrows the width of each pulse of the data transmission control signal TP to increase the data charging time for a display line.
On the contrary, when the display refresh rate is getting higher (the display refresh rate is varied from the second display refresh rate DFR2 to the first display refresh rate DFR1), the control circuit 711 can increase the duty cycle of the data transmission control signal TP without changing the frequency of the data transmission control signal TP, to reduce the data charging time for a display line. In other words, the control circuit 711 widens the width of each pulse of the data transmission control signal TP to reduce the data charging time for a display line. In this way, by adjusting the plurality of gate scanning control signals or adjusting the data transmission control signal, the control circuit 711 can adjust the data charging time for a display line according to the display refresh rate, and the displayed brightness of the display panel can be maintained.

Please refer to FIG. 10, which illustrates a block diagram of a timing control device according to another embodiment of the present disclosure. The timing control device 1010 is coupled to a gate driver (G-IC) 1021 and a source driver (S-IC) 1022. The timing control device 1010 is coupled to the G-IC 1021, which is not disposed on the display panel; both the G-IC 1021 and the S-IC 1022 may be off-panel chips. The timing control device 1010 includes a control circuit 1011 and a frame rate detection circuit 1012. The control circuit 1011 is coupled to the frame rate detection circuit 1012. The frame rate detection circuit 1012 receives a vertical synchronization signal Vsync, and determines the display refresh rate according to the vertical synchronization signal Vsync. The frame rate detection circuit 1012 can generate a digital value DRV according to the detected display refresh rate, and transport the digital value DRV to the control circuit 1011. The control circuit 1011 can decode the digital value DRV to obtain the display refresh rate, and generates a frame start signal STV, a plurality of gate scanning control signals CLK1-CLKx, an output enable signal OE, a shading signal KB and a data transmission control signal TP according to the display refresh rate. In this embodiment, the control circuit 1011 can further adjust the phases of the gate scanning control signals CLK1-CLKx according to the detected display refresh rate, or adjust the data transmission control signal TP, for driving a display panel under the currently detected display refresh rate.

Please refer to FIG. 11, which illustrates a flow chart of a control method for a display device according to an embodiment of the present disclosure. In a step S1110, a plurality of gate scanning control signals and a data transmission control signal are generated. In a step S1120, in response to the display refresh rate changing from a first frequency to a second frequency, the plurality of gate scanning control signals are adjusted to generate a plurality of adjusted gate scanning control signals, or the data transmission control signal is adjusted to generate an adjusted data transmission control signal, for driving a display panel with the second frequency as the display refresh rate. Details of these steps have been described in the above embodiments and are not repeated here.

In summary, in the presented embodiments, in response to a variable display refresh rate, the timing control device can adjust the gate scanning control signals or adjust the data transmission control signal to adjust the data charging time for each display line of the display panel.
In this way, the displayed brightness of the display panel can be maintained in a variable refresh rate application, and the performance of the display panel can be improved.

It will be apparent to those skilled in the art that various modifications and variations can be made to the structure of the disclosed embodiments without departing from the scope or spirit of the disclosure. In view of the foregoing, it is intended that the disclosure cover modifications and variations of this disclosure provided they fall within the scope of the following claims and their equivalents. | 27,960 |
11862066 | DETAILED DESCRIPTION

FIGS. 1-6 illustrate techniques for instructing a display control module to capture content and display captured content in response to the refresh rate of a display exceeding a frame generation rate of a graphics processing unit (GPU), while reducing accesses by the GPU to memory while captured content is being replayed at the display. Display refresh rates often exceed the rate at which a GPU generates frames, sometimes by a factor of two or more. Rather than re-transmit the same frame multiple times, the GPU instructs the display control module to replay a previously-transmitted frame. The GPU detects the rate of frame generation based on, for example, the frame rate of a fixed-rate video stream or the complexity of the frames being generated for a variable frame rate gaming application. In response to determining that a frame should be replayed (for example, by detecting that the display refresh rate exceeds the rate of frame generation by at least a threshold amount), the GPU instructs the display control module to capture and then replay captured content rather than retransmitting a frame for display a second (or further) time. During a refresh cycle in which the display control module is replaying captured content, the GPU omits accessing memory to retrieve (and resend) the frame that is being replayed, and instead sends only dummy content (e.g., invalid data) and GPU timing information so that the display control module remains synchronized with the GPU. The GPU thus saves memory bandwidth and power by reducing the number of accesses to memory while captured content is being replayed at the display.

FIG. 1 illustrates a processing system 100 configured to instruct a display control module 160 for a display device 170 to capture and replay a frame when a display refresh rate exceeds a rate at which a graphics processing unit generates frames, in accordance with some embodiments. The processing system 100 executes sets of instructions (e.g., computer programs) to carry out specified tasks for an electronic device. Examples of such tasks include controlling aspects of the operation of the electronic device, displaying information to a user to provide a specified user experience, communicating with other electronic devices, and the like. Accordingly, in different embodiments the processing system 100 is employed in one of a number of types of electronic devices, such as a desktop computer, laptop computer, server, game console, tablet, smartphone, and the like.

To support execution of the sets of instructions, the processing system 100 includes a plurality of processor cores (not shown in FIG. 1). In some embodiments, each processor core includes one or more instruction pipelines to fetch instructions, decode the instructions into corresponding operations, dispatch the operations to one or more execution units, execute the operations, and retire the operations. In the course of executing instructions, the processor cores generate graphics operations and other operations associated with the visual display of information. Based on these operations, the processor cores provide commands and data to a graphics processing unit (GPU) 110, illustrated in FIG. 1.

The GPU 110 receives the commands and data associated with graphics and other display operations from the plurality of processor cores. Based on the received commands, the GPU 110 executes operations to generate frames (e.g., frame 140) for display. Examples of such operations include vector operations, drawing operations, and the like.
The rate at which the GPU 110 is able to generate frames based on these operations is referred to as the frame generation rate, or simply the frame rate, of the GPU 110. The frame generation rate is illustrated in FIG. 1 as frame rate 105. It will be appreciated that the frame rate 105 varies over time, based in part on the complexity of the operations executed by the GPU to generate a set of frames. For example, sets of frames requiring a relatively high number of operations (as a result of drawing a relatively large number of moving objects, for example) are likely to cause a lower frame rate, while sets of frames requiring a relatively low number of operations are likely to allow for a higher frame rate. Further, for some applications the frame rate 105 is fixed, and for other applications the frame rate 105 is variable. As a user switches from one application to another, the frame rate 105 can switch from fixed to variable and vice versa.

The graphics processing unit 110 is coupled to a memory 130. The GPU 110 executes instructions and stores information in the memory 130, such as the results of the executed instructions. For example, the memory 130 stores a plurality of previously-generated images (not shown) that it receives from the GPU 110. In some embodiments, the memory 130 is implemented as a dynamic random access memory (DRAM), and in some embodiments, the memory 130 is implemented using other types of memory including static random access memory (SRAM), non-volatile RAM, and the like. Some embodiments of the processing system 100 include an input/output (I/O) engine (not shown) for handling input or output operations associated with the display 170, as well as other elements of the processing system 100 such as keyboards, mice, printers, external disks, and the like.

To display frames, the processing system 100 includes a display control module 160 and a display 170. The display 170 is a display device that visually displays images based on the frames generated by the GPU 110. Accordingly, in different embodiments the display 170 is a liquid crystal display (LCD) device, an organic light-emitting diode (OLED) device, and the like. As will be appreciated by one skilled in the art, the display 170 periodically renders (or “draws”) the most recent frame generated by the GPU 110, thereby displaying the frame. In some embodiments, the display 170 has a fixed refresh rate 155. Each frame render is associated with a portion of time, referred to as a blanking interval, during which the display 170 does not render image data. In some embodiments, the display 170 has a blanking interval of programmable length. Accordingly, as described further herein, in some embodiments the display 170 has a variable refresh rate 155 that is adjustable by programming different lengths for the blanking interval.

The display control module 160 controls the rendering of frames at the display 170 and is implemented as hard-coded logic on one or more integrated circuit (IC) chips, as programmable logic, as configurable logic (e.g., fuse-configurable logic), as one or more processors executing a program of instructions, or a combination thereof. In some embodiments the display control module 160 performs operations including buffering of frames generated by the GPU 110, adjustment of the refresh rate 155 of the display 170 by programming different blanking interval lengths, and the like.
It will be appreciated that although the display control module 160 is illustrated as a separate module from the GPU 110 for ease of illustration, in some embodiments the display control module 160 is incorporated in the GPU 110. In other embodiments, one or more operations of the display control module 160 are performed at the display 170.

To conserve memory bandwidth and reduce accesses to the memory 130 by the GPU 110, the GPU 110 includes replay logic 120, which compares the refresh rate 155 of the display 170 to the frame rate 105 of the GPU 110, determines based on the relative rates whether the display control module 160 is to display live content (i.e., a current frame) at the display 170, capture live content at a buffer 165, or display (replay) captured content, and transmits corresponding instructions to the display control module 160. The replay logic 120 is implemented as hard-coded logic on one or more integrated circuit (IC) chips, as programmable logic, as configurable logic (e.g., fuse-configurable logic), as one or more processors executing a program of instructions, or a combination thereof.

To illustrate, in operation the replay logic 120 detects whether a replay mode is supported at the display 170. In response to detecting that replay mode is supported at the display 170, the replay logic 120 signals the display control module 160 to enable replay mode. Once replay mode has been enabled, the replay logic 120 determines, for a current frame 140, whether the refresh rate 155 of the display 170 exceeds the frame rate 105 of the GPU 110 by more than a threshold amount. In some embodiments, the threshold amount is double the frame rate 105. Thus, if the frame rate 105 is half or less than half of the display refresh rate 155, the threshold amount is met. In other embodiments, the threshold amount is slightly more than the frame rate 105, but not necessarily double. For example, for a fixed refresh rate display having a refresh rate 155 slightly higher than the frame rate 105, some number of frames will be repeated, in which case the GPU 110 signals the display control module 160 to replay a frame 140.

If the refresh rate 155 of the display 170 does not exceed the frame rate 105 of the GPU 110 by more than the threshold amount, the replay logic 120 determines that the display control module 160 is to display the current frame 140 at the display 170 (i.e., the display 170 is to display live content). The replay logic 120 transmits the frame 140 and replay information 150 indicating that the display control module 160 is to display the current frame 140 at the display 170. Because in this example the replay logic 120 has determined that the display control module 160 is to display the current frame 140 at the display without capturing the current frame 140 or re-displaying a previously-captured frame, the replay information 150 indicates only that the display control module 160 is to display the current frame 140 at the display 170 for the current display refresh cycle. At the next display refresh cycle, the GPU 110 will transmit a next frame and replay information to the display control module 160.

If the refresh rate 155 of the display 170 exceeds the frame rate 105 by more than the threshold amount (e.g., the refresh rate 155 is at least double the frame rate 105), the replay logic 120 determines that the display control module 160 is to capture the current frame 140 for subsequent replay at the display 170. Thus, the replay logic 120 transmits the current frame 140 and replay information 150 indicating that the display control module 160 is to display the current frame 140 at the display 170 and capture the current frame 140 at the buffer 165. In response, the display control module 160 displays the current frame 140 at the display 170 and copies the current frame 140 to the buffer 165.
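The per-frame decision reduces to a comparison against the threshold. The following sketch (Python; the function and return labels are our own shorthand, and the factor of two reflects the embodiment in which the threshold amount is double the frame rate) returns what the replay logic would indicate to the display control module for the current frame:

```
# Per-frame replay decision: capture when the refresh rate exceeds the
# frame rate by at least the threshold factor, otherwise display live only.
def replay_decision(refresh_rate_hz, frame_rate_fps, threshold_factor=2.0):
    if refresh_rate_hz >= threshold_factor * frame_rate_fps:
        return "display_and_capture"   # show live frame and copy it to the buffer
    return "display_live"              # just show the live frame

print(replay_decision(refresh_rate_hz=48, frame_rate_fps=24))  # display_and_capture
print(replay_decision(refresh_rate_hz=60, frame_rate_fps=60))  # display_live
```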
Thus, the replay logic120transmits the current frame140and replay information150indicating that the display control module160is to display the current frame140at the display170and capture the current frame140at the buffer165. In response, the display control module160displays the current frame140at the display170and copies the current frame140to the buffer165. For the subsequent refresh cycle of the display170, the GPU110omits accessing the current frame140from the memory130and instead transmits dummy content (not shown) to the display control module160with replay information150indicating that the display control module160is to use the frame rate timing of the GPU110and replay the previously captured current frame140at the display170. The replay logic120repeats the transmission of dummy content and replay information150indicating that the display control module160is to replay the previously captured current frame140as many times as the refresh rate155exceeds the frame rate105, or until a new frame has been generated by the GPU110. Thus, for example, if the frame rate105is 24 frames per second (fps) and the refresh rate of the display170is 48 Hz, there are two refresh cycles of the display170for each frame that is generated by the GPU110. If both rates are fixed, during a first display refresh cycle, the replay logic120transmits a current frame N140and replay information150indicating that the display control module160is to display the current frame N140at the display170and capture the current frame N140at the buffer165. During a second display refresh cycle, the replay logic120transmits dummy content and replay information150indicating that the display control module160is to replay the previously captured frame N140. The display control module160discards the dummy content and accesses the previously captured frame N140from the buffer165for display at the display170. During a third display refresh cycle, the GPU110generates a current frame N+1140, and the replay logic120transmits the current frame N+1140and replay information150indicating that the display control module160is to display the current frame N+1140at the display170and capture the current frame N+1140at the buffer165. During a fourth display refresh cycle, the replay logic120transmits dummy content and replay information150indicating that the display control module160is to replay the previously captured frame N+1140. The display control module160discards the dummy content and accesses the previously captured frame N+1140from the buffer165for display at the display170. Accordingly, during the second and fourth display refresh cycles, the GPU110omits accessing the N and N+1 frames from the memory130and retransmitting them to the display control module160while the N and N+1 frames are being replayed at the display170. In some embodiments, such as during a PowerPoint® presentation, a single frame is displayed unchanged over an extended period of time. The replay logic120detects that the content of the frame is unchanging and signals the display control module160to capture and continually replay the static frame. In this scenario, the replay logic120dynamically determines on a frame-by-frame basis whether to signal the display control module160to replay the captured frame. The replay logic120determines whether to signal the display control module160to replay the captured frame independently of the GPU frame rate105, determining instead to continue to replay captured content until the frame content changes.
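The per-cycle decision described above can be summarised in code. The sketch below is illustrative only: it assumes the example threshold of double the frame rate and a separate static-content detection, and the enumerator and function names are invented for the purpose of the illustration:

```c
typedef enum {
    DISPLAY_LIVE,      /* transmit the current frame with live content information         */
    CAPTURE_AND_LIVE,  /* transmit the current frame; display it and copy it to the buffer */
    REPLAY_CAPTURED    /* transmit dummy content; replay the captured frame                */
} replay_decision;

/* One decision per display refresh cycle. */
static replay_decision decide_cycle(double refresh_hz, double frame_hz,
                                    int content_is_static, int frame_captured)
{
    /* Static content (e.g., an unchanging presentation slide) is replayed
     * until its content changes, independently of the frame rate. */
    if (content_is_static)
        return frame_captured ? REPLAY_CAPTURED : CAPTURE_AND_LIVE;

    /* Example threshold: the refresh rate is at least double the frame rate. */
    if (refresh_hz >= 2.0 * frame_hz)
        return frame_captured ? REPLAY_CAPTURED : CAPTURE_AND_LIVE;

    return DISPLAY_LIVE; /* rates are close: display live content only */
}
```

In the 24 fps/48 Hz example above, this yields the alternating pattern described: each generated frame is captured on its first refresh cycle and replayed on the second.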
If the replay logic120detects static frame content and signals the display control module160to capture the frame, but on the subsequent frame determines that the content has changed, the replay logic120reverts to transmitting the current frame140and replay information150indicating that the display control module160is to display the current frame140at the display170. Thus, the replay logic120dynamically determines to play live content, and the captured frame is not used in this case. In some embodiments, the refresh rate155of the display170is more than double the frame rate105of the GPU110. In such cases, the replay logic120determines to instruct the display control module160to display the captured content for more than two refresh cycles of the display170. In other embodiments in which the display has a variable refresh rate, even if the refresh rate155of the display170could be synchronized with the frame rate105of the GPU110, the replay logic120may determine that the user experience would be enhanced if the display refresh rate is set at a higher rate, to reduce flicker. In such cases, the replay logic120instructs the display control module160to capture live content and then display the captured live content for at least two higher-rate refresh cycles of the display170. The term "live content", as used herein, refers to frames generated by the GPU that have not been stored by the display control module160for re-display. In some embodiments, the display170has a variable refresh rate with a range of refresh frequencies. For example, in some embodiments, the display170has a refresh rate that can be dynamically changed within a range of 40 Hz to 120 Hz. If a gaming application executing at the GPU110has a frame rate of 30 frames per second, the replay logic120determines a number of frame replays and a display refresh rate for the display170that will optimize a user experience. For example, if the replay logic120determines, as a first option, to refresh the display at 90 Hz, the replay logic120signals the display control module160to capture a frame during a first refresh cycle and replay the frame twice. Alternatively, as a second option, the replay logic120could determine to refresh the display at 60 Hz and to replay the frame once, or, as a third option, the replay logic120could determine to refresh the display at 120 Hz and to replay the frame three times. Determining a display refresh rate and number of frame replays can impact whether side effects like stutter or tearing are observable, particularly for variable frame rate content such as gaming applications. In this example, the second option (60 Hz, one replay) has a lower refresh rate that saves power. However, the first option (90 Hz, two replays) is in the middle of the refresh rate range of 40 Hz to 120 Hz of the display170, and provides less opportunity for stuttering or tearing to occur if there are frame rate changes due to frame-to-frame variations in rendering complexity. Thus, the first option may provide an improved user experience for variable rate content. FIG.2is a diagram illustrating an example of the replay logic120of the GPU110of the processing system100ofFIG.1instructing the display control module160to capture and replay content in accordance with some embodiments. During a first refresh cycle1202, the replay logic120detects that the refresh rate155of the display170does not exceed the frame rate105of the GPU110by more than a threshold amount, and therefore determines that the display170is to display live content.
Accordingly, the replay logic120transmits the active (current) frame N210and a live content indicator215to the display control module160, indicating that the display control module160is to display the active frame N210at the display170. During a second refresh cycle2204, the replay logic120detects that the refresh rate155of the display170exceeds the frame rate105of the GPU110by more than a threshold amount (for example, the replay logic120detects that the refresh rate155of the display170is more than double the frame rate105of the GPU110), and therefore determines that the display170is to display live content while the display control module160captures the live content and stores the live content at the buffer165. The replay logic120therefore transmits active frame N+1220and capture content indicator225to the display control module160. In response to receiving the capture content indicator225, the display control module160copies the active frame N+1220at the buffer165and displays the active frame N+1220at the display170. During a third refresh cycle3206, the replay logic120confirms that the refresh rate155of the display170still exceeds the frame rate105of the GPU110by more than the threshold. Because the replay logic120has already transmitted the active frame N+1220to the display control module160and instructed the display control module160to capture the active frame N+1220, the GPU110does not need to re-transmit the active frame N+1220to the display control module160or re-access the active frame N+1220from memory130. Instead, the replay logic120transmits dummy content230and a replay content indicator235to the display control module160. In response to receiving the dummy content230and replay content indicator235, the display control module160discards the dummy content230, accesses the active frame N+1220from the buffer165, and displays the active frame N+1220at the display170. During a fourth refresh cycle4208, the replay logic120detects that the refresh rate155of the display170does not exceed the frame rate105of the GPU110by more than the threshold. The replay logic120therefore determines that the display170is to display live content. Accordingly, the replay logic120transmits the active (current) frame N+2240and the live content indicator215to the display control module160, indicating that the display control module160is to display the active frame N+2240at the display170. FIG.3is a block diagram of an example of the graphics processing unit110of the processing system100ofFIG.1instructing the display control module160to display live content in accordance with some embodiments. In the illustrated example, the replay logic (not shown) of the GPU110has determined that the refresh rate of the display170does not exceed the frame rate of the GPU110by more than a threshold amount. The GPU110therefore transmits the active frame N310and replay information in the form of a live content indicator312to the display control module160, signaling that the display control module160is to display the active frame N310at the display170without storing the active frame N310at the buffer165. In response to receiving the active frame N310and the live content indicator312, the display control module160displays the active frame N310at the display170without capturing the active frame N310at the buffer165. 
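For the receiving side, a minimal sketch of how the display control module160might act on the three kinds of replay information is given below; the indicator values, frame size and function names are assumptions made for illustration, not the embodiments' own interface:

```c
#include <stdio.h>
#include <string.h>

enum indicator { LIVE_CONTENT, CAPTURE_CONTENT, REPLAY_CONTENT };

#define FRAME_BYTES 64  /* stand-in frame size for illustration */

static unsigned char buffer165[FRAME_BYTES];  /* the captured-frame buffer */

/* Stand-in for handing a frame to the panel for scan-out. */
static void display_scanout(const unsigned char *frame)
{
    printf("scan out frame starting with byte 0x%02X\n", frame[0]);
}

/* Invoked once per display refresh cycle with the received payload
 * and the replay information that accompanies it. */
static void on_refresh_cycle(enum indicator ind, const unsigned char *payload)
{
    switch (ind) {
    case LIVE_CONTENT:    /* display live content, do not capture it */
        display_scanout(payload);
        break;
    case CAPTURE_CONTENT: /* display live content and copy it to the buffer */
        memcpy(buffer165, payload, FRAME_BYTES);
        display_scanout(payload);
        break;
    case REPLAY_CONTENT:  /* discard the dummy content, replay from the buffer */
        (void)payload;
        display_scanout(buffer165);
        break;
    }
}
```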
FIG.4is a diagram of an example of the graphics processing unit110of the processing system100ofFIG.1instructing the display control module160to capture content and display live content in accordance with some embodiments. In the illustrated example, the replay logic (not shown) of the GPU110has determined that the refresh rate of the display170exceeds the frame rate of the GPU110by more than a threshold amount. The GPU110therefore transmits the active frame N+1410and a capture live content indicator412to the display control module160, signaling that the display control module160is to display the active frame N+1410at the display170and also copy the active frame N+1410at the buffer165. In response to receiving the active frame N+1410and the capture live content indicator412, the display control module160displays the active frame N+1410at the display170and copies the active frame N+1 to the buffer165. FIG.5is a diagram of an example of the graphics processing unit110of the processing system100ofFIG.1instructing the display control module160to display captured content in accordance with some embodiments. In the illustrated example, the replay logic (not shown) of the GPU110has previously determined that the refresh rate of the display170exceeds the frame rate of the GPU110by more than a threshold amount and has previously instructed the display control module160to capture the previously-transmitted active frame N+1410, as shown inFIG.4. For the current display refresh cycle, the GPU110transmits dummy content510and a replay content indicator512to the display control module160, instructing the display control module160to access the active frame N+1410from the buffer165and display the active frame N+1410at the display170. In response to receiving the dummy content510and the replay content indicator512, the display control module160discards the dummy content510, accesses the active frame N+1410from the buffer, and displays the active frame N+1410at the display170while maintaining synchronicity with the timing of the GPU110. FIG.6is a flow diagram of a method600of a graphics processing unit instructing a display control module to capture content and display captured content in response to a display refresh rate exceeding a frame generation rate in accordance with some embodiments. The method600is implemented in some embodiments of the processing system100shown inFIG.1. At block602, the replay logic120of the GPU110compares the rate105at which the GPU110generates frames to the refresh rate155of the display170. At block604, the replay logic120determines whether the display refresh rate155exceeds the frame rate105by more than a threshold amount. If, at block604, the replay logic120determines that the refresh rate155does not exceed the frame rate105by more than the threshold amount, the method flow continues to block606. At block606, the replay logic120transmits the active frame N140and a live content indicator215to the display control module160. In response to receiving the active frame N140and the live content indicator215, the display control module160displays the active frame N140at the display170. The method flow then continues back to block602. If, at block604, the replay logic120determines that the refresh rate155exceeds the frame rate105by more than the threshold amount, the method flow continues to block608. At block608, the replay logic120transmits the active frame N140and a capture content indicator225to the display control module160. 
In response to receiving the active frame N140and the capture content indicator225, the display control module160displays the active frame N140at the display170and copies the active frame N140at the buffer165. At block610, the replay logic120omits accessing the active frame N140from the memory130, and instead transmits dummy content230and a replay content indicator235to the display control module160. In response to receiving the dummy content230and replay content indicator235, the display control module160discards the dummy content230, accesses the active frame N140from the buffer165, and displays the active frame N140at the display170. A computer readable storage medium may include any non-transitory storage medium, or combination of non-transitory storage media, accessible by a computer system during use to provide instructions and/or data to the computer system. Such storage media can include, but is not limited to, optical media (e.g., compact disc (CD), digital versatile disc (DVD), Blu-Ray disc), magnetic media (e.g., floppy disc, magnetic tape, or magnetic hard drive), volatile memory (e.g., random access memory (RAM) or cache), non-volatile memory (e.g., read-only memory (ROM) or Flash memory), or microelectromechanical systems (MEMS)-based storage media. The computer readable storage medium may be embedded in the computing system (e.g., system RAM or ROM), fixedly attached to the computing system (e.g., a magnetic hard drive), removably attached to the computing system (e.g., an optical disc or Universal Serial Bus (USB)-based Flash memory), or coupled to the computer system via a wired or wireless network (e.g., network accessible storage (NAS)). In some embodiments, certain aspects of the techniques described above may be implemented by one or more processors of a processing system executing software. The software includes one or more sets of executable instructions stored or otherwise tangibly embodied on a non-transitory computer readable storage medium. The software can include the instructions and certain data that, when executed by the one or more processors, manipulate the one or more processors to perform one or more aspects of the techniques described above. The non-transitory computer readable storage medium can include, for example, a magnetic or optical disk storage device, solid state storage devices such as Flash memory, a cache, random access memory (RAM) or other non-volatile memory device or devices, and the like. The executable instructions stored on the non-transitory computer readable storage medium may be in source code, assembly language code, object code, or other instruction format that is interpreted or otherwise executable by one or more processors. Note that not all of the activities or elements described above in the general description are required, that a portion of a specific activity or device may not be required, and that one or more further activities may be performed, or elements included, in addition to those described. Still further, the order in which activities are listed is not necessarily the order in which they are performed. Also, the concepts have been described with reference to specific embodiments. However, one of ordinary skill in the art appreciates that various modifications and changes can be made without departing from the scope of the present disclosure as set forth in the claims below.
Accordingly, the specification and figures are to be regarded in an illustrative rather than a restrictive sense, and all such modifications are intended to be included within the scope of the present disclosure. Benefits, other advantages, and solutions to problems have been described above with regard to specific embodiments. However, the benefits, advantages, solutions to problems, and any feature(s) that may cause any benefit, advantage, or solution to occur or become more pronounced are not to be construed as a critical, required, or essential feature of any or all the claims. Moreover, the particular embodiments disclosed above are illustrative only, as the disclosed subject matter may be modified and practiced in different but equivalent manners apparent to those skilled in the art having the benefit of the teachings herein. No limitations are intended to the details of construction or design herein shown, other than as described in the claims below. It is therefore evident that the particular embodiments disclosed above may be altered or modified and all such variations are considered within the scope of the disclosed subject matter. Accordingly, the protection sought herein is as set forth in the claims below. | 28,830 |
11862067 | DESCRIPTION OF EMBODIMENTS Referring now to the drawings,FIG.1schematically represents aspects of an example processing circuitry100, for example embodying a so-called Internet of Things (IoT) device. Such devices are generally aimed at providing localised low-power low-cost processing capability relating to a particular task, often with a wireless Internet connection being provided. In the example arrangement ofFIG.1, so-called energy "harvesting" is implemented by an energy harvester110. This term refers to providing electrical energy for the operation of the circuitry100from one or more locally available sources, examples including solar, thermal, induction and/or mechanical energy sources. Therefore, the energy harvester110may comprise, for example, a solar electrical generator or converter, a thermal electrical generator, for example responsive to a temperature gradient across the energy harvester, induction circuitry to receive electrical energy from a complementary apparatus placed nearby and/or a mechanical or vibrational electrical energy generator responsive to movement of the energy harvester110. These are all just examples of energy harvesting techniques and any one or more of these or other similar techniques may be used. A common feature of energy harvesting arrangements is that the generated power can be relatively low and intermittent, which in turn implies that when that harvested energy is stored, for example by energy storage120(for example, capacitive and/or rechargeable battery storage), and used to power the circuitry100, it is important that the power consumption of the circuitry, or at least the energy requirement to complete a particular task, is also low. Therefore, in examples, the circuitry ofFIG.1comprises energy harvesting apparatus to generate electrical energy to power at least some operations of the display apparatus (being any one or more parts ofFIG.1including the display160) in response to a current configuration or motion of the circuitry. The energy harvesting apparatus may comprise (i) solar generation apparatus; (ii) apparatus to generate electrical energy in response to physical motion of the energy harvesting apparatus; (iii) apparatus to generate electrical energy in response to a temperature of the energy harvesting apparatus; and/or (iv) induction apparatus to generate electrical energy in response to presence of the energy harvesting apparatus within a given electrical and/or magnetic field. Of course, IoT devices do not have to use energy harvesting and in some examples they can be powered by dry cell sources or rechargeable cell sources. Once again though, in order to avoid having to replace those dry cell sources (or recharge the rechargeable cell sources) too often, low-power consumption is also an advantage in this situation. It is also noted that the present techniques (to be described below) apply not only to IoT devices of the type described above, but to any processing circuitry incorporating a display device, whether or not the processing circuitry performs communication (by the Internet or otherwise) and whether or not the processing circuitry incorporates one or more sensors. The present techniques are particularly applicable where power consumption is an issue, for example in an energy harvesting situation or in the context of a small capacity dry cell or rechargeable power supply.
However, the present techniques are not limited to any specific application of this type and it is noted that the environmental benefit of reducing power consumption of electronic circuitry is a constant aim in many fields of electronic technology. A typical IoT device performs some sort of processing operation, and in the example ofFIG.1processing circuitry130is provided for this purpose. Similarly, an IoT device generally performs communication via the Internet and to this end communication ("Comms") circuitry140is provided to allow a wireless link145with an access point or a base station (not shown). Depending on the functionality required of the circuitry100, one or more sensor(s)150may provide input to the processing circuitry130and a display160may allow for the display of information generated by the processing circuitry130. It is noted that the harvesting of energy by the energy harvester110can be intermittent and also may or may not coincide with occasions at which processing is required by the processing circuitry130. Therefore, harvested energy can be stored by the energy storage120and used by the remainder of the circuitry under the control of the processing circuitry130, for example at predetermined intervals, in response to an interrupt or other indication provided by ongoing quietened operation of the sensor(s)150, in response to a detection that the energy storage120currently holds enough energy to complete a particular set of operational tasks or the like. FIG.2is a schematic flowchart illustrating one example of this type of operation, in which at a step200, energy is harvested by the energy harvester110. At a step210a sensor detection is made by the sensor(s)150and in response to that detection, one or both of a communications message is sent (at a step220) by the communications circuitry140and a display image or state displayed by the display160is updated at a step230. If, at a step240, sufficient energy remains in the energy storage to repeat these operations, control may return to the step210(either straightaway or after a defined period, for example). If not then control passes to a step250at which the circuitry enters a quiescent mode and the harvesting of energy at the step200continues. The display160, in this example, makes use of a set of display elements which are each activated by controlling the storage of electrical charge by that display element. Accordingly, an individual display element may be considered as a capacitor such as that shown as a capacitor300inFIG.3a. Control circuitry to be described below can connect the capacitor300to a current source so that a charging operation may be described by the schematic current flow profile ofFIG.3b, in which time is represented on a horizontal axis and current (I) on a vertical axis. In this schematic representation, current rises310to a peak320before then decaying330in response to increasing charge storage by the capacitor300. The display elements may represent pixels or, in other examples such as that shown schematically inFIG.4, segments410of so-called seven-segment displays400. Here, in a previously proposed arrangement provided to illustrate some of the underlying techniques relating to examples of the present disclosure, four such seven-segment displays are illustrated schematically and are controlled by driver circuitry420responsive to data430indicative of alphanumeric characters to be displayed by the seven-segment displays400.
The driver circuitry provides a set of seven control signals440each applicable to a respective one of the segments and (in this example) a set of four control signals450each applicable to a respective one of the seven-segment displays400. The control signals440and450control respective switches connecting a segment to a current source (not shown inFIG.4) so that between the control signals440and the control signals450, any individual segment of any individual seven-segment display can be addressed and connected to the current source. The activation of a display element or segment in this example is such that the display element retains the stored charge at least for a period which is significantly longer than the time taken to charge that display element by the example profile ofFIG.3b. So, it is possible for the driver circuitry420to address each of the display elements which requires activation in turn so as to generate an overall set of activated display elements for a user to view. Note that the "activation" of a display element can involve the display element changing brightness, colour or both so as to enable a user to perceive a difference between activated display elements and not-activated display elements. In order to deactivate an already-activated display element, that display element can be shorted to ground, again under the control of the driver circuitry420and the control signals440,450, so as to discharge the stored charge held by that display element. Therefore in these examples, the driver circuitry420operates as driver circuitry to control display of a prevailing display image by display elements410of a display device (in this example, the set of seven-segment displays400or indeed any individual one of the seven-segment displays), the driver circuitry generating a signal providing electrical charge for storage by display elements, in which an electrical charge stored by a display element controls a display output of that display element. These basic aspects of operation are illustrated by a schematic circuit diagram ofFIG.5incorporating the driver circuitry420and a power source (represented schematically by V+ and V− supply terminals). The control signals440control a set of switches510to connect segment lines520either to the power source or to ground530. Each of the segment lines520relates to an individual segment410in each of the seven-segment displays400; for example, "Seg 0" may relate to the uppermost segment405of each of the seven-segment displays, with the display element represented by a capacitor symbol525connected between that segment line and a "common" line540which is common to the entire seven-segment display400. So, for example, the common line540"COM 0" might relate to the left-most seven-segment display460; the common line "COM 1" might relate to the second-leftmost seven-segment display470and so on. Under the control of the signals450the common lines can individually be grounded, connected to the power supply or tri-stated (effectively, isolated or connected to neither) by other respective switches510. Therefore, using this arrangement, an appropriate combination of the control signals440,450can ground any individual display element or connect any individual display element to the power source. In this way, as discussed above, any individual display element can be activated (by being caused to store a charge) or deactivated (by being grounded so as to discharge any stored charge).
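A sketch of this addressing scheme in code, assuming one logical state per segment line and per common line; the handling of the unaddressed segment lines is simplified here (they are simply left as they are), and all names are illustrative rather than taken from the embodiments:

```c
enum rail { RAIL_GROUND, RAIL_SUPPLY, RAIL_TRISTATE };

#define NUM_SEGMENTS 7  /* Seg 0..Seg 6, shared across all digits      */
#define NUM_COMMONS  4  /* COM 0..COM 3, one per seven-segment display */

static enum rail seg_line[NUM_SEGMENTS];
static enum rail com_line[NUM_COMMONS];

/* Activate one display element: drive its segment line from the power
 * source and ground its common line, so that the element stores charge.
 * All other common lines are tri-stated so their elements are undisturbed. */
static void charge_element(int seg, int com)
{
    for (int c = 0; c < NUM_COMMONS; c++)
        com_line[c] = (c == com) ? RAIL_GROUND : RAIL_TRISTATE;
    seg_line[seg] = RAIL_SUPPLY;
}

/* Deactivate one display element: ground both of its terminals so that
 * the stored charge is discharged. */
static void discharge_element(int seg, int com)
{
    for (int c = 0; c < NUM_COMMONS; c++)
        com_line[c] = (c == com) ? RAIL_GROUND : RAIL_TRISTATE;
    seg_line[seg] = RAIL_GROUND;
}
```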
FIGS.6and7schematically represent an example of a technique applicable to embodiments of the disclosure, in which stored charge is shared as between one or more display elements to be deactivated and one or more display elements to be newly activated. As background, in an IoT or similar device of the type discussed above, the amount of electrical energy available to perform a given processing and display task may be heavily limited, for example by the limitations of an energy harvesting arrangement in use. Example embodiments recognise this and also recognise that simply discharging a display element to ground is potentially wasteful of the charge that was used in order to initially activate that display element. Referring toFIG.6, a seven-segment display is illustrated in an initial state600(displaying a representation of the numeral2) and a subsequent state610(displaying a representation of the numeral3). In order to transition between the initial state600and the subsequent state610, many of the individual segments of the seven-segment display remain unchanged, but it is noted that one segment620(labelled as "LHS") has to be deactivated and another segment630(labelled as "RHS") has to be activated. The deactivation of the LHS segment620is performed inFIG.6by simply grounding the segment. At the lower portion ofFIG.6is shown a schematic charging curve illustrating the current flow in charging the RHS segment630from a completely grounded initial state. Here a peak current Imaxis required to perform the charging operation. Turning toFIG.7, the same initial600and final610states are shown, but in between them an intermediate state700is illustrated in which the one or more display elements to be deactivated (in this case the element620) are connected not initially to ground but to the one or more display elements to be activated (in this case the element630). This provides for a so-called charge sharing between the one or more display elements to be deactivated and the one or more display elements to be activated. In principle at least, the amount of charge stored by the display element620will reduce to 50% and a pre-charge of the display element630will provide 50% of the required charge to activate the display element630. These ratios refer to the specific example ofFIGS.6and7in which the number of display elements to be deactivated is equal to the number of display elements to be activated. If these numbers are different, the amount of pre-charging of the display element(s) to be activated may be different to 50%. These ratios also refer to a notionally perfect or lossless discharging and pre-charging operation and such perfect efficiency may not be obtained in an empirical example, but nevertheless at least some pre-charging of the one or more display elements to be activated can be achieved by using charge previously stored by the one or more display elements to be deactivated. The intermediate state700is maintained for a predetermined or minimum time period to allow charge flow between the respective display elements and then the charge sharing display elements are isolated from one another so as to leave the one or more elements to be activated holding an amount of pre-charge. The one or more display elements to be deactivated may then be disconnected from the charge sharing arrangement and grounded, and the power supply connected to the one or more display elements to be newly activated so as to complete the charging and activation of those one or more display elements.
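For completeness, the 50% figure follows from conservation of charge in the lossless idealisation. If an element of capacitance $C_1$ charged to voltage $V_1$ is connected to an initially discharged element of capacitance $C_2$, the shared final voltage is

$$V_f = \frac{C_1 V_1}{C_1 + C_2}$$

so for two nominally identical elements ($C_1 = C_2$) each ends at $V_1/2$, i.e. half of the stored charge is transferred; in practice switch resistance and leakage will make the transferred fraction somewhat lower, as noted above.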
Significantly, however, with reference to the schematic graph drawn as a lower portion ofFIG.7, the pre-charging of the RHS element630has the result that less charge has to be drawn from the power supply in order to complete the charging of the RHS element630. In the schematic graph this is illustrated by a peak charging current Iredbeing somewhat lower than the peak charging current Imaxrequired in the comparative example ofFIG.6. Note that the energy cost of controlling the relevant switches to earth the display element(s) to be deactivated is the same as betweenFIG.6andFIG.7. Similarly, the energy cost of controlling the relevant switches to connect the power supply to the display element(s) to be activated is also the same as betweenFIG.6andFIG.7. There is an additional switching operation inFIG.7to connect and then disconnect the display elements which are to undergo charge sharing, but in comparison to the energy saving through the charge sharing operation, this energy cost in performing the switching is considered negligible. Therefore, using these techniques, the charge sharing arrangement shown schematically inFIG.7can contribute to a net energy saving (with respect to the previously proposed arrangement ofFIG.6) when a display device is transitioned between a first display state and a second, different, display state. Although the example ofFIGS.6and7has been with respect to seven-segment displays, the same techniques and underlying potential benefits may apply in respect of other types of displays having multiple display elements, for example pixelated displays. FIG.8provides a schematic overview of circuitry capable of performing the operations ofFIG.7. This can operate in respect of various different display devices as discussed above but in some examples the display device comprises one or more alphanumeric character displays such as seven-segment displays. For the sake of simplicity of the diagram ofFIG.8, only the LHS620and RHS630display elements of a particular example seven-segment display are illustrated (being shown by schematic capacitor symbols). Note that the circuitry ofFIG.8may be implemented without the presence of a display device, in the form of circuitry for connection to a display device, or may incorporate the display device as well. Image generator circuitry800(to generate a display image for display by the display device) and detector circuitry810may be implemented by, for example, the processing circuitry130ofFIG.1, for example a CPU or other processing element. Driver circuitry820corresponds to the driver circuitry420ofFIG.4so as to generate control signals830(corresponding schematically to the control signals440,450) in response to data840(corresponding to the information430) defining a required display configuration or state to be displayed. The detector circuitry810is configured to detect, for a given display image transition from a current display image to a second, different, display image, a first set of one or more display elements which are in a respective first state controlled by a first stored electrical charge in the current display image and which are required to be in a respective second state controlled by a second stored electrical charge, lower than the first stored electrical charge, in the second display image. With reference to the example ofFIG.7, the first set of one or more display elements is represented by the LHS element620.
In the current display image represented by the display state600, these are in a first (activated) state and in the second display image represented by the display state610, these are in a second state (deactivated) controlled by a (lower) second stored electrical charge. In other words, the detected or identified first display element(s) are elements which are to be deactivated or at least reduced in terms of their stored charge between the current display image and the second (for example subsequent) display image. Switching circuitry, comprising a switching controller850and one or more switches860, is responsive to the detector circuitry, to divert electrical charge from display elements of the set of one or more display elements to a secondary charge store in response to initiation of the display image transition. The switching controller850controls the switches860by schematic control signals855. In the example ofFIGS.7and8, the secondary charge store referred to above is represented by the so-called RHS element630. Other examples will be described below in which charge is diverted to a different type of secondary charge store, but in the context of these first examples, the detector circuitry810is configured to detect, for the given display image transition, a second set of one or more display elements which are in a respective second state controlled by a second stored electrical charge in the current display image and which are required to be in a respective first state controlled by a first stored electrical charge, greater than the second stored electrical charge, in the second display image. In other words, the second set of one or more display elements corresponds to display elements to be newly activated, or at least ones which are to have their stored charge increased by the given display image transition. In the example ofFIG.7, this corresponds to the RHS element630, as an example of the secondary charge store comprising one or more of the second set of one or more display elements. Therefore, in the example arrangement ofFIG.8, the control signals830control the charging and discharging of display elements to be activated and deactivated. In other words, the driver circuitry is configured to generate the signal in respect of a display element to provide a required total stored electrical charge dependent upon a required display output of that display element. In examples, for example as discussed with reference to the reduced peak current IredofFIG.7, for a given display element of the second set of one or more display elements, the driver circuitry is configured to generate the signal in respect of the given display element to provide the required total stored electrical charge taking into account electrical charge diverted to the given display element by the switching circuitry. The given display image transition may comprise, for example, a transition from a current display of a current set of one or more alphanumeric characters to a display of a second, different, set of alphanumeric characters. The switching controller850controlling operation of the switches860controls the selective shorting together or connecting together (for the purposes of charge sharing) of display elements identified in respect of a given display image transition by the detector circuitry. In example arrangements this charge sharing applies only to display elements identified in respect of a given display image transition by the detector circuitry.
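In bitmap form, the two sets identified by the detector circuitry810can be computed directly from the current and next images. A minimal sketch (hypothetical names; one bit per display element, with a 1 denoting the first, activated, state):

```c
#include <stdint.h>

typedef struct {
    uint32_t first_set;   /* to be deactivated: charge can be diverted from these */
    uint32_t second_set;  /* to be newly activated: candidates to receive charge  */
} transition_sets;

/* old_img / new_img: one bit per display element, 1 = activated. */
static transition_sets detect_sets(uint32_t old_img, uint32_t new_img)
{
    transition_sets s;
    s.first_set  = old_img & ~new_img;  /* activated now, deactivated next */
    s.second_set = ~old_img & new_img;  /* deactivated now, activated next */
    return s;
}
```

Charge is then shared from the first set into the second set (or into another secondary charge store) before the residue is grounded and the remainder of the required charge is drawn from the power supply.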
Example Circuitry Referring now toFIGS.9and10, example circuitries are provided which are similar in some respects to that ofFIG.5but which also embody the functionality of the switches510ofFIG.5and the switches860ofFIG.8by source switches940under the control of control signals950and charge switches920under the control of control signals930. In the schematic representation ofFIG.9, a single schematic circuitry900represents the functionality of the driver circuitry820and the switching controller850. The switch operation is as follows. When charge switch=0 the relevant charge switch920is connected to ground. When charge switch=1 the relevant charge switch is connected to the respective source switch940. When source switch=1, the respective source switch is connected to the power supply. When source switch=0, the respective source switch is connected to a bus910so as to provide a function of selectively connecting together groups of segments identified by the detector circuitry810(not shown inFIG.9) for the purposes of selective and temporary charge sharing. Therefore, in order to charge a segment from the power supply, charge switch=source switch=1. In order to ground a segment, charge switch=0. In order to share charge, charge switch=1 and source switch=0. Referring toFIG.10, features in common withFIG.9will not be described again but a substantive difference is the presence of an energy storage device such as a capacitor1000between the bus910and ground. Also connected across the capacitor1000is an optional voltage regulator1030which in turn supplies power to, for example, the processing circuitry130. In this example, when a display element is to be deactivated or at least reduced in terms of its stored charge, that stored charge may be diverted by the switches920,940to the bus910which has the effect of charging the capacitor1000rather than (or in addition to) one or more other display elements. This can provide a reserve of energy for use in powering aspects of the overall circuitry ofFIG.1, such as the processing circuitry130. The selection of whether to route charge held by an element to be deactivated into another display element, into the capacitor1000or into both can be made by the detector circuitry810, for example in response to a control signal815indicative of one or more of (a) a current output of the energy harvester110; (b) a prevailing amount of energy stored by the energy storage120; and (c) a prediction of processing tasks to be performed next by the processing circuitry130. When the detector circuitry810detects that more energy is required than is either already available or is likely to be available via energy harvesting, the detector circuitry810may route stored charge from one or more display elements to be deactivated at least in part into the capacitor1000to provide further electrical energy by which to operate the processing circuitry130or other parts of the circuitry ofFIG.1. Similarly, this also implies that if a display device is being entirely deactivated such that no display elements remain activated (or substantially fewer than were previously activated) then a waste of stored charge can be avoided by instead providing that stored charge to the capacitor1000.
When the detector circuitry810detects that there is sufficient energy already stored by the energy storage120and/or the rate of energy harvesting implies that sufficient energy will be available to conduct a next processing operation, the stored charge can simply be routed (for example by suitable switches) to one or more other display elements. In other examples, stored charge can always be provided to the capacitor1000to supplement the energy storage120. The capacitor1000and the energy storage120may in fact be represented by the same set of energy storage components. In this way, the capacitor1000provides an example of the secondary charge store comprising a charge store configured to provide electrical energy to power at least some operations of the overall apparatus. FIGS.11a-11cprovide a timing diagram (FIG.11a) and two schematic flowcharts (FIGS.11band11c) to illustrate various timing considerations.FIG.11drelates to a worked example to be described below. Referring first toFIG.11a, a sequence of time points t0 . . . t4 is shown. The time point t0 represents a time at which the current or initial image is displayed by a particular display element and the time point t4 represents a time at which the next or subsequent image is displayed by that display element. The intervening time points t1 . . . t3 will be described below and relate to stages in an example charge sharing process. Note that the time period between t0 and t4 may be of the order of (say) a few milliseconds, but it may vary according to design parameters in dependence upon (amongst other potential influences) the capacitance of each display element and the RC time constant of the circuitry including the display element, defining in turn a characteristic charge or discharge time for that display element. FIG.11bprovides a schematic flowchart relating to initialisation steps performed in respect of a given image transition. The steps are performed for each display element and involve setting two variables for that display element and that image transition, namely "SRC" (at a step1100) and "CHG" (at a step1110). These are set as follows:
SRC=AND (old_data, new_data)
CHG=OR(old_data, new_data)
Here, "old_data" represents the state of the display element in the current image (for example, a1at a given bit position indicating an activated element) and "new_data" represents the state of the display element in the next or subsequent image following the given image transition. Note that the switches920,940are controlled on a bit-by-bit basis as discussed earlier in connection withFIGS.9and10. The time points t0 to t4 are represented by respective flowchart steps1120. . .1160and operations are performed as shown below. Here, "source_switch" relates to the state of the switch(es)940relevant to the display element under consideration. Also, as mentioned above, the variable "charge_switch" relates to the state of the switch(es)920relevant to the display element under consideration.
t0 (step1120): source_switch=charge_switch=old_data
t1 (step1130): source_switch=SRC; charge_switch=old_data
t2 (step1140): source_switch=SRC; charge_switch=CHG
t3 (step1150): source_switch=SRC; charge_switch=new_data
t4 (step1160): source_switch=charge_switch=new_data
This arrangement conveniently provides for only one of the two switches being changed at any one time (inFIG.11c, the switch which is changing state is indicated by bold and underlined text). The window in which charge sharing actually takes place in this example is between t2 and t3.
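The schedule above can be expressed compactly in code. In the sketch below, bit i of each value drives the source switch940and the charge switch920for display element i; the function names and print-outs are illustrative stand-ins for the actual switch control signals:

```c
#include <stdint.h>
#include <stdio.h>

/* Stand-ins for driving the switch control signals950and930. */
static void set_source_switches(uint8_t bits) { printf("source_switch=0x%02X\n", bits); }
static void set_charge_switches(uint8_t bits) { printf("charge_switch=0x%02X\n", bits); }

/* Perform the t0..t4 schedule for one image transition. Only one of the
 * two switch banks changes at each step; charge sharing takes place
 * between t2 and t3, while source_switch=SRC and charge_switch=CHG. */
static void transition(uint8_t old_data, uint8_t new_data)
{
    uint8_t SRC = old_data & new_data;  /* SRC = AND(old_data, new_data) */
    uint8_t CHG = old_data | new_data;  /* CHG = OR(old_data, new_data)  */

    set_source_switches(old_data);      /* t0 */
    set_charge_switches(old_data);      /* t0 */
    set_source_switches(SRC);           /* t1 */
    set_charge_switches(CHG);           /* t2: charge sharing begins */
    set_charge_switches(new_data);      /* t3: charge sharing ends   */
    set_source_switches(new_data);      /* t4 */
}

int main(void)
{
    /* The seven-segment example described below: "3" (0x4F) to "4" (0x66),
     * giving SRC=0x46 and CHG=0x6F. */
    transition(0x4F, 0x66);
    return 0;
}
```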
In one example, using four bit values for old_data and new_data, consider the example in which old_data=1100 and new_data=0110. Here, SRC=0100 and CHG=1110. This leads to the following actions:
t0 (step1120): source_switch=charge_switch=1100
t1 (step1130): source_switch=0100; charge_switch=1100
t2 (step1140): source_switch=0100; charge_switch=1110
t3 (step1150): source_switch=0100; charge_switch=0110
t4 (step1160): source_switch=charge_switch=0110
Referring now to another example seven-segment display as illustrated inFIG.11d, each segment is controlled by a respective bit of a control value, with the segments d0-d6 being controlled by old_data or new_data bits 0 to 6 respectively. Consider a further example in which the display is being changed from displaying a value 3 to a value 4. At the start of the process (t0) old_data (d7:0) will be 0x4F (binary 1001111) so that the number 3 is displayed. The value new_data, in order to define a displayed number 4, is 0x66 (binary 1100110). To derive "SRC" and "CHG":
SRC=AND (old_data, new_data)=AND (0x4F, 0x66)=0x46 (binary 1000110)
CHG=OR(old_data, new_data)=OR(0x4F, 0x66)=0x6F (binary 1101111)
In some example arrangements, there can be multiple different ways of representing a required alphanumeric character and, in the context of the techniques discussed above, a selection between these can provide various useful attributes relating to energy and charge management. As an alternative to a seven-segment display,FIG.12schematically illustrates an example pixelated display by which a display image can be represented by changing the state of pixels1200each of which operates in the same manner as a display element discussed above. In the example ofFIG.12, display elements can be set to an activated state (drawn as black) or a deactivated state (drawn as white), although in other examples different colours or different greyscale options may be provided. It will be appreciated that one example representation of the numeral7is shown inFIG.12a, but by activation of different pixels other representations such as that shown inFIG.12bcould be provided and which will be readily understood by a user to represent the same numeral character. Similarly, with reference toFIGS.13aand13b, the numeral9could be displayed as a representation1300or as a representation1310, either of which would be readily understood by a user. This then provides for a selection by, for example, the image generator circuitry800operating in collaboration with the detector circuitry810of a form of representation of a required alphanumeric character (of the "second set" relating to the next or subsequent image to be displayed), the image generator circuitry selecting the representation from two or more candidate representations in response to an amount of electrical energy currently available for use by the apparatus. For example, it may be possible to select a next representation which requires fewer display elements to transition from deactivated to activated or indeed from activated to deactivated than another potential representation, thereby potentially saving some energy which would otherwise be used or lost as part of the display element transitions. In this way, the image generator circuitry may be configured to select the representation from the two or more candidate representations in response to a quantity of display elements in a different state as between the current display image and each of the candidate representations.
For example, to save power the image generator circuitry may be configured to select a representation from the two or more candidate representations having a lowest quantity of display elements in a different state as between the current display image and the selected representation. In other examples, if there is currently a large amount of stored energy available, it may be appropriate to elect to store some of this in the display, for example by the image generator circuitry being configured to select a representation from the two or more candidate representations having a highest quantity of display elements in a different state as between the current display image and the selected representation. In other examples, the display elements themselves can be used as supplementary charge storage so as to provide additional stored energy over and above that held by the energy storage120and/or the capacitor1000. This stored energy can be liberated for use by the processing circuitry via the mechanism described above with reference toFIG.10. In such arrangements the image generator circuitry may be configured to select a representation from the two or more candidate representations in dependence upon a quantity of display elements required to be in the first state in the selected representation. In any of these cases, the assessment of which representation to use can take into account optional variations in state for the display elements. For example, representations may be in different display colours (for a colour display) or different monochrome intensities. So the choice between representations can be not only as between a first and a second representation which make use of different display elements but also between representations with different colours or intensities or the like. Various examples will be described now with reference toFIGS.13ato15. In the example ofFIG.13a, a pair of candidate representations of the numeral9is shown, in which a representation1305has a lower intensity than a representation1300. In the present examples, a lower intensity is representative of a lower stored charge by the relevant display elements so that the lower intensity representation may be used in the context of a detection of a lower quantity of energy stored or available via the energy harvester110. In the example ofFIG.13b, different forms of representation1310,1315of the example numeral9are provided, each of which is readily understandable by a user. Selection between them can be in response to how many of the display elements need to transition in order to arrive at one of the representations or by virtue of the fact that the representation1310has fewer activated display elements than the representation1315. FIGS.14aand14brelate to display transitions between a representation of a first alphanumeric character "9"1400(FIG.14a) or "6"1410(FIG.14b) into a representation of the alphanumeric character "1"1420,1430. Either of the candidate representations1420,1430is readily understandable by a user as the required alphanumeric character "1" but the selection between them is made by the image generator circuitry, optionally in collaboration with the detector circuitry, so as to reduce the number of display elements which need to perform a transition, and in particular those display elements which need to perform a transition from a deactivated (or less activated) state to an activated (or more activated) state.
In each ofFIGS.14aand14b, the selection of the respective representation1420,1430results in no display elements having to perform a transition from deactivated to activated. InFIG.15, again a pair of equally readable representations1500,1510are provided of the alphanumeric character "9" but, in the context of a pixelated display such as that shown inFIGS.12aand12b, the number of activated pixels differs between the two representations, so that, for example, a readable representation1510can be provided which provides greater charge storage than another readable representation1500. FIG.16provides a schematic representation in the form of a flowchart of this type of technique in which, at a step1600, the image generator circuitry generates the required display data (for example, an alphanumeric character "9"). Operating potentially in collaboration with the detector circuitry, a detection is made of the currently harvested energy at a step1610and/or of the currently stored energy at a step1620, so that at a step1630the image generator circuitry generates a representation of the required display data from two or more candidate representations according to the techniques described above, which is to say in order to provide energy storage in the case of potentially surplus energy and/or in order to provide reduced energy consumption by reducing display element transitions in the case of a potential shortfall of energy. The various switches510,920,940may be configured to allow for different permutations of charging, discharging or charge-sharing amongst the display elements. For example, display elements can be charged from the power supply one at a time or in parallel. For charge sharing either with other display elements or with the capacitor1000, display elements can be connected in series, for example to provide a higher voltage which might be required to run some or all operations of the processing circuitry130such as memory access or wireless operations, and/or to provide for a net flow of charge to another already partially charged display element or capacitor. As a possible additional technique for use with any of the techniques described above,FIG.17schematically illustrates the use of detection circuitry. Here, a part of the circuitry relating to one display element is represented schematically. Transistors1700provide the switching510relating to the common rail applicable to the display element. The display element itself is connected to a terminal1710. A schematic switch1705connected to a bus or rail1720represents example functionality of the switch920. A further feature inFIG.17, however, is an analogue-to-digital converter (ADC)1730or other detection circuitry configured to sample the prevailing charge level at the display element (for example, by sampling the voltage across that display element which, for a particular capacitance value, is indicative of the stored charge) and providing an output1735indicative of the prevailing charge level to the driver circuitry820and/or switching controller850.FIG.18is a schematic flowchart indicative of how this information may be handled. At a step1800, the stored charge for a display element is detected using, for example, the ADC1730. At a step1810, the operations described above may potentially be varied in dependence upon the detection.
Examples of such variation may include one or more of the following: If the remaining charge in a display element is less than the energy cost to turn on a switch for charge-sharing then the switching controller850may elect not to turn on the switch and not to perform charge sharing at that time. If different display elements have varying levels of charge then the switching controller850may prioritize operations by sharing first (or only) from the ones with highest energy difference between current and next states. Further, the detected amount of charge can be used as part of the process to elect whether to divert stored charge to the capacitor1000for use in powering the processing circuitry130, and/or to elect whether to connect display elements in series or parallel (or to treat them individually) for charge sharing as discussed above. FIG.19is a schematic flowchart illustrating a method comprising:
controlling (at a step1900) display of a prevailing display image by display elements of a display device, by generating a signal providing electrical charge for storage by display elements, in which an electrical charge stored by a display element controls a display output of that display element;
detecting (at a step1910), for a given display image transition from a current display image to a second, different, display image, a first set of one or more display elements which are in a respective first state controlled by a first stored electrical charge in the current display image and which are required to be in a respective second state controlled by a second stored electrical charge, lower than the first stored electrical charge, in the second display image; and
diverting (at a step1920), in response to the detection, electrical charge from display elements of the set of one or more display elements to a secondary charge store in response to initiation of the display image transition.
In the present application, the words "configured to . . . " are used to mean that an element of an apparatus has a configuration able to carry out the defined operation. In this context, a "configuration" means an arrangement or manner of interconnection of hardware or software. For example, the apparatus may have dedicated hardware which provides the defined operation, or a processor or other processing device may be programmed to perform the function. "Configured to" does not imply that the apparatus element needs to be changed in any way in order to provide the defined operation. Although illustrative embodiments of the invention have been described in detail herein with reference to the accompanying drawings, it is to be understood that the invention is not limited to those precise embodiments, and that various changes and modifications can be effected therein by one skilled in the art without departing from the scope and spirit of the invention as defined by the appended claims.
11862068 | DETAILED DESCRIPTION Hereinafter, embodiments of the present disclosure will be described in detail with reference to the attached drawings, such that those skilled in the art can easily implement the present inventive concept. The present disclosure may be implemented in various forms, and is not limited to the embodiments described herein below. In the drawings, parts which are not related to the present disclosure will be omitted to explain the present disclosure more clearly. Reference should be made to the drawings, in which similar reference numerals are used throughout the different drawings to designate similar components. For reference, the size of each component and the thicknesses of lines illustrating the component are arbitrarily expressed for the sake of explanation, and the present disclosure is not limited to those illustrated in the drawings. In the drawings, the thicknesses of the components may be exaggerated to clearly express several layers and areas.

FIG. 1 is a diagram illustrating a display device 10 in accordance with an embodiment of the present disclosure. The display device 10 in accordance with an embodiment of the present disclosure may include a timing controller 11, a data driver 12, a scan driver 13, a pixel area 14, and a sensor 15.

The timing controller 11 may receive gray scale values and control signals for each image frame from an external processor. The timing controller 11 may render the gray scale values in accordance with specifications of the display device 10. For example, the external processor may provide a red gray-scale value, a green gray-scale value, and a blue gray-scale value for each unit dot. However, for example, in the case where the pixel area 14 has a pentile structure, because adjacent unit dots may share a pixel, the pixels may not be in one-to-one correspondence with the respective gray scale values. In this case, there is a need to render the gray scale values. If the pixels are in one-to-one correspondence with the respective gray scale values, the rendering of the gray scale values may not be required. Gray scale values that have been rendered or have not been rendered may be provided to the data driver 12. Furthermore, the timing controller 11 may provide control signals to the data driver 12, the scan driver 13, the sensor 15, etc. to display images.

The data driver 12 may generate data voltages to be provided to data lines D1, D2, D3, and Dm using the gray scale values and the control signals. For example, the data driver 12 may sample the gray scale values using a clock signal, and apply data voltages corresponding to the gray scale values to the data lines D1 to Dm one row at a time. Here, m is an integer greater than 0.

The scan driver 13 may receive a clock signal, a scan start signal, etc. from the timing controller 11 and generate first scan signals to be provided to first scan lines S11, S12, and S1n and second scan signals to be provided to second scan lines S21, S22, and S2n. Here, n is an integer greater than 0. The scan driver 13 may sequentially supply the first scan signals each having a turn-on level pulse to the first scan lines S11, S12, and S1n. The scan driver 13 may sequentially supply the second scan signals each having a turn-on level pulse to the second scan lines S21, S22, and S2n. For example, the scan driver 13 may include a first scan driver coupled to the first scan lines S11, S12, and S1n, and a second scan driver coupled to the second scan lines S21, S22, and S2n.
The first scan driver and the second scan driver each may include scan stages having shift registers. The first scan driver and the second scan driver each may generate scan signals by sequentially transmitting a scan start signal having a turn-on level pulse to a subsequent stage under control of a clock signal. In some embodiments, the first scan signals and the second scan signals may be the same as each other. In this case, the first scan line and the second scan line in each pixel may be coupled to the same node to receive a same scan signal. In this case, the scan driver 13 may include a single scan driver.

The sensor 15 may receive a control signal from the timing controller 11 and supply an initialization voltage to sensing lines I1, I2, I3, and Im and/or receive sensing signals from the sensing lines I1, I2, I3, and Im. For example, the sensor 15 may supply an initialization voltage to the sensing lines I1, I2, I3, and Im during an initialization period in a display period. For example, the sensor 15 may receive sensing signals from the sensing lines I1, I2, I3, and Im during a sensing period. The sensor 15 may include sensing channels coupled to the sensing lines I1, I2, I3, and Im. For example, the sensing lines I1, I2, I3, and Im may be in one-to-one correspondence with the sensing channels in the sensor 15.

The pixel area 14 may include pixels PX1, PX2, PX3, PX4, PX5, PX6, PX7, and PX8. Each pixel may be coupled to a corresponding data line, a corresponding scan line, and a corresponding sensing line. A first pixel PX1 may be coupled to scan lines S1i and S2i, a data line Dj, and a sensing line Ij as disclosed in FIG. 3. A second pixel PX2, a third pixel PX3, and a fourth pixel PX4 may be coupled to the same scan lines S1i and S2i as that of the first pixel PX1 as disclosed in FIG. 4. However, the first to fourth pixels PX1, PX2, PX3, and PX4 may be coupled to different data lines Dj, D(j+1), D(j+2), and D(j+3) and different sensing lines Ij, I(j+1), I(j+2), and I(j+3), respectively. Here, i and j each may be an integer greater than or equal to 0. A fifth pixel PX5 may be coupled to scan lines S1(i+1) and S2(i+1), the data line Dj, and the sensing line Ij. A sixth pixel PX6, a seventh pixel PX7, and an eighth pixel PX8 may be coupled to the same scan lines S1(i+1) and S2(i+1) as that of the fifth pixel PX5. However, the fifth to eighth pixels PX5, PX6, PX7, and PX8 may be coupled to different data lines Dj, D(j+1), D(j+2), and D(j+3) and different sensing lines Ij, I(j+1), I(j+2), and I(j+3), respectively.

In an embodiment, the pixels PX1, PX2, PX3, and PX4 that are coupled to the same scan lines S1i and S2i may include a first group of pixels PX1 and PX3 (odd-numbered pixels) and a second group of pixels PX2 and PX4 (even-numbered pixels). The first group of pixels PX1 and PX3 and the second group of pixels PX2 and PX4 may be alternately arranged. For example, the first group of pixels PX1 and PX3 may include pixels coupled to odd-numbered data lines, and the second group of pixels PX2 and PX4 may include pixels coupled to even-numbered data lines.

In an embodiment, during a first period, the sensor 15 may store first sampling signals in first sampling capacitors CS2a in first sensing channels 151 which correspond to the first group of pixels PX1 and PX3. Here, the first sampling signals may include characteristic information, for example, mobility characteristic information, about the first group of pixels PX1 and PX3 and the common mode noise.
Furthermore, during the first period, the sensor 15 may store second sampling signals in second sampling capacitors CS2b in second sensing channels 152 which correspond to the second group of pixels PX2 and PX4. Here, the second sampling signals may not include characteristic information about the second group of pixels PX2 and PX4 but include the common mode noise only. Since the first sampling signals and the second sampling signals have been stored during a same period (the first period), the first and second sampling signals may include a common mode noise which is included in the first sensing channels 151 and the second sensing channels 152. Therefore, characteristic information about the first group of pixels PX1 and PX3 which does not include the common mode noise may be obtained by removing the common mode noise stored in the second sampling capacitors CS2b from the first sampling signals stored in the first sampling capacitors CS2a.

During a second period following the first period, first sensing capacitors CS1a of the first sensing channels 151 may be initialized. Also, during the second period, second sensing capacitors CS1b of the second sensing channels 152 may be initialized. Depending on the connection (e.g., whether or not a switch exists) between the sampling capacitors CS2a and CS2b and the sensing capacitors CS1a and CS1b, a process of acquiring the above-mentioned characteristic information may be performed during a period subordinate to the second period or during a period independent from the second period.

During a third period following the second period, the sensor 15 may store third sampling signals in the first sampling capacitors CS2a in the first sensing channels 151 which correspond to the first group of pixels PX1 and PX3. Here, the third sampling signals may not include characteristic information about the first group of pixels PX1 and PX3 but include the common mode noise only. Furthermore, during the third period, the sensor 15 may store fourth sampling signals in the second sampling capacitors CS2b in the second sensing channels 152 which correspond to the second group of pixels PX2 and PX4. Here, the fourth sampling signals may include characteristic information about the second group of pixels PX2 and PX4 and the common mode noise. Since the third sampling signals and the fourth sampling signals have been stored during a same period (the third period), the third and fourth sampling signals may include a common mode noise which is included in the first sensing channels 151 and the second sensing channels 152. Therefore, characteristic information about the second group of pixels PX2 and PX4 which does not include the common mode noise may be obtained by removing the common mode noise stored in the first sampling capacitors CS2a from the fourth sampling signals stored in the second sampling capacitors CS2b.

Likewise, during a fourth period following the third period, the sensor 15 may store characteristic information about a first group of pixels PX5 and PX7 coupled to scan lines S1(i+1) and S2(i+1) next to the scan lines S1i and S2i. During a fifth period following the fourth period, a process of initializing the sensing capacitors may be performed. During a sixth period following the fifth period, the sensor 15 may store characteristic information about a second group of pixels PX6 and PX8.

FIGS. 2 to 4 are diagrams for describing a method of driving the display device during a display period in accordance with an embodiment of the present disclosure.
FIG. 2 illustrates examples of waveforms of signals applied to scan lines S1(i−1), S2(i−1), S1i, S2i, S1(i+1), and S2(i+1), data lines Dj and D(j+1), and sensing lines Ij and I(j+1) pertaining to the first pixel PX1 and the second pixel PX2 during an N-th frame period FRAMEN and an N+1-th frame period FRAME(N+1).

An example of the configuration of a first pixel PX1 and a first sensing channel 151 will be described with reference to FIG. 3. The first pixel PX1 may include transistors T1a, T2a, and T3a, a storage capacitor Ca, and a light emitting diode LDa. The transistors T1a, T2a, and T3a each may be an N-type transistor. In an embodiment, the transistors T1a, T2a, and T3a each may be a P-type transistor. In an embodiment, the transistors T1a, T2a, and T3a each may be a complementary transistor which includes an N-type transistor and a P-type transistor. A "P-type transistor" is a transistor in which the amount of current flowing through the channel increases when the voltage difference between the gate electrode and the source electrode increases in a negative direction. An "N-type transistor" is a transistor in which the amount of current flowing through the channel increases when the voltage difference between the gate electrode and the source electrode increases in a positive direction. Each transistor may be a thin film transistor (TFT), a field effect transistor (FET), or a bipolar junction transistor (BJT).

A first transistor T1a may include a gate electrode coupled to a first node N1a, a first electrode coupled to a first power supply ELVDD, and a second electrode coupled to a second node N2a. The first transistor T1a may be referred to as "a driving transistor". A second transistor T2a may include a gate electrode coupled to the first scan line S1i, a first electrode coupled to the data line Dj, and a second electrode coupled to the first node N1a. The second transistor T2a may be referred to as "a scanning transistor". A third transistor T3a may include a gate electrode coupled to the second scan line S2i, a first electrode coupled to the second node N2a, and a second electrode coupled to the sensing line Ij. The third transistor T3a may be referred to as "a sensing transistor".

The storage capacitor Ca may include a first electrode coupled to the first node N1a, and a second electrode coupled to the second node N2a. The light emitting diode LDa may include an anode coupled to the second node N2a, and a cathode coupled to a second power supply ELVSS. Generally, the voltage of the first power supply ELVDD may be greater than that of the second power supply ELVSS. However, for example, in a special case where there is a need to prevent the light emitting diode LDa from emitting, the voltage of the second power supply ELVSS may be set to a value greater than that of the first power supply ELVDD.

The first sensing channel 151 may include switches SW2a to SW8a, a first sensing capacitor CS1a, a first amplifier AMPa, and a first sampling capacitor CS2a. The second switch SW2a may include a first end coupled to a third node N3a, and a second end coupled to an initialization power supply VINT. The first amplifier AMPa may include a first input terminal (e.g., a non-inverting terminal) coupled to a reference power supply VREF. The first amplifier AMPa may be formed of an operational amplifier. The third switch SW3a may include a first end coupled to the third node N3a and a second end coupled to a second input terminal (e.g., an inverting terminal) of the first amplifier AMPa.
The first sensing capacitor CS1a may include a first electrode coupled to the second input terminal of the first amplifier AMPa and a second electrode coupled to an output terminal of the first amplifier AMPa. The first sampling capacitor CS2a may be coupled to the first sensing capacitor CS1a through at least one switch (e.g., SW5a and SW6a). The fourth switch SW4a may include a first end coupled to the first electrode of the first sensing capacitor CS1a and a second end coupled to the second electrode of the first sensing capacitor CS1a. The fifth switch SW5a may include a first end coupled to the output terminal of the first amplifier AMPa and a second end coupled to a fourth node N4a. The sixth switch SW6a may include a first end coupled to the fourth node N4a and a second end coupled to a first electrode of the first sampling capacitor CS2a. The seventh switch SW7a may include a first end coupled to the first electrode of the first sampling capacitor CS2a and a second end coupled to an analog-digital converter ADC1. The eighth switch SW8a may include a first end coupled to the third node N3a, and a second end coupled to the fourth node N4a.

The sensor 15 may include the first sensing channel 151 and the analog-digital converter ADC1. For example, the sensor 15 may include analog-digital converters ADC1 and ADC2. The number of the analog-digital converters ADC1 and ADC2 may correspond to the number of sensing channels 151 and 152. In an embodiment, the sensor 15 may include a single analog-digital converter, and convert sampling signals stored in the sensing channels in a time-sharing manner.

Transistors T1b, T2b, and T3b, a storage capacitor Cb, and a light emitting diode LDb that are included in the second pixel PX2 of FIG. 4 have substantially the same configurations as those of the transistors T1a, T2a, and T3a, the storage capacitor Ca, and the light emitting diode LDa that are included in the first pixel PX1; therefore, repetitive explanation thereof will be omitted. Furthermore, switches SW2b to SW8b, a second sensing capacitor CS1b, a second amplifier AMPb, and a second sampling capacitor CS2b that are included in the second sensing channel 152 of FIG. 4 have substantially the same configurations as those of the switches SW2a to SW8a, the first sensing capacitor CS1a, the first amplifier AMPa, and the first sampling capacitor CS2a that are included in the first sensing channel 151; therefore, repetitive explanation thereof will be omitted.

Referring to FIG. 2 again, during a display period, for example, a data writing period, the sensing lines Ij and I(j+1) are coupled with the initialization power supply VINT. During the display period, the second switches SW2a and SW2b may be turned on. During the display period, the third switches SW3a and SW3b and the eighth switches SW8a and SW8b may be turned off. Hence, the sensing lines Ij and I(j+1) may be prevented from being coupled to other power supplies (e.g., VREF).

During the display period, data voltages DS(i−1)j to DS(i+2)(j+1) may be sequentially applied to the data lines Dj and D(j+1). Scan signals having a turn-on level (high level) may be sequentially applied to the first scan lines S1(i−1), S1i, and S1(i+1). Also, scan signals having a turn-on level may be applied to the second scan lines S2(i−1), S2i, and S2(i+1) in synchronization with the first scan signals applied to the first scan lines S1(i−1), S1i, and S1(i+1).
In an embodiment, during the display period, for example, the data writing period, scan signals having a turn-on level may always be applied to the second scan lines S2(i−1), S2i, and S2(i+1). For example, if scan signals having a turn-on level are applied to the i-th first scan line S1i and the i-th second scan line S2i, the second transistors T2a and T2b and the third transistors T3a and T3b may be turned on. Therefore, a voltage corresponding to a difference between a data voltage DSij and the initialization power supply VINT is stored in the storage capacitor Ca of the first pixel PX1, and a voltage corresponding to a difference between a data voltage DSi(j+1) and the initialization power supply VINT is stored in the storage capacitor Cb of the second pixel PX2.

In the first pixel PX1, depending on the difference in voltage between the gate electrode and the source electrode of the first transistor T1a, the amount of driving current flowing through the light emitting diode LDa from the first power supply ELVDD to the second power supply ELVSS may be determined. The emission luminance of the light emitting diode LDa may be determined depending on the amount of driving current. In the second pixel PX2, depending on the difference in voltage between the gate electrode and the source electrode of the first transistor T1b, the amount of driving current flowing through the light emitting diode LDb from the first power supply ELVDD to the second power supply ELVSS may be determined. The emission luminance of the light emitting diode LDb may be determined depending on the amount of driving current.

Subsequently, in a display period, if scan signals having a turn-off level are applied to the i-th first scan line S1i and the i-th second scan line S2i, the second transistors T2a and T2b and the third transistors T3a and T3b may be turned off. Therefore, regardless of a change in voltage of the data lines Dj and D(j+1), the difference in voltage between the gate electrodes and the source electrodes of the first transistors T1a and T1b may be maintained by the storage capacitors Ca and Cb, and the emission luminance of the light emitting diodes LDa and LDb may be maintained during the display period.

FIGS. 5 to 7 are diagrams for describing a method of driving the display device during a sensing period in accordance with an embodiment of the present disclosure. Referring to FIG. 5, the sensing period of the display device 10 in accordance with an embodiment of the present disclosure may include at least three sensing frame periods SFRAME1, SFRAME2, and SFRAME3. During the first sensing frame period SFRAME1, sensing voltages SS(i−1)j to SS(i+2)j may be sequentially applied to the j-th data line Dj. Here, a sensing reference voltage SREF may be applied to the j+1-th data line D(j+1). Furthermore, the sensing lines Ij and I(j+1) may be coupled to the reference power supply VREF.

Referring to FIGS. 6 and 7, the third switches SW3a and SW3b may be turned on. Since the reference power supply VREF is applied to the non-inverting terminals of the first amplifiers AMPa, the non-inverting terminals and the inverting terminals of the first amplifiers AMPa are in a virtual short state. If scan signals having a turn-on level are applied to the i-th first scan line S1i and the i-th second scan line S2i, the second transistors T2a and T2b and the third transistors T3a and T3b may be turned on.
Hence, a sensing voltage SSij may be applied to the first node N1a of the first pixel PX1, and a voltage of the reference power supply VREF may be applied to the second node N2a. A difference in voltage between the sensing voltage SSij and the reference power supply VREF may be greater than the threshold voltage of the first transistor T1a. Hence, the first transistor T1a may be turned on, so that sensing current may flow through a sensing current path connected between the first power supply ELVDD and the first electrode of the first sensing capacitor CS1a (the inverting terminal of the first amplifier AMPa) through the first transistor T1a, the second node N2a, the third transistor T3a, the third node N3a, and the third switch SW3a. The sensing current may include characteristic information of the first transistor T1a and the common mode noise. The sensing current flowing through the first transistor T1a may correspond to Equation 1 below:

Id = (1/2) × (u × Co) × (W/L) × (Vgs − Vth)^2    [Equation 1]

Here, Id may denote the sensing current flowing through the first transistor T1a. u may denote mobility. Co may denote a capacitance formed by the channel, the insulating layer, and the gate electrode of the first transistor T1a. W may denote a width of the channel of the first transistor T1a. L may denote a length of the channel of the first transistor T1a. Vgs may denote a difference in voltage between the gate electrode and the source electrode of the first transistor T1a. Vth may denote a threshold voltage value of the first transistor T1a. Here, Co, W, and L each may be a constant. Vth may be detected by a predetermined detection method (e.g., refer to FIGS. 15 and 16). Vgs may be a difference in voltage between the sensing voltage SSij and the reference power supply VREF.

The voltage of the third node N3a is fixed. Hence, as the sensing current Id is increased, the voltage of the fourth node N4a is reduced. The voltage of the fourth node N4a may be stored in the first sampling capacitor CS2a as a sampling signal. Subsequently, after turning on the seventh switch SW7a, the analog-digital converter ADC1 may calculate the magnitude of the sensing current Id by converting the sampling signal stored in the first sampling capacitor CS2a into a digital signal. Therefore, the mobility u, which is the remaining variable, may be calculated.

However, the first sensing capacitor CS1a may be vulnerable to noise because the capacitance thereof is smaller than that of other elements (e.g., a parasitic capacitance of the sensing line Ij). In an embodiment of the present disclosure, a sampling signal of the adjacent second sensing channel 152 may be further used, and a sampling signal of the first sensing channel 151 and a sampling signal of the second sensing channel 152 may be processed to obtain the characteristic information of the first transistor T1a by removing the common mode noise.

Hence, the sensing reference voltage SREF may be applied to the first node N1b of the second pixel PX2, and the voltage of the reference power supply VREF may be applied to the second node N2b. A difference in voltage between the sensing reference voltage SREF and the reference power supply VREF may be less than the threshold voltage of the first transistor T1b. Therefore, the first transistor T1b may be turned off, and only noise current may flow through the second sensing channel 152. The noise current may not include the characteristic information of the first transistor T1b but include the common mode noise only.
Therefore, the sampling signal stored in the second sampling capacitor CS2b may only include the common mode noise information without including the characteristic information of the first transistor T1b. Thus, mobility characteristic information of the first transistor T1a of the first pixel PX1 from which the common mode noise has been removed may be acquired by sampling signals acquired during the first sensing frame period SFRAME1. Likewise, during the first sensing frame period SFRAME1, mobility characteristic information of a first transistor of the third pixel PX3 from which the common mode noise has been removed may be acquired.

During the second sensing frame period SFRAME2, the pixels may be initialized. For the sake of explanation, the following description will be made only for the first pixel PX1 and the second pixel PX2. For example, the sensing reference voltage SREF may be applied to the data lines Dj and D(j+1), and the sensing lines Ij and I(j+1) may be coupled with the initialization power supply VINT. Scan signals having a turn-on level may be sequentially supplied to the scan lines S1(i−1) to S2(i+1). In an embodiment, the scan signals having a turn-on level may be simultaneously supplied to all of the scan lines S1(i−1) to S2(i+1). Hence, the sensing reference voltage SREF may be stored in the first nodes N1a and N1b of the pixels PX1 and PX2, and the voltage of the initialization power supply VINT may be applied to the second nodes N2a and N2b.

A parasitic capacitance Cpa may be present between the first node N1a of the first pixel PX1 and the i-th first scan line S1i. Also, a parasitic capacitance Cpb may be present between the first node N1b of the second pixel PX2 and the i-th first scan line S1i. Hence, if the pixels are not initialized during the second sensing frame period SFRAME2, the sensing voltage SSij pre-stored in the first node N1a of the first pixel PX1 may affect a sensing voltage SSi(j+1) to be written to the first node N1b of the second pixel PX2 during the third sensing frame period SFRAME3. In other words, a horizontal crosstalk issue may occur.

Mobility characteristic information of the first transistor T1b of the second pixel PX2 from which the common mode noise has been removed may be acquired by sampling signals acquired during the third sensing frame period SFRAME3. Likewise, during the third sensing frame period SFRAME3, mobility characteristic information of a first transistor of the fourth pixel PX4 from which the common mode noise has been removed may be acquired. The third sensing frame period SFRAME3 is similar to the first sensing frame period SFRAME1 except for the fact that the sensing target pixels are the different pixels PX2 and PX4; therefore, repetitive explanation thereof will be omitted.

FIGS. 8 to 14 are diagrams for describing a method of driving the display device during a sensing period in accordance with an embodiment of the present disclosure. Referring to FIG. 8, during a sensing frame period SFRAME′, sensing voltages SS(i−1)j, SSij, and SS(i+1)j may be sequentially supplied to the j-th data line Dj, and sensing voltages SS(i−1)(j+1), SSi(j+1), and SS(i+1)(j+1) may be sequentially supplied to the j+1-th data line D(j+1).
In synchronization with supply timings of the sensing voltages SS(i−1)(j+1), SSi(j+1), and SS(i+1)(j+1), scan signals having a turn-on level may be sequentially supplied to the first scan lines S1(i−1), S1i, and S1(i+1), and scan signals having a turn-on level may be sequentially supplied to the second scan lines S2(i−1), S2i, and S2(i+1). The sensing lines Ij and I(j+1) may be coupled with the reference power supply VREF. A first time t1 may be a time during the first period. A second time t2 may be a time during the second period. A third time t3 may be a time during the third period. The first period, the second period, and the third period may be sequential in time and may not overlap with each other.

The first time t1 will be described with reference to FIGS. 9 and 10. The first period may be a first sensing period, and the first time t1 may be a first sensing time. A first sensing channel 151′ may further include a first switch SW1a, as compared to the first sensing channel 151 of FIG. 3. The first switch SW1a may include a first end coupled to the j-th sensing line Ij, and a second end coupled to the third node N3a. The other components of the first sensing channel 151′ are substantially the same as those of the first sensing channel 151 of FIG. 3; therefore, repetitive explanation thereof will be omitted. A second sensing channel 152′ may further include a first switch SW1b as compared to the second sensing channel 152 of FIG. 4. The first switch SW1b may include a first end coupled to the j+1-th sensing line I(j+1), and a second end coupled to the third node N3b. The other components of the second sensing channel 152′ are substantially the same as those of the second sensing channel 152 of FIG. 4; therefore, repetitive explanation thereof will be omitted.

During the first period, the first sensing channel 151′ may store a first sampling signal SS1 in the first sampling capacitor CS2a by connecting the j-th sensing line Ij to the first sensing channel 151′. For example, the first switch SW1a may be in a turned-on state. A process of storing the first sampling signal SS1 is substantially the same as that described with reference to FIG. 6; therefore, repetitive explanation thereof will be omitted. During the first period, the second sensing channel 152′ may store a second sampling signal SS2 in the second sampling capacitor CS2b while disconnecting the j+1-th sensing line I(j+1) from the second sensing channel 152′. For example, the first switch SW1b may be in a turned-off state. Therefore, even when the first transistor T1b is in a turned-on state, sensing current may be prevented from flowing into the second sensing channel 152′. Therefore, the second sampling signal SS2 stored in the second sampling capacitor CS2b may include only noise information without including the characteristic information of the first transistor T1b.

The second time t2 will be described with reference to FIGS. 11 and 12. The second period may be an initialization and conversion period. The second time t2 may be an initialization and conversion time. In some embodiments, depending on switching conditions, an initialization period and a conversion period may be separated from each other. The conversion period may correspond to any one of a period after the first period or a period before the third period. During the second period, the first sensing channel 151′ may initialize the first sensing capacitor CS1a while disconnecting the first sensing line Ij from the first sensing channel 151′. For example, the fourth switch SW4a may be turned on.
Therefore, the voltages of the first and second electrodes of the first sensing capacitor CS1a become equal to each other, whereby the first sensing capacitor CS1a may be discharged. Here, the sixth switch SW6a is turned off, so that the initialization of the first sensing capacitor CS1a is prevented from affecting the first sampling signal SS1 stored in the first sampling capacitor CS2a. During the second period, the second sensing channel 152′ may initialize the second sensing capacitor CS1b while disconnecting the second sensing line I(j+1) from the second sensing channel 152′. For example, the fourth switch SW4b may be turned on. Therefore, the voltages of the first and second electrodes of the second sensing capacitor CS1b become equal to each other, whereby the second sensing capacitor CS1b may be discharged. Here, the sixth switch SW6b is turned off, so that the initialization of the second sensing capacitor CS1b is prevented from affecting the second sampling signal SS2 stored in the second sampling capacitor CS2b. In some embodiments, depending on switching conditions, the initialization period of the second sensing capacitor CS1b may differ from the initialization period of the first sensing capacitor CS1a.

During the conversion period, the seventh switches SW7a and SW7b may be turned on. Therefore, the analog-digital converters ADC1 and ADC2 may convert the corresponding sampling signals SS1 and SS2 to digital signals. If the sensor 15′ includes a single analog-digital converter, turn-on periods of the seventh switches SW7a and SW7b may not overlap with each other. As the first sampling signal SS1 and the second sampling signal SS2 are processed to obtain the characteristic information of the first transistor T1a by removing the common mode noise, characteristic information of the first transistor T1a from which the common mode noise has been removed may be acquired.

The third time t3 will be described with reference to FIGS. 13 and 14. The third period may be a second sensing period, and the third time t3 may be a third sensing time. During the third period, the first sensing channel 151′ may store a third sampling signal SS3 in the first sampling capacitor CS2a while disconnecting the j-th sensing line Ij from the first sensing channel 151′. For example, the first switch SW1a may be in a turned-off state. Therefore, even when the first transistor T1a is in a turned-on state, sensing current may be prevented from flowing through the first sensing channel 151′. Therefore, the third sampling signal SS3 stored in the first sampling capacitor CS2a may include only noise information without including the characteristic information of the first transistor T1a. During the third period, the second sensing channel 152′ may store a fourth sampling signal SS4 in the second sampling capacitor CS2b by connecting the j+1-th sensing line I(j+1) to the second sensing channel 152′. For example, the first switch SW1b may be in a turned-on state. A process of storing the fourth sampling signal SS4 is substantially the same as that described with reference to FIG. 6; therefore, repetitive explanation thereof will be omitted.

A fourth time t4 may be a time during the fourth period. A fifth time t5 may be a time during the fifth period. A sixth time t6 may be a time during the sixth period. The fourth period, the fifth period, and the sixth period may be sequential in time and may not overlap with each other.
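Although the disclosure describes this processing in terms of circuit elements, the arithmetic it performs can be sketched numerically. The following Python fragment is a simplified model under assumed values (the helper names, the conversion of sampled signals to an equivalent current, and the numeric constants are all illustrative, not part of the disclosed circuit): it subtracts the noise-only sample from the signal-plus-noise sample, then inverts Equation 1 for the mobility, with Vth assumed already known from the threshold-voltage sensing of FIGS. 15 and 16.

# Simplified numerical model (illustrative assumptions, not the disclosed
# circuit): common-mode noise removal between paired sensing channels,
# followed by mobility extraction by inverting Equation 1.

def remove_common_mode(signal_plus_noise, noise_only):
    # SS1 - SS2 for the first period, or SS4 - SS3 for the third period.
    return signal_plus_noise - noise_only

def mobility_from_current(i_d, co, channel_w, channel_l, v_gs, v_th):
    # Invert Equation 1: Id = (1/2) * u * Co * (W/L) * (Vgs - Vth)^2.
    return 2.0 * i_d / (co * (channel_w / channel_l) * (v_gs - v_th) ** 2)

# Example with made-up values: samples assumed already converted from
# ADC codes to equivalent sensing currents, in amperes.
ss1 = 1.32e-6   # first sampling signal: pixel characteristic + noise
ss2 = 0.07e-6   # second sampling signal: common mode noise only
i_d = remove_common_mode(ss1, ss2)
u = mobility_from_current(i_d, co=3.4e-8, channel_w=4e-4, channel_l=4e-4,
                          v_gs=0.9, v_th=0.4)   # assumed constants
print(f"sensing current {i_d:.3e} A, mobility {u:.1f} cm^2/Vs")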
During the fourth to sixth periods, characteristic information of the pixels PX5, PX6, PX7, and PX8 may be stored; for related details, refer to the description of FIG. 1. In the embodiments of FIGS. 8 to 14, it is possible to sense characteristic information of all of the pixels of the pixel area 14 during one sensing frame period SFRAME′. Thus, there is an advantage in that the required sensing time may be reduced as compared to that of the embodiment of FIGS. 5 to 7, which includes at least three sensing frame periods SFRAME1, SFRAME2, and SFRAME3. Furthermore, in the embodiments of FIGS. 8 to 14, as compared to the embodiment of FIGS. 5 to 7, the number of switching operations of transistors and switches is reduced, and the number of times signals are transmitted from the timing controller 11 to the data driver 12 is reduced. Therefore, the power consumption may be reduced.

FIGS. 15 and 16 are diagrams for describing a method of driving the display device during a threshold voltage sensing period in accordance with an embodiment of the present disclosure. Referring to FIG. 16, unlike the foregoing embodiments, the third switch SW3a and the fifth switch SW5a may remain turned off, and the eighth switch SW8a may remain turned on. Referring to FIG. 15, at a first time t1′, the voltage of the second power supply ELVSS is increased, so that the light emitting diode LDa may be prevented from emitting light. Next, at a second time t2′, since the second switch SW2a is turned on, the j-th sensing line Ij may be initialized to the voltage of the initialization power supply VINT.

At a third time t3′, scan signals having a turn-on level may be applied to the i-th first scan line S1i and the i-th second scan line S2i. Here, a data reference voltage Dref may be applied to the j-th data line Dj. Therefore, the data reference voltage Dref may remain on the first node N1a. Also, the j-th sensing line Ij may be coupled to the second node N2a. The voltage of the second node N2a may increase from the voltage of the initialization power supply VINT to a voltage corresponding to (Dref − Vth). If the voltage of the second node N2a increases to the voltage corresponding to (Dref − Vth), the first transistor T1a is turned off. Consequently, the voltage of the second node N2a no longer increases.

The sixth switch SW6a may be in a turned-on state. Hence, a sampling signal may be stored in the first sampling capacitor CS2a. Here, since the fourth node N4a and the second node N2a are coupled to each other, the sampling signal may include the threshold voltage value Vth of the first transistor T1a. After the seventh switch SW7a is turned on, the analog-digital converter ADC1 may convert the sampling signal to a digital signal to obtain the threshold voltage of the first transistor T1a.

In a display device and a method of driving the display device in accordance with an embodiment, different characteristics of transistors may be compensated for. Example embodiments have been disclosed herein, and although specific terms are employed, they are used and are to be interpreted in a generic and descriptive sense only and not for purpose of limitation. In some instances, as would be apparent to one of ordinary skill in the art as of the filing of the present application, features, characteristics, and/or elements described in connection with a particular embodiment may be used singly or in combination with features, characteristics, and/or elements described in connection with other embodiments unless otherwise specifically indicated.
Accordingly, it will be understood by those of skill in the art that various changes in form and details may be made without departing from the spirit and scope of the present disclosure as set forth in the following claims. | 38,265 |
11862069 | DETAILED DESCRIPTION The following description is intended to convey a thorough understanding of the present disclosure by providing a number of specific embodiments and details involving display systems utilizing micro-light emitting diodes (micro-LEDs). It is understood, however, that the present disclosure is not limited to these specific embodiments and details, which are examples only, and the scope of the disclosure is accordingly intended to be limited only by the following claims and equivalents thereof. It is further understood that one possessing ordinary skill in the art, in light of known systems and methods, would appreciate the use of the disclosure for its intended purposes and benefits in any number of alternative embodiments, depending upon specific design and other needs.

In some display applications in which the pixel architecture for the pixels is implemented as micro-LEDs, such as augmented reality/virtual reality (AR/VR) systems, projectors, phones, tablets, laptops, televisions, and plasma displays, a faster modulation speed than the kilo-Hertz range of conventional LED drivers is required. In some cases, pulses shorter than 1 μs, or even shorter than 100 ns, are required to meet specifications for a satisfactory user experience. Although micro-LEDs are small and therefore have a small capacitance, the rise time for a high-quality micro-LED to switch from a fully-off state to an on-state in which the micro-LED is emitting light can be substantial, on the order of tens or hundreds of nanoseconds, when using conventional driving techniques.

FIGS. 1-9 illustrate techniques for driving micro-LEDs having a lateral dimension smaller than 20 μm to reduce the response time of the micro-LEDs. In some embodiments, a micro-LED driver applies a low baseline power (i.e., a baseline voltage or current) to pre-charge a micro-LED in a nominally-off (i.e., non-light-emitting) state in addition to applying an operating driving power to drive the micro-LED in a light-emitting state. By applying the low baseline power to pre-charge the micro-LED prior to applying the operating driving power, the micro-LED driver significantly decreases the time between application of the operating driving power and onset of emission of light from the micro-LED. The micro-LED driver applies the low baseline power at all times in some embodiments, and in other embodiments, the micro-LED driver conserves power by applying the low baseline power at all times only to specific areas of the display, such as a banner at the top of the display to show icons, that remain illuminated while the remaining areas of the display are off when the display is in a particular operation mode. In some embodiments, the micro-LED driver includes a timing circuit that applies the low baseline power to a set of pixels a short time before that set of pixels will become active. The micro-LED driver applies the low baseline power only to active pixels (i.e., non-dark pixels) in some embodiments. In some embodiments, the micro-LED driver uses a primary power path to supply the operating driving power to drive the micro-LED in the light-emitting state and a secondary power path to supply the baseline power to pre-charge the micro-LED prior to application of the operating driving power.
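As a rough illustration of this two-level drive, the sketch below selects between the operating power, the baseline pre-charge power, and zero power for a given pixel state; the 2.5 V and 2.7 V levels follow the example values quoted later in this description, and the function and constant names are hypothetical.

# Rough illustration (assumed names and values) of the two-level drive:
# a small baseline power pre-charges the micro-LED in the nominally-off
# state so the operating pulse starts from a charged junction rather
# than from zero.

BASELINE_V = 2.5    # nominally-off pre-charge level (example from text)
OPERATING_V = 2.7   # light-emitting drive level (example from text)

def drive_level(nominally_off: bool, pre_charge_enabled: bool) -> float:
    # Return the voltage the driver applies to the micro-LED.
    if not nominally_off:
        return OPERATING_V          # primary power path: emit light
    return BASELINE_V if pre_charge_enabled else 0.0  # secondary path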
In some embodiments, the micro-LED driver applies an operating driving power having multiple phases of current density (referred to herein as a "shaped pulse") to reduce the time between application of the operating driving power and onset of emission of light from the micro-LED. For example, by applying an initial phase having a relatively high current density followed by a second phase having a lower current density, the micro-LED driver reduces the capacitance charging time of the micro-LED. The micro-LED driver applies a shaped pulse instead of, or in addition to, a low baseline power pre-charge of the micro-LED in some embodiments.

In various embodiments, the techniques described herein apply to time-dependent driving of optoelectronic emitters, including LEDs and more particularly micro-LED displays. The terms pulse and power pulse are used herein to generally describe a time-dependent driving scheme, alternating between relatively low input power (i.e., off or nearly off) and a relatively high input power during which light is emitted. The pulses may be current pulses, or voltage, or power pulses. The examples disclosed herein consider a III-nitride LED. However, some of the techniques are applicable to other optoelectronic devices, including semiconductor LEDs (e.g., GaAs, AlInGaP, AlInGaAsP, III-V and II-VI compounds), organic LEDs, perovskites and other materials known in the art.

FIG. 1 is a diagram of a display 100 made up of an array of pixels, such as pixel 102. Each pixel includes a pixel circuit such as pixel circuit 105, which includes three sub-pixels: red (R) sub-pixel 105-1, green (G) sub-pixel 105-2, and blue (B) sub-pixel 105-3. Each sub-pixel includes a micro-LED driver and a micro-LED that emits light when the micro-LED driver applies power to the micro-LED. Thus, R sub-pixel 105-1 includes R micro-LED driver 110-1, which applies power to R micro-LED 115-1 and causes R micro-LED 115-1 to emit light. Similarly, G sub-pixel 105-2 includes G micro-LED driver 110-2, which applies power to G micro-LED 115-2, and B sub-pixel 105-3 includes B micro-LED driver 110-3, which applies power to B micro-LED 115-3. In some embodiments, the display 100 is used in a flat panel display, mobile device display, head-mounted display, or other display format. In some embodiments, the display 100 includes thousands of pixel circuits.

In some embodiments, the micro-LED drivers 110-1, 110-2, 110-3 improve the response time of the micro-LEDs 115-1, 115-2, 115-3 by driving the corresponding micro-LEDs at a baseline power when the micro-LEDs are in a nominally-off state, wherein the baseline power is greater than a zero-power level, or by applying a power pulse having a shaped current density to the micro-LEDs. This can be better understood with reference to FIG. 2.

FIG. 2 is a block diagram illustrating a micro-LED display element 200 corresponding to one of the sub-pixels 105-1, 105-2, 105-3 of FIG. 1, including a micro-LED driver 205 corresponding to one of the micro-LED drivers 110-1, 110-2, 110-3 of FIG. 1 that supplies a baseline power 230 and a driving power pulse 235 to a micro-LED 210 corresponding to one of the micro-LEDs 115-1, 115-2, 115-3 of FIG. 1, in accordance with some embodiments. The micro-LED 210 has a lateral dimension smaller than 20 μm and includes n-contact 212 and p-contact 224 layers, an n-type layer 214 and a p-type layer 222, and an active (light-emitting) region 225 including a core region 216, a quantum well 218, and an electron blocking layer 220.
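Before describing the driver's operation with respect to FIG. 2, the two-phase shaped pulse introduced above can be sketched as a simple current-density profile; the phase durations and current densities below are illustrative assumptions, not values from the disclosure.

# Illustrative two-phase "shaped pulse" (assumed numbers): a short
# high-current-density phase charges the micro-LED capacitance quickly,
# then a lower density sustains the intended light output.

def shaped_pulse(t_ns: float,
                 boost_j: float = 50.0,    # A/cm^2, assumed boost phase
                 hold_j: float = 10.0,     # A/cm^2, assumed hold phase
                 boost_ns: float = 5.0,    # assumed boost duration
                 pulse_ns: float = 200.0) -> float:
    # Current density at time t_ns after the pulse starts.
    if t_ns < 0 or t_ns >= pulse_ns:
        return 0.0
    return boost_j if t_ns < boost_ns else hold_j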
The micro-LED driver 205 applies the driving power pulse 235 to the micro-LED 210 to cause the micro-LED 210 to emit light having an intensity commensurate with the amplitude of the driving power pulse 235. Part of the current of the driving power pulse 235 is consumed by charging the active region 225, which is characterized by a capacitance per area. The remaining current of the driving power pulse 235 is injected as free carriers in the core region 216, where carriers can be captured by the light emitting layers of the active region 225. Once in the light emitting layers, the carriers are consumed by recombinations. However, the response of the micro-LED 210 is limited by the time it takes to charge the capacitance of the micro-LED 210, starting from an off-state in which no voltage or current is applied, which causes a delay in light emission. In addition, the recombination lifetime in the micro-LED 210 can be slow, especially at turn-on, limiting the rise time of the light output.

The micro-LED 210 is characterized by a turn-on time τon, which is defined as the time the micro-LED 210 takes from the onset of the driving power pulse 235 until the micro-LED 210 reaches 90% of the light output plateau level for the driving power pulse 235. The micro-LED 210 is further characterized by a turn-off time τoff, which is defined as the time the micro-LED 210 takes after the end of the driving power pulse 235 (i.e., the start of the falling edge of the driving power pulse 235) to reach 10% of the light output plateau level of the micro-LED 210. Some embodiments are characterized by an asymmetric time response, wherein the turn-off time and the turn-on time are substantially different. In some embodiments, a micro-LED is driven by a power pulse and is characterized by turn-on and turn-off times, and the ratio tau_on/tau_off is higher than 1.5 (or 2, 5, 10) or is lower than 1/1.5 (or 1/2, 1/5, 1/10). Such asymmetric behavior may distinguish the time-response of some embodiments from that of conventional optoelectronic devices. Some embodiments minimize the asymmetry of the time response by matching the rise and fall times with approaches disclosed herein. Other embodiments use a substantially asymmetric response.

In addition, by shaping the current density of the driving power pulse 235, the micro-LED driver 205 further shortens the response time of the micro-LED 210 and controls the turn-off time τoff. By feeding a baseline power 230 to the micro-LED 210, the micro-LED driver 205 reduces the turn-on time τon. The baseline power 230 is a current and/or voltage that is higher than zero that is applied when the micro-LED 210 is in a nominally-off state, in which the micro-LED 210 is not expected to emit light. In some embodiments, the amplitude of the baseline power 230 is selected such that the amount of light emitted by the micro-LED 210 in the nominally-off (baseline) state is negligible compared to the amount of light emitted by the micro-LED 210 in an on (light-emitting) state. For example, in some embodiments the amount of light emitted in the nominally-off state is 10% or less of the amount of light emitted in the light-emitting state. In other embodiments, the amount of light emitted in the nominally-off state is 1% or less of the amount of light emitted in the light-emitting state. In still other embodiments, the amount of light emitted in the nominally-off state is 0.1% or less of the amount of light emitted in the light-emitting state. The amount of light emitted in the light emitting state may vary significantly.
For example, light emission from a micro-LED pixel may range from a maximum of 1000 cd/m2 to a minimum of 0.1 cd/m2. In some embodiments, the amount of light emitted in the baseline state is at most approximately 10% of the minimum amount emitted (e.g., if 0.1 cd/m2 is the minimum light emitted in the light emitting state, in the baseline state the micro-LED is limited to emitting 0.01 cd/m2 or less).

In some embodiments, the structure of the micro-LED 210 is configured to improve the time response, including the time response associated with the capacitance and/or with the recombination time. In some embodiments, the LED is configured to achieve a desired capacitance per area, such as by maintaining the capacitance per area below a predetermined value. In some embodiments, the core region 216 of the micro-LED 210 has a thickness d (also referred to as the depletion thickness d), and the space-charge capacitance per unit area is approximately given by Csc = eps/d, wherein eps is the dielectric constant of the material. For example, for GaN, eps is approximately 10*eps0 at zero bias, wherein eps0 is the vacuum permittivity; the value under forward bias increases, e.g., by approximately a factor of two, as C = Csc*(1 − V/Voc)^(−1/2), wherein Voc is the open-circuit voltage. In some embodiments, the value of d is approximately equal to the thickness of the undoped region between the p and n regions (i.e., d ~ tc). By selecting the structure of the active region (e.g., quantum wells (QWs), barriers, spacing layers), embodiments facilitate tc to be selected separately from the active region thickness tw. This contrasts these embodiments with homojunction LEDs, in which recombinations occur across a substantial portion of the depletion thickness. A large value of tc facilitates a lower capacitance, whereas the value of tw may be selected to achieve a suitable efficiency.

In some embodiments, the thickness of the depletion region is at least 2 times (or 5, 10, 20 times) the thickness of the light-emitting layers. For instance, some embodiments include only a few, thin QWs and thin barriers, but have a sufficient value of d to reduce Csc. To this effect, some embodiments employ dummy QWs (i.e., QWs of lower composition than the light-emitting QWs, which promote carrier transport but do not emit light, thus ensuring that carriers reach the light-emitting QWs) to increase d without adversely impacting the injection efficiency. Dummy QWs may be placed on either the p-side, the n-side, or both sides of the light-emitting QWs, or be interspersed with them. Some embodiments configure the epitaxial layer (not shown) to achieve a desired capacitance, independent of the thickness of the light-emitting QWs and barriers. Some embodiments employ other active region designs, including double heterostructures, layers of varying composition (stepped or graded), and/or alloys of AlGaN, InGaN, AlInN, AlInGaN.

In some embodiments, the value of d is selected to reduce the value of Csc. For instance, Csc may be below 1E-7 F·cm-2 (or 5E-8, 2E-8, 1E-8, 5E-9, 1E-9 F·cm-2). In some embodiments, the value of d and the LED's area A are selected to reduce the value of the net LED capacitance Csc*A. For instance, the net LED capacitance is less than 1E-13 F (or 5E-14, 1E-14, 5E-15, 1E-15, 5E-16, 1E-16 F). In some embodiments, micro-LED pixels or subpixels have a lateral dimension of less than 10 um (or 5 um, 3 um, 2 um, 1 um).
In some embodiments, the rise time associated with the capacitance charging is tau_charge = V*Csc/J, wherein V is the typical operating voltage (about 2.5-3 V for common visible LEDs) and J is the current density. Accordingly, in some embodiments, the LED configuration and the choice of the operation current density jointly yield a sufficiently fast rise time. In some embodiments, the ratio Csc/J is less than 1E-8 F/A (or 5E-9, 1E-9, 5E-10, 1E-10 F/A). In some embodiments, tau_charge is less than 100 ns (or 50 ns, 10 ns, 5 ns, 1 ns). In some embodiments, tau_charge is shorter than the time duration T of the pulse (or shorter than 0.5*T or 0.2*T or 0.1*T).

In some embodiments, doping levels in the p- and n-doped regions 214, 222 of the micro-LED 210 are selected to control the depletion width. In some embodiments, an abrupt transition from undoped to doped layers is formed. Some embodiments have an n-doped layer 214 (with a doping level of at least 1E18 cm-3, or 1E19 cm-3), followed by a nominally-undoped region (doping level less than 1E17 cm-3) containing light-emitting layers, followed by a p-doped region 222 (doping level of at least 1E18 cm-3, or 1E19 cm-3). Such doping levels may be combined with other LED characteristics (such as the width of an undoped region) to yield a desired capacitance value.

In some embodiments, the micro-LED 210 is configured to achieve a predetermined dynamic resistance rho = dV/dJ to facilitate avoidance of an interaction of the dynamic resistance with parasitic capacitances, which may lead to further delays in time response. In some embodiments, the dynamic resistance per area is maintained below a desired value in the nominally-off state by, for example, applying a baseline low current to the micro-LED 210 in its nominally-off state. In some embodiments, the dynamic resistance in the nominally-off state is less than 100 ohm·cm2 (or 10, 1, 0.1 ohm·cm2).

In some cases, there may be a trade-off between material quality and response time. For example, a defective LED has a lower internal quantum efficiency (IQE), which results in inefficient operation, but has a faster non-radiative recombination time due to SRH recombination, or other kinds of defect-related recombinations (e.g., defect-induced leakage or tunneling), which improves the modulation speed. In some embodiments, the defect level is selected to facilitate operation at a given speed. For instance, a desired modulation speed is selected, and the defect level in the LED is controlled to facilitate such a speed. Some embodiments are designed to achieve a minimum IQE (or other related efficiency metric such as external quantum efficiency (EQE) or wall-plug efficiency (WPE)), such that the on-state is characterized by an IQE of at least 1% (or 5%, or 10%) and/or the baseline state is characterized by an IQE of less than 0.1% (or 0.01%). Accordingly, embodiments are configured with a sufficiently-low defect density to achieve the minimum IQE. This leads to a minimum rise/fall time for the active region. Accordingly, embodiments are driven with pulses which are longer than this minimum rise/fall time. Specifically, in some embodiments, the micro-LED 210 has a non-radiative lifetime tau_low at low current density (such as the Shockley-Read-Hall (SRH) lifetime), and is driven by pulses whose length is at least half of tau_low (or one, two, five, ten times tau_low). In some embodiments, the turn-on time t_on is less than 500 ns (or 200 ns, 100 ns, 50 ns, 20 ns, 10 ns).
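As a back-of-envelope check of these relations, the following sketch evaluates Csc = eps/d and tau_charge = V*Csc/J using the zero-bias GaN permittivity quoted above; the depletion thickness, device area, and operating point are assumed example values rather than figures from the disclosure.

# Back-of-envelope check of Csc = eps/d and tau_charge = V*Csc/J, using
# the zero-bias GaN permittivity quoted in the text (eps ~ 10*eps0); the
# depletion thickness, area, voltage, and current density are assumed.

EPS0 = 8.854e-14           # vacuum permittivity, F/cm
eps = 10 * EPS0            # GaN at zero bias (per the text)

d = 100e-7                 # assumed depletion thickness: 100 nm, in cm
csc = eps / d              # space-charge capacitance per area, F/cm^2

area = (2e-4) ** 2         # assumed 2 um x 2 um micro-LED, in cm^2
net_c = csc * area         # net LED capacitance, F

v, j = 2.7, 10.0           # assumed operating voltage (V), density (A/cm^2)
tau_charge = v * csc / j   # capacitance charging time, s

print(f"Csc = {csc:.2e} F/cm^2, net C = {net_c:.2e} F, "
      f"tau_charge = {tau_charge * 1e9:.1f} ns")

With these assumed numbers the sketch yields Csc of roughly 9E-8 F/cm^2, a net capacitance of a few femtofarads, and a charging time of roughly 24 ns, consistent with the ranges stated above.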
In some embodiments, the SRH lifetime t_SRH (characterizing the active region) is more than 100 ns and t_on is less than 50 ns. In some embodiments, t_on is less than t_SRH divided by two (or three, five, ten). In some embodiments, the charge time t_charge is more than 10 ns and t_on is less than 10 ns. In some embodiments, t_on is less than t_charge divided by two (or three, five, ten). In some embodiments, t_on is less than t_charge + t_SRH divided by two (or three, five, ten). In some embodiments, the SRH lifetime is tied to a sufficient IQE value, as disclosed herein. In some embodiments, the IQE is at least 10% and t_on is less than 500 ns (or 200 ns, 100 ns, 50 ns, 20 ns, 10 ns). In some embodiments, the electrical pulse driving the LED in the on-state has a duration less than 5 us (or 2 us, 1 us, 500 ns, 200 ns, 100 ns, 50 ns, 10 ns). In some embodiments, light emission in the on-state occurs for a duration which is at least 90% (or 80%, 50%, 20%, 10%) of the duration of the electrical pulse.

FIG. 3 illustrates a timing scheme in which the micro-LED driver 205 pre-charges a micro-LED 210 with a baseline power 230 that is represented as a baseline voltage VBASELINE 305, in accordance with some embodiments. The driving power pulse 235 of FIG. 2 is represented as a pulse width modulation (PWM) voltage VPWM 310. FIG. 3 illustrates time traces corresponding to examples of the baseline voltage VBASELINE 305, the pulse width modulation (PWM) voltage VPWM 310, which is the signal that drives light output from the micro-LED 210, and a discharge voltage VDISCHARGE 315. The time traces are offset vertically from each other for clarity.

At time T1 320, the micro-LED driver 205 applies the baseline voltage VBASELINE 305 to the micro-LED 210 for a length of time tcharge 340. At time T2 325, the micro-LED driver 205 discontinues the baseline voltage VBASELINE 305 and applies the PWM voltage VPWM 310. The micro-LED driver 205 begins charging the micro-LED 210 with application of the baseline voltage VBASELINE 305, reducing the capacitance charging time after the micro-LED driver 205 applies the PWM voltage VPWM 310, and thus reducing the time between application of the PWM voltage VPWM 310 and the onset of light emission from the micro-LED 210. At time T3 330, the micro-LED driver 205 discontinues application of the PWM voltage VPWM 310 and applies the discharge voltage VDISCHARGE 315 for a length of time tdischarge 345 until time T4 335 to remove charge from the micro-LED 210.

FIG. 4 is a diagram illustrating a comparison of normalized light output from a micro-LED without pre-charging with a baseline voltage and with pre-charging with a baseline voltage, in accordance with some embodiments. The curve 410 represents normalized light output from a micro-LED 210 that has been pre-charged with a baseline current density corresponding to a baseline voltage prior to application at time 0 ns of a driving PWM current density corresponding to a driving PWM voltage. The curve 420 represents normalized light output from a micro-LED 210 that has not been pre-charged with a baseline current density, and which has been driven by the driving PWM current density corresponding to the driving PWM voltage starting at time 0 ns. As illustrated, the onset of light emission is reduced from approximately 32 ns in curve 420 to approximately 3 ns in curve 410 by applying the baseline current density and baseline voltage.
In the illustrated example, the baseline current density is 0.01 A/cm2, corresponding to a baseline voltage of approximately 2.5 V, and the driving PWM current density is 10 A/cm2, corresponding to a driving PWM voltage of approximately 2.7 V. In the baseline (nominally-off) state, the intensity of emitted light is negligible (e.g., less than 10% of the intensity of emitted light in the on-state or, in some cases, about 3E-5 times the light intensity in the on-state, calculated as the ratio of currents times the ratio of IQE) and the consumed power is very small (about 1E-3 times the power in the on-state, calculated as the ratio of currents). In some embodiments, the IQE in the baseline state is less than the IQE in the on-state divided by 10 (or 20, 50, 100). In some embodiments, the micro-LED driver 205 achieves the nominally-off state by controlling the voltage applied to the micro-LED 210, as controlling voltage may be easier than controlling a very small current, and the micro-LED driver 205 achieves the on-state by controlling the current feeding the micro-LED 210. The micro-LED driver 205 controls the baseline voltage to the micro-LED 210 using a transistor, such as a field effect transistor, or a resistor in some embodiments. In some embodiments, the micro-LED driver 205 maintains the baseline voltage in the nominally-off state at a voltage that is higher than 2 V and/or no more than 1 V below the driving PWM operating voltage.

FIG. 5 is a diagram of a micro-LED driver 500 with a first path 505 for supplying power to a micro-LED (referred to as first power path 505) to apply a driving pulse width modulation to illuminate a micro-LED and a second power path 510 to apply a baseline current or voltage to the micro-LED in accordance with some embodiments. In some embodiments, the driver is a CMOS, a TFT backbone, or other architecture. The first power path 505 feeds the micro-LED with a column voltage VDD for a display and a digital gate control voltage (row select) VG to set the voltage on the capacitor 515. The capacitor 515 stores an analog voltage that turns on the transistor 520, providing a current ION, which in some embodiments has a time-dependent waveform, that flows to the micro-LED, causing substantial light emission when the micro-LED is in an on state, during which no power flows through the second power path 510. In nominally-off mode, no power flows through the first power path 505, but a baseline power consisting of a baseline current Ibaseline or a baseline voltage Vbaseline is applied to the micro-LED through the second power path 510. In some embodiments, the micro-LED driver 500 does not include the second power path 510, and instead drives the micro-LED (pixel) in a nominally-off state at a low baseline power (voltage or current) at all times.

To conserve power, if only a specific area of the display (i.e., a subset of micro-LEDs in an array) is used in a given operation mode, in some embodiments the micro-LED driver 500 applies the baseline power, at all times, to only the subset of micro-LEDs corresponding to the specific area of the display that is being used. For example, in some operation modes, a banner on the top of the display is used to show icons, while the rest of the display is off. For such an operation mode, the micro-LED driver 500 applies the baseline power at all times to only the subset of micro-LEDs at the top of the display that form the banner.
In some embodiments, the micro-LED driver 500 drives the nominally-off micro-LEDs (pixels) in one display frame with the baseline power only if the nominally-off pixels will be turned on in the following display frame. Thus, the display system considers the following frame when selecting the driving conditions for the current frame: if pixels are nominally-off (i.e., dark) in the current frame but will be in the on-state in the following frame, the micro-LED driver 500 applies a baseline power in the current frame to improve the response time of the following frame. Consideration of the following frame may increase latency because the next frame information is needed before the current frame can be displayed. Therefore, in some embodiments the display system applies a high refresh rate (such as 90 Hz or 120 Hz or more) to reduce latency.

FIG. 6 is a diagram of a micro-LED driver 600 with a second power path 610 including a resistor 615 to convert a bias voltage Vbias to a baseline current Ibaseline to feed a micro-LED in accordance with some embodiments. Similar to FIG. 5, a first power path 605 feeds the micro-LED with a voltage VDD when the micro-LED is in an on state, during which little power (e.g., if Vbias is not turned to zero when the micro-LED is in an on state) or no power flows through the second power path 610, and a current ION, which in some embodiments has a time-dependent waveform, flows to the micro-LED, causing substantial light emission. In nominally-off mode, no power flows through the first power path 605, but the resistor converts the bias voltage Vbias to the baseline current Ibaseline, which is applied to the micro-LED through the second power path 610. In some embodiments, Vbias equals VDD, and in other embodiments Vbias differs from VDD.

FIG. 7 is a diagram of a micro-LED driver 700 with a second power path 710 including a transistor 715 to apply a baseline current Ibaseline to a micro-LED in accordance with some embodiments. In the illustrated example, the transistor 715 is a parallel drive transistor. As with FIGS. 5 and 6, a first power path 705 feeds the micro-LED with a voltage VDD when the micro-LED is in an on state, during which no power flows through the second power path 710, and a current ION, which in some embodiments has a time-dependent waveform, flows to the micro-LED, causing substantial light emission. In nominally-off mode, no power flows through the first power path 705, and the baseline current Ibaseline is generated by the transistor 715, with the current value set by Vbias. In some embodiments, VDD2 equals VDD, and in other embodiments VDD2 differs from VDD. Vbias is a direct current (DC) voltage in some embodiments and is a time-dependent voltage in other embodiments. In some embodiments, the transistor 715 is also used as a discharge transistor to remove charge from the micro-LED once the light-generating current ION has been turned off. In other embodiments, the transistor 715 in the second power path 710 is used only as a charge transistor and the micro-LED driver 700 includes a third power path (not shown) that includes a separate transistor (not shown) which is used as a discharge transistor.

The driver architectures illustrated in FIGS. 5-7 are examples of architectures that can be used to supply a baseline power to the micro-LED 210. Persons of skill will appreciate that other architectures could be used, such as a second power path including a resistor, as shown in FIG. 6, combined with a third power path including a discharge transistor.
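The frame-lookahead rule at the start of this passage (pre-charge a pixel now only if it lights up next frame) amounts to a simple mask computed from two frames. A minimal sketch, assuming boolean on/off frames and ignoring the driver hardware:

```python
# Frame-lookahead baseline scheduling: a pixel that is dark in the current
# frame but lit in the following frame receives baseline power now, so it
# turns on quickly next frame. The data layout is an assumption.

from typing import List

def baseline_mask(current: List[List[bool]],
                  following: List[List[bool]]) -> List[List[bool]]:
    """True where a pixel is off now but will be on in the following frame."""
    return [
        [(not now) and nxt for now, nxt in zip(row_now, row_next)]
        for row_now, row_next in zip(current, following)
    ]

# Example: a 1x4 strip; pixel 2 lights up next frame, so it gets baseline
# power in the current frame.
cur = [[False, True, False, False]]
nxt = [[False, True, True, False]]
print(baseline_mask(cur, nxt))  # [[False, False, True, False]]
```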
In some embodiments, the driver architectures discussed herein pertain to pixels of a display formed from an array of micro-LEDs. Each subpixel corresponds to a micro-LED and micro-LED driver. The baseline current or voltage varies per panel, per region of pixels, per pixel, or per subpixel in some embodiments. For example, the time-response of LEDs of different colors (such as R, G, B) may be different because (i) the junction capacitance depends on the details of the epi structure, which may differ between colors; and (ii) the recombination lifetime depends on the color (at least because of different defect levels and a different radiative lifetime). The capacitance charging time of one color may be at least twice the capacitance charging time of another color. Similarly, the low-current recombination lifetime of one color may be at least twice the low-current recombination lifetime of another color. Thus, different colors can have different behaviors in response to the same pulse shape. Accordingly, in some embodiments, the baseline current or voltage is different for subpixels of different colors (such as R, G, B). In some embodiments, the baseline current or voltage is below the photon voltage of each subpixel (with the photon voltage being defined as equal to the photon energy measured in electron-volts), or below some other threshold voltage. In some embodiments, pulses of different shapes are used for different colors to improve the time-response of each color individually. In some embodiments, the display has at least two colors, and the display is configured such that the turn-on times for the two colors are within a factor of two of each other. The micro-LED drivers may achieve different baseline powers by using different Vbias values, or a shared Vbias value converted by different electronic components (such as resistors and transistors).

In some embodiments, the micro-LED drivers apply the baseline power to facilitate Mura compensation, reducing non-uniformities in a display panel. In some embodiments, each pixel or group of pixels has a different baseline condition, leading to a uniform light output under operation. For example, in some embodiments, the second power path includes a resistive device that facilitates current leakage. For example, an array can include one or more micro-LEDs of better material quality that start emitting light at lower current than other micro-LEDs of inferior material quality, resulting in uneven brightness at low current. By adding a small leakage path to all pixels, the micro-LEDs of the array are prevented from turning on at low current. The resistance is selected to cause a leakage current which is low compared to the micro-LED's nominally-on current, which facilitates evening out the brightness and/or the response time of a display. In some embodiments, a display has a plurality of micro-LEDs whose low-current non-radiative lifetimes are substantially different (for instance, because the defect level varies between micro-LEDs). Micro-LEDs with more non-radiative recombinations may emit less light but respond more rapidly at low current, leading to inhomogeneity. Accordingly, some embodiments comprise a leakage path which dominates the response time and/or the brightness at low current, thus reducing inhomogeneity.

In some embodiments, the micro-LED drivers do not apply the baseline power (voltage or current) at all times. Instead, the micro-LED drivers only apply the baseline power for a suitable time before the pixels are to be turned on.
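Since the passage defines the photon voltage as the photon energy in electron-volts, a per-color upper bound on the baseline voltage follows directly from the emission wavelength. A sketch using the standard wavelength-to-energy conversion; the wavelengths and the 0.1 V margin are illustrative assumptions, not values from the source.

```python
# Per-color baseline bound: keep the baseline voltage below the photon voltage
# (photon energy in eV). Wavelengths and margin are assumed for illustration.

PLANCK_EV_NM = 1239.84  # h*c expressed in eV*nm

def photon_voltage(wavelength_nm: float) -> float:
    """Photon energy in eV, numerically equal to the photon voltage in volts."""
    return PLANCK_EV_NM / wavelength_nm

MARGIN_V = 0.1  # assumed safety margin below the photon voltage

for color, wl in (("R", 630.0), ("G", 530.0), ("B", 460.0)):
    v_ph = photon_voltage(wl)
    print(f"{color}: photon voltage {v_ph:.2f} V -> baseline <= {v_ph - MARGIN_V:.2f} V")
```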
For instance, if it takes a time τbaseline to drive a micro-LED from a fully-off state to the baseline state, the micro-LED driver applies the baseline power for a time of at least τbaseline, so that when the pixel (micro-LED) needs to be turned on, the micro-LED is in the baseline state. By applying the baseline power only for the time τbaseline, the micro-LED driver reduces power consumption associated with the baseline condition.

In addition to or in place of pre-charging the micro-LED 210 with a baseline power 230 prior to applying the driving (PWM) power pulse 235, the micro-LED driver 205 may reduce the response time (i.e., the time until the onset of light emission) of the micro-LED 210 by applying a driving power pulse 235 that is shaped to have varying intensity, or current density. In some embodiments, the micro-LED driver 205 applies a driving pulse that is characterized by a complex waveform (i.e., more complex than a simple square shape). For instance, a current or voltage pulse may have a peak or ripples.

FIG. 8 is a diagram 800 illustrating different examples of current pulses to drive a micro-LED in a light-emitting state in accordance with some embodiments. The different current pulses are overlaid to illustrate the differences between them. Each pulse has a 100 ns total duration. Current pulse 805 has a simple square profile with a current density of J=10 A/cm2. Current pulse 810 comprises a first phase having a square profile with a current density of J=50 A/cm2 and a duration of around 10 ns, and a second phase with a square profile with a current density of J=10 A/cm2 and a duration of around 90 ns. As such, the total duration of the current pulse 810 is around 100 ns. Current pulse 815 has a first phase having a square profile with a current density of J=60 A/cm2 and a duration of around 10 ns, and a second phase with a square profile with a current density of J=10 A/cm2 and a duration of around 90 ns. As such, the total duration of the current pulse 815 is around 100 ns. In each of the current pulses 810, 815, the first phase immediately precedes the second phase.

FIG. 9 is a diagram 900 illustrating normalized light output from a micro-LED driven by the different current pulses 805, 810, 815 as shown in FIG. 8 in accordance with some embodiments. The normalized light outputs resulting from each of the different current pulses are overlaid to illustrate the differences between them. For the current pulse 805, the normalized light output is illustrated by curve 905. For the current pulse 810, the normalized light output is illustrated by curve 910, and for the current pulse 815, the normalized light output is illustrated by curve 915. As illustrated, a higher current peak (current density), as provided in the first phase of pulses 810 and 815, leads to a faster charging of the micro-LED capacitance and a faster buildup of carriers in the active region, as illustrated by the curves 910 and 915, respectively. Depending on the length and magnitude of the current peak (i.e., the first phase), the normalized light output may display a peak, because the carrier density in the micro-LED temporarily overshoots its plateau value, as shown for the curve 915 corresponding to the third pulse 815. In some embodiments, the micro-LED driver configures the pulse shape to avoid or limit such overshoot peaks. Avoiding or limiting such peaks may reduce the likelihood of damage to the driver and/or micro-LED.
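The two-phase pulses of FIG. 8 are easy to express as sampled waveforms. A minimal sketch, assuming a 1 ns sample step and a list-of-tuples representation (both assumptions):

```python
# Two-phase drive pulse per FIG. 8: a short high-current first phase for fast
# capacitance charging, then a lower plateau for steady light output.

def two_phase_pulse(j_peak, t_peak_ns, j_plateau, t_plateau_ns, dt_ns=1.0):
    """Return (time_ns, current_density_A_per_cm2) samples for a peaked pulse."""
    samples = []
    t = 0.0
    while t < t_peak_ns:                  # first phase: fast charging
        samples.append((t, j_peak))
        t += dt_ns
    while t < t_peak_ns + t_plateau_ns:   # second phase: steady emission
        samples.append((t, j_plateau))
        t += dt_ns
    return samples

# Pulse 810 from the text: 50 A/cm^2 for ~10 ns, then 10 A/cm^2 for ~90 ns.
pulse = two_phase_pulse(50.0, 10.0, 10.0, 90.0)
print(len(pulse), "samples over", pulse[-1][0] + 1.0, "ns")  # 100 samples, 100 ns
```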
In some embodiments, during a pulse, the current pulse has a peak (i.e., the first phase) and a plateau (i.e., the second phase) and the micro-LED light output has a corresponding peak and plateau, such that the normalized light output peak is less than 2× (or 1.5×, 1.1×) the value of the light output plateau. In some embodiments, the micro-LED driver configures pulses having complex shapes to improve the LED response time. In some embodiments, the current pulse has a duration that is shorter than 1 microsecond and the light-emitting state extends for at least 50% of the current pulse duration. The waveforms described above are provided as examples only.

In some embodiments, the shapes of the pulses feeding different colors are different. For example, in some embodiments a blue pixel (micro-LED) has a first waveform having a first peak current and duration, a green pixel (micro-LED) has a second waveform having a second peak current and duration, and a red pixel (micro-LED) has a third waveform having a third peak current and duration, with the peak currents and durations selected to reduce the turn-on delay to similar values. In some embodiments, a first micro-LED having a first color has a charging time tau_charge_1 and is driven by a first pulse having a first peak value and a first characteristic duration; a second micro-LED having a second color has a charging time tau_charge_2 and is driven by a second pulse having a second peak intensity and a second characteristic duration; tau_charge_2 is at least 2 (or 5, 10) times tau_charge_1, and the product (peak intensity*duration) is higher for the second micro-LED, such that the second micro-LED's time delay before light emission is less than 2 times (or 1.5, 1.2, 3, 5, 10) that of the first micro-LED. The micro-LED driver applies pulse shaping to one or more of the power pulses causing light output from the LED, the pre-charging baseline pulse, and the discharging pulse.

In some embodiments, a controller (not shown) for the micro-LED driver uses a non-linear conversion between the desired LED brightness and the pulse shape (including length and/or intensity and/or other aspects of the pulse shape) to correct for the non-linearity due to time response. For example, in some embodiments the non-linear conversion is a lookup table that prescribes a given pulse width to achieve a given amount of emitted light. An example is given in Table 1. This table applies to the micro-LED of FIG. 2 and assumes that simple square pulses with a current density of 10 A/cm2 are applied as the driving power pulse 235. The bit depth is 8, corresponding to up to 256 gray levels. The shortest pulse would last 100 ns in the absence of nonlinear correction.

TABLE 1

Target gray level | Nominal pulse length [ns] | Uncorrected light amount | Extra pulse length [ns] | Corrected light amount
2^1 | 100 | 0.65 | 35 | 1
2^2 | 200 | 1.65 | 35 | 2
2^3 | 400 | 3.65 | 35 | 4
2^4 | 800 | 7.65 | 35 | 8
2^5 | 1,600 | 15.65 | 35 | 16
2^6 | 3,200 | 31.65 | 35 | 32
2^7 | 6,400 | 63.65 | 35 | 64
2^8 | 12,800 | 127.65 | 35 | 128

In this example, an extra pulse length (or time offset) of 35 ns is applied to all gray levels. This extra pulse length corrects the total amount of emitted light and makes it proportional to the target gray level. In the absence of this non-linear correction, the gray levels could be substantially different from their desired values, especially for low gray levels.
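Table 1's correction reduces to adding a constant 35 ns to every pulse so that the emission time remaining after the turn-on delay is proportional to the gray level. The sketch below reproduces the table's columns; the 100 ns normalization of "light amount" is read off the table, and the rest is straightforward arithmetic.

```python
# Non-linear pulse-length correction per Table 1: light emission starts ~35 ns
# into the pulse, so every pulse is lengthened by 35 ns to make the emitted
# light proportional to the target gray level.

TURN_ON_DELAY_NS = 35.0   # dead time before light emission
LIGHT_UNIT_NS = 100.0     # 1.0 "light amount" = 100 ns of emission

def corrected_pulse_ns(nominal_ns: float) -> float:
    return nominal_ns + TURN_ON_DELAY_NS

def light_amount(pulse_ns: float) -> float:
    return max(0.0, pulse_ns - TURN_ON_DELAY_NS) / LIGHT_UNIT_NS

for exponent in range(1, 9):
    nominal = 100.0 * 2 ** (exponent - 1)        # 100 ns .. 12,800 ns
    print(f"2^{exponent}: uncorrected {light_amount(nominal):>7.2f}, "
          f"corrected {light_amount(corrected_pulse_ns(nominal)):>6.2f}")
```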
That is, by extending the length of all pulses by 35 ns (in the example described above), it is possible to compensate for the time taken for light emission to begin (or reach 90% of the full value). Applying an arbitrary time offset to pulses may be difficult if the time offset is not proportional to the display system's base clock time. Accordingly, some embodiments are configured such that the necessary time offset is close to the clock time. For instance, in the example above, a clock time of 33.333 ns can produce time offsets that are very close to the values of Table 1 (e.g., the shortest pulse lasts 4 clock cycles instead of 3). In some embodiments, other hardware such as a delay line is used to add a delay whose length is not dictated by the clock period.

In this example, the extra pulse length is constant for all gray levels and a full lookup table is superfluous. However, other schemes may require a correction that depends on the gray level. This may occur, for instance, if the pulse driving current depends on the gray level, or if hysteresis effects (i.e., the state of the pixel before the pulse of interest) are taken into account. The lookup table may be more or less granular, and present non-linear correction values for more or fewer gray levels. For gray levels in between the levels of the lookup table, the extra pulse length may be interpolated. The values of such a lookup table may vary across elements of the display (for instance, different regions, different pixels, different subpixels, different LED colors). Additional bits, for instance 12 bits (8 for display and 4 for the correction), may be used to set the values for individual elements.

In some embodiments, the controller applies the non-linear correction on its own, or combined with other teachings of this disclosure. For instance, the micro-LED and/or the micro-LED driver may be configured to allow an approximate minimum desired pulse length (e.g., on the order of 10 ns or 50 ns or 100 ns or 500 ns or 1 us), and non-linear correction may be applied to further control the light levels and correct residual time-response effects. In the example above, the micro-LED and micro-LED driver are configured to allow shortest pulses of approximately 100 ns, and non-linear correction is applied to precisely control the gray levels. The current density of the pulse may also be configured via a lookup table. Table 1 assumes that the desired light amount is strictly proportional to the bit depth. However, gamma correction may further be applied. Non-linear correction may be configured to achieve the desired gray level after gamma correction. The lookup table may be populated with values that are determined by applying a calibration process to the display, for example measuring the light values for different durations to determine how to modify pulse durations or current density of the pulses.

Embodiments comprise methods of configuring a driving scheme, as disclosed herein, to achieve a desired amount of light. The method may include the following steps: determine a desired output (for instance a nominal brightness level corresponding to a bit depth); operate a display with a suited driving scheme (e.g., pulse shape and duration) to achieve an actual output which is within a predetermined range of the desired output (e.g., within +/-10% or 20% or 5% or 1%).

In some embodiments, certain aspects of the techniques described above may be implemented by one or more processors of a processing system executing software.
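The clock-quantization point above can be checked numerically: with a 33.333 ns base clock, the shortest corrected pulse (100 + 35 = 135 ns) rounds to 4 clock cycles, versus 3 cycles for the uncorrected 100 ns pulse. A sketch of that rounding:

```python
# Quantizing pulse lengths to the display system's base clock, using the
# 33.333 ns clock from the example above.

CLOCK_NS = 33.333

def pulse_in_cycles(target_ns: float) -> int:
    """Round a desired pulse length to the nearest whole number of clock cycles."""
    return round(target_ns / CLOCK_NS)

for target in (100.0, 135.0):
    cycles = pulse_in_cycles(target)
    print(f"{target:.0f} ns -> {cycles} cycles = {cycles * CLOCK_NS:.1f} ns")
# 100 ns -> 3 cycles = 100.0 ns; 135 ns -> 4 cycles = 133.3 ns
```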
The software comprises one or more sets of executable instructions stored or otherwise tangibly embodied on a non-transitory computer readable storage medium. The software can include the instructions and certain data that, when executed by the one or more processors, manipulate the one or more processors to perform one or more aspects of the techniques described above. The non-transitory computer readable storage medium can include, for example, a magnetic or optical disk storage device, solid state storage devices such as Flash memory, a cache, random access memory (RAM), or other non-volatile memory device or devices, and the like. The executable instructions stored on the non-transitory computer readable storage medium may be in source code, assembly language code, object code, or another instruction format that is interpreted or otherwise executable by one or more processors.

Note that not all of the activities or elements described above in the general description are required, that a portion of a specific activity or device may not be required, and that one or more further activities may be performed, or elements included, in addition to those described. Still further, the order in which activities are listed is not necessarily the order in which they are performed. Also, the concepts have been described with reference to specific embodiments. However, one of ordinary skill in the art appreciates that various modifications and changes can be made without departing from the scope of the present disclosure as set forth in the claims below. Accordingly, the specification and figures are to be regarded in an illustrative rather than a restrictive sense, and all such modifications are intended to be included within the scope of the present disclosure.

Benefits, other advantages, and solutions to problems have been described above with regard to specific embodiments. However, the benefits, advantages, solutions to problems, and any feature(s) that may cause any benefit, advantage, or solution to occur or become more pronounced are not to be construed as a critical, required, or essential feature of any or all the claims. Moreover, the particular embodiments disclosed above are illustrative only, as the disclosed subject matter may be modified and practiced in different but equivalent manners apparent to those skilled in the art having the benefit of the teachings herein. No limitations are intended to the details of construction or design herein shown, other than as described in the claims below. It is therefore evident that the particular embodiments disclosed above may be altered or modified and all such variations are considered within the scope of the disclosed subject matter. Accordingly, the protection sought herein is as set forth in the claims below.
DETAILED DESCRIPTION

Hereinafter, exemplary embodiments of the disclosure will be described in detail. Additionally, in the description and the accompanying drawings in each of the following embodiments, substantially the same or equivalent parts are designated by the same reference numerals.

First Embodiment

FIG. 1 is a block diagram showing a configuration of a display device 100 according to the disclosure. The display device 100 is an active matrix drive type liquid crystal display device. The display device 100 includes a display panel 11, a timing controller 12, a gate driver 13, and a source driver 14.

The display panel 11 is composed of a semiconductor substrate in which a plurality of pixel units P11 to Pnm and pixel switches M11 to Mnm (n and m being natural numbers of 2 or more) are arranged in a matrix. The display panel 11 includes n gate lines GL1 to GLn, each of which is a scanning line extending in the horizontal direction, and m source lines SL1 to SLm arranged to intersect the gate lines GL1 to GLn. The pixel units P11 to Pnm and the pixel switches M11 to Mnm are provided at the intersections of the gate lines GL1 to GLn and the source lines SL1 to SLm. The pixel switches M11 to Mnm are controlled to be turned on or off according to gate signals Vg1 to Vgn supplied from the gate driver 13. The pixel units P11 to Pnm receive a drive voltage (gradation voltage) corresponding to the video data from the source driver 14. Specifically, drive voltage signals Dv1 to Dvm are output from the source driver 14 to the source lines SL1 to SLm, and when the pixel switches M11 to Mnm are respectively turned on, the drive voltage signals Dv1 to Dvm are applied to the pixel units P11 to Pnm. Accordingly, each of the pixel electrodes of the pixel units P11 to Pnm is charged and the luminance is controlled.

When the display device 100 is a liquid crystal display device, the pixel units P11 to Pnm respectively include transparent electrodes connected to the source lines SL1 to SLm via the pixel switches M11 to Mnm, and a liquid crystal enclosed between the semiconductor substrate and a facing substrate provided facing the semiconductor substrate and having one transparent electrode formed on the entire surface. A display is performed by changing the transmittance of the liquid crystal with respect to the backlight inside the display device according to the potential difference between the drive voltage (gradation voltage) applied to the pixel units P11 to Pnm and the facing substrate voltage.

The timing controller 12 generates a series (serial signal) of pixel data pieces PD that represent the luminance level of each pixel in, for example, 8-bit 256-stage luminance gradation on the basis of the video data VS. Further, the timing controller 12 generates an embedded clock system clock signal CLK having a constant clock period on the basis of the synchronization signal SS. The timing controller 12 generates a video data signal VDS which is a serial signal in which a series of pixel data pieces PD and a clock signal CLK are integrated, and supplies the video data signal VDS to the source driver 14 to control the display of video data. The video data signal VDS is configured as a video data signal serialized according to the number of transmission lines for each predetermined number of source lines. In this embodiment, the video data signal VDS for one frame is configured by serially continuing n pixel data piece groups, each of which consists of m pixel data pieces PD.
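As a rough illustration of this frame format (one group per gate line, m pixel data pieces per group), the following sketch flattens an n x m frame into the serial order described above; the toy panel size and integer gradation values are assumptions.

```python
# Frame serialization: one frame of the video data signal VDS is n groups in
# sequence, one group per gate line, each holding that line's m pixel data
# pieces. Data types and panel size are assumed for illustration.

from typing import List, Sequence

def serialize_frame(frame: Sequence[Sequence[int]]) -> List[int]:
    """Flatten an n x m frame of pixel data pieces into the serial VDS order."""
    serial = []
    for line_group in frame:       # one group per gate line GL1..GLn
        serial.extend(line_group)  # m pixel data pieces per group
    return serial

# 8-bit gradation values for a toy 3-line x 4-column panel (n=3, m=4).
frame = [[0, 64, 128, 255], [10, 20, 30, 40], [5, 5, 5, 5]]
print(serialize_frame(frame))  # 12 pixel data pieces, line by line
```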
Each of the n pixel data piece groups is a pixel data piece group consisting of pixel data pieces corresponding to the gradation voltage to be supplied to the pixels on each one horizontal scanning line (that is, each of the gate lines GL1 to GLn). By the operation of the source driver 14, the drive voltage signals Dv1 to Dvm to be supplied to the n×m pixel units (that is, pixel units P11 to Pnm) are applied via the source lines on the basis of the m×n pixel data pieces PD.

Further, the timing controller 12 generates a frame synchronization signal FS indicating the timing of each frame of the video data signal VDS on the basis of the synchronization signal SS, and supplies the frame synchronization signal FS to source drivers 14-1 to 14-p. Further, the timing controller 12 generates a gate timing signal GS that controls the operation timing of the gate driver 13 on the basis of the synchronization signal SS, and supplies the gate timing signal GS to the gate driver 13.

The gate driver 13 receives the gate timing signal GS from the timing controller 12 and sequentially supplies the gate signals Vg1 to Vgn to the gate lines GL1 to GLn on the basis of the clock timing included in the gate timing signal GS. By supplying the gate signals Vg1 to Vgn, the pixel units P11 to Pnm are selected for each pixel row. Then, the gradation voltage is written to the pixel electrode by applying the drive voltage signals Dv1 to Dvm from the source driver 14 to the selected pixel units. In other words, m pixel units arranged along the extension direction of the gate line (that is, a horizontal column) are selected as the supply target of the drive voltage signals Dv1 to Dvm by the operation of the gate driver 13. The source driver 14 applies the drive voltage signals Dv1 to Dvm to the selected horizontal column of pixel units to display a color corresponding to the voltage. The screen display for one frame is performed by repeating the application of the drive voltage signals in the extension direction (that is, the vertical direction) of the data line while selectively switching the pixel unit for one horizontal column selected as the supply target of the drive voltage signals Dv1 to Dvm.

The source drivers 14-1 to 14-p receive the video data signal VDS from the timing controller 12, generate the drive voltage signals Dv1 to Dvm corresponding to a multi-valued level gradation voltage according to the number of gradations shown in the video data signal VDS, and apply the drive voltage signals to the pixel units P11 to Pnm via the source lines SL1 to SLm. Additionally, in the following description, the drive voltage signals Dv1 to Dvm are referred to as gradation voltage signals Dv1 to Dvm. Further, one of the gradation voltage signals Dv1 to Dvm is also simply referred to as a gradation voltage signal Dv.

The source drivers 14-1 to 14-p are provided for each of a predetermined number of source lines obtained by dividing the source lines SL1 to SLm. The number of source lines driven by each source driver corresponds to the number of output channels of the source driver. For example, if one source driver has an output of 960 channels and the display panel has one source line per pixel column, the source lines are driven by 12 source drivers for the 4K panel and 24 source drivers for the 8K panel. Each of the source drivers 14-1 to 14-p is formed on a different semiconductor integrated circuit (IC) chip. Each of the source drivers 14-1 to 14-p has a common configuration.
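The row-at-a-time scanning just described can be sketched as a loop over gate lines, with hypothetical callables standing in for the gate driver and source driver; nothing here beyond the ordering comes from the source.

```python
# Line-at-a-time drive: select one gate line, apply the m drive (gradation)
# voltages through the source lines, and repeat down the panel for one frame.

from typing import Callable, Sequence

def scan_frame(frame: Sequence[Sequence[float]],
               select_gate_line: Callable[[int], None],
               drive_source_lines: Callable[[Sequence[float]], None]) -> None:
    for row, voltages in enumerate(frame):  # one pass = one displayed frame
        select_gate_line(row)               # gate signal turns on the pixel switches
        drive_source_lines(voltages)        # Dv1..Dvm charge the pixel electrodes

scan_frame(
    [[1.0, -1.0], [2.0, -2.0]],
    select_gate_line=lambda r: print(f"gate line GL{r + 1} selected"),
    drive_source_lines=lambda v: print(f"  drive voltages {v}"),
)
```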
In the following description, the source drivers 14-1 to 14-p are collectively simply referred to as the "source driver 14" when describing such a common configuration.

FIG. 2 is a block diagram showing an internal configuration of the source driver 14. The source driver 14 includes a data latch unit 21, a gradation voltage conversion unit 22, and an output unit 23. The data latch unit 21 sequentially captures a series of pixel data pieces PD included in the video data signal VDS supplied from the timing controller 12. Then, the data latch unit 21 outputs the captured pixel data pieces PD as pixel data Q1 to Qj to the gradation voltage conversion unit 22 in response to the acquisition of the pixel data pieces PD for j channels. The gradation voltage conversion unit 22 converts the pixel data Q1 to Qj supplied from the data latch unit 21 into positive or negative gradation voltages A1 to Aj having voltage values corresponding to the luminance gradation represented by the pixel data and supplies the result to the output unit 23. The output unit 23 generates signals obtained by amplifying the gradation voltages A1 to Aj as gradation voltage signals Dv1 to Dvj and outputs the signals to the source lines SL1 to SLj. Further, the output unit 23 has a circuit configuration for detecting whether or not the adjacent channels of the source lines SL1 to SLj are short-circuited.

Additionally, the source lines SL1 to SLj have a configuration in which the source line receiving the positive gradation voltage signal Dv (hereinafter, referred to as the positive channel) and the source line receiving the negative gradation voltage signal Dv (hereinafter, referred to as the negative channel) are alternately arranged. That is, gradation voltage signals Dv having different polarities are supplied to the adjacent channels of the source lines SL1 to SLj.

The gradation voltage conversion unit 22 includes a positive decoder that generates a positive gradation voltage and a positive amplifier that amplifies and outputs the gradation voltage. Further, the gradation voltage conversion unit 22 includes a negative decoder that generates a negative gradation voltage and a negative amplifier that amplifies and outputs the gradation voltage.

FIG. 3 is a circuit diagram showing a part of the configuration of the gradation voltage conversion unit 22 and the output unit 23. Additionally, here, a case in which a source line SLA is a positive channel and a source line SLB is a negative channel is shown as an example. The positive decoder 31p generates a positive gradation voltage (inp in FIG. 3) on the basis of the pixel data Qp and outputs the positive gradation voltage. The output unit of the positive decoder 31p is connected to a non-inverting input terminal of a positive amplifier 32p. The positive amplifier 32p receives an input of a positive electrode side input voltage inp, which is the output voltage of the positive decoder 31p, to the non-inverting input terminal and amplifies and outputs the positive input voltage. The output terminal of the positive amplifier 32p is negatively feedback-connected to the inverting input terminal thereof. In the following description, the connection node between the output terminal and the inverting input terminal of the positive amplifier 32p is referred to as a node fbo. Further, the output terminal of the positive amplifier 32p is connected to one end of a resistor R10 via the node fbo. The resistor R10 is provided in the output unit of the source driver 14 as a protective resistor against electro-static discharge (ESD).
The resistor R10 is composed of, for example, a resistor element having a resistance value of 0.2 kΩ. The other end of the resistor R10 is connected to the source output terminal OT1, which is a terminal outputting the positive gradation voltage signal Dv of the source driver 14. The source output terminal OT1 is connected to the source line SLA of the display panel 11. The source line SLA includes source line loads R11 to R1k and source line capacitances C11 to C1k.

A negative decoder 31n generates a negative gradation voltage (inn in FIG. 3) on the basis of the pixel data Qn and outputs the negative gradation voltage. The output unit of the negative decoder 31n is connected to a non-inverting input terminal of a negative amplifier 32n. The negative amplifier 32n receives an input of a negative electrode side input voltage inn, which is the output voltage of the negative decoder 31n, to the non-inverting input terminal and amplifies and outputs the negative input voltage. The output terminal of the negative amplifier 32n is negatively feedback-connected to the inverting input terminal. In the following description, the connection node between the output terminal and the inverting input terminal of the negative amplifier 32n is referred to as a node fbe. Further, the output terminal of the negative amplifier 32n is connected to one end of the resistor R20 via the node fbe. The resistor R20 is provided in the output unit of the source driver 14 as a protective resistor against ESD, similarly to the resistor R10. The resistor R20 is composed of, for example, a resistor element having a resistance value of 0.2 kΩ. The other end of the resistor R20 is connected to the source output terminal OT2, which is a terminal outputting the negative gradation voltage signal Dv of the source driver 14. The source output terminal OT2 is connected to the source line SLB of the display panel 11. The source line SLB includes source line loads R21 to R2k and source line capacitances C21 to C2k.

Further, the output unit 23 is provided with a power-down signal generation unit 33 and a voltage comparison circuit 34. The power-down signal generation unit 33 generates power-down signals PDS and XPDS that reduce the drive capability of the negative amplifier 32n and supplies the power-down signals to the negative amplifier 32n. The power-down signals PDS and XPDS of this embodiment are signals that reduce the drive capability of the negative amplifier 32n to 1/10. Specifically, the negative amplifier 32n of this embodiment is composed of multi-comb (multi-stage) amplifiers in which a plurality of amplifiers is connected in parallel and is able to change the output current by switching the number of amplifier stages. The power-down signals PDS and XPDS are signals for switching the number of stages of such an amplifier.

FIG. 4 is a circuit diagram showing a configuration of an output end 35 of the negative amplifier 32n. The output end 35 of the negative amplifier 32n is composed of ten combs (ten stages) of amplifiers AP1 to AP10 connected in parallel. The amplifier AP1 of the first stage has a configuration in which the drains of a P-channel type MOS transistor PM1 and an N-channel type MOS transistor NM1 are connected to each other. In the transistor PM1, the power supply voltage VDD is applied to the source and the gate is connected to a first drive line LH, which is the supply line of the positive electrode side drive voltage.
In the transistor NM1, the ground potential VSS is applied to the source and the gate is connected to a second drive line LL, which is the supply line of the negative electrode side drive voltage. The amplifiers AP2 to AP10 from the second stage onward include a P-channel type MOS transistor and an N-channel type MOS transistor connected to each other at the drains thereof, and a changeover switch changing the connection/non-connection of the transistors with respect to the first drive line LH and the second drive line LL. The changeover switch switches the connection on the basis of the power-down signals PDS and XPDS. The power-down signal PDS and the power-down signal XPDS are signals whose signal levels complementarily change to a logic level of 0 and a logic level of 1.

In this embodiment, the gate of the P-channel type MOS transistor PM2 of the amplifier AP2 is connected to the first drive line LH via the changeover switch S22. For example, the gate of the P-channel type MOS transistor PM2 is connected to the first drive line LH when the power-down signal PDS has a logic level of 0 and is not connected to the first drive line LH when the power-down signal PDS has a logic level of 1. Further, the gate of the P-channel type MOS transistor PM2 is connected to the drain of the switch transistor S21 composed of a P-channel type MOS transistor. In the switch transistor S21, the power supply voltage VDD is applied to the source and the power-down signal XPDS is applied to the gate. Since the switch transistor S21 is turned on when the power-down signal XPDS has a logic level of 0, the power supply voltage VDD is applied to the gate of the P-channel type MOS transistor PM2. Further, since the switch transistor S21 is turned off when the power-down signal XPDS has a logic level of 1, the power supply voltage VDD is not applied to the gate of the P-channel type MOS transistor PM2 and the positive electrode side drive voltage supplied to the first drive line LH is applied thereto.

Further, the gate of the N-channel type MOS transistor NM2 of the amplifier AP2 is connected to the second drive line LL via the changeover switch S23. The gate of the N-channel type MOS transistor NM2 is connected to the second drive line LL when the power-down signal XPDS has a logic level of 1 and is not connected to the second drive line LL when the power-down signal XPDS has a logic level of 0. Further, the gate of the N-channel type MOS transistor NM2 is connected to the drain of the switch transistor S24 composed of an N-channel type MOS transistor. In the switch transistor S24, the ground voltage VSS is applied to the source and the power-down signal PDS is applied to the gate. Since the switch transistor S24 is turned on when the power-down signal PDS has a logic level of 1, the ground potential VSS is applied to the gate of the N-channel type MOS transistor NM2. Further, since the switch transistor S24 is turned off when the power-down signal PDS has a logic level of 0, the ground potential VSS is not applied to the gate of the N-channel type MOS transistor NM2 and the negative electrode side drive voltage supplied to the second drive line LL is applied thereto.

The amplifiers from the third stage onward also have the same configuration as that of the amplifier AP2 of the second stage.
For example, when the power-down signal PDS has a logic level of 0 and the power-down signal XPDS has a logic level of 1, the amplifier of each stage is connected to the first drive line LH and the second drive line LL, and the output end 35 of the negative amplifier 32n has a ten-comb amplifier configuration. On the other hand, when the power-down signal PDS has a logic level of 1 and the power-down signal XPDS has a logic level of 0, only the amplifier AP1 of the first stage is connected to the first drive line LH and the second drive line LL, and the output end 35 of the negative amplifier 32n has a one-comb amplifier configuration.

Referring to FIG. 3 again, the voltage comparison circuit 34 is composed of a comparator CMP. The first input terminal (indicated by in in the figure) of the comparator CMP is connected to the node fbe. Further, the second input terminal (indicated by Ref in the figure) of the comparator CMP is connected to the node n0, which is the connection node between the output unit of the negative decoder 31n and the non-inverting input terminal of the negative amplifier 32n. The comparator CMP outputs a comparison result signal CRS which has a signal level of a logic level of 1 (that is, H level) when the voltage of the node fbe is larger than the voltage of the node n0 and has a signal level of a logic level of 0 (that is, L level) when the voltage of the node fbe is smaller than the voltage of the node n0.

The voltage comparison circuit 34 is a circuit provided to determine whether the source line SLA and the source line SLB, which are adjacent channels, are short-circuited. The comparison result signal CRS output from the voltage comparison circuit 34 is output from the source driver 14 and is supplied to a short-circuit determination unit (not shown) provided in the timing controller 12.

The voltage comparison operation of the voltage comparison circuit 34 will be described with reference to FIG. 5. Additionally, here, a case in which the positive electrode side input voltage inp is 15V and the negative electrode side input voltage inn is 1V will be described as an example. When the source line SLA and the source line SLB are short-circuited at the far end (that is, the end portion far from the source driver 14), a short-circuit current flows in a direction indicated by a dashed arrow in the figure from the positive amplifier 32p toward the negative amplifier 32n. The current value of the short-circuit current is controlled by the drive capability of the negative amplifier 32n. As described above, since the drive capability of the negative amplifier 32n is reduced to 1/10, the current value of the short-circuit current Ishort is 1 mA. When the resistance values of the resistor R10 and the resistor R20 are 0.2 kΩ and the source line loads of the source lines SLA and SLB are 5 kΩ, a relationship of "1 mA=(15V−fbe)÷(0.4 kΩ+10 kΩ)" is obtained and the voltage of the node fbe is obtained as fbe≈5V. In this way, when the source line SLA and the source line SLB are short-circuited, the voltage value of the node fbe becomes 5V and becomes larger than the voltage value of 1V of the negative electrode side input voltage inn. Thus, the comparison result signal CRS having a logic level of 1 is output from the comparator CMP.
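A toy model of the comb switching of FIG. 4: the power-down signal selects either all ten parallel stages or just the first, scaling the available output current accordingly. The per-stage current is an assumed illustrative value, not from the source.

```python
# Drive-capability switching: PDS=1 (power-down) leaves one comb connected to
# the drive lines; PDS=0 connects all ten, per the FIG. 4 description.

TOTAL_STAGES = 10
I_PER_STAGE_MA = 1.0  # assumed output current contribution per comb, mA

def active_stages(pds: int) -> int:
    """Return the number of amplifier combs connected for a given PDS level."""
    return 1 if pds else TOTAL_STAGES

for pds in (0, 1):
    stages = active_stages(pds)
    print(f"PDS={pds}: {stages} comb(s), "
          f"drive capability ~{stages * I_PER_STAGE_MA:.0f} mA")
```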
On the other hand, when the source line SLA and the source line SLB are not short-circuited, the voltage of the node fbe becomes the voltage value of 1V, which is the same as that of the negative electrode side input voltage inn, and the comparison result signal CRS having a logic level of 0 is output from the comparator CMP. The comparison result signal CRS is supplied to a short-circuit determination unit (not shown) provided in the timing controller 12. The short-circuit determination unit determines that the short-circuit occurs between the source lines when the comparison result signal CRS having a logic level of 1 is received and determines that the short-circuit does not occur between the source lines when the comparison result signal CRS having a logic level of 0 is received.

Additionally, in this embodiment, in order to detect the occurrence of the short-circuit even when the short-circuit current occurring between the source lines SLA and SLB is small, the power-down signal PDS is supplied to reduce the drive capability of the negative amplifier 32n. Originally, the negative amplifier 32n has a high drive capability in order to drive the panel load (source line load) of the display panel 11. Thus, if the drive capability of the negative amplifier 32n is not reduced, the current load can be driven even when the short-circuit occurs. Thus, the voltage of the node fbe is not reduced and the short-circuit may not be detected. Here, in this embodiment, the output current of the negative amplifier 32n is reduced by decreasing the number of stages (combs) of the amplifiers constituting the negative amplifier 32n, and the drive capability is thereby reduced.

Additionally, FIGS. 3 and 5 show a configuration in which the nodes located in the input unit and the output unit of the negative amplifier 32n are respectively connected to a pair of input terminals of the comparator CMP and the voltage is compared on the negative side. However, unlike this, the nodes located at the input unit and the output unit of the positive amplifier 32p may be connected to a pair of input terminals of the comparator CMP and the voltage may be compared on the positive side to detect the occurrence of the short-circuit.

Further, in the source driver 14 of this embodiment, the voltage comparison circuit 34 is not provided for each pair of source lines (hereinafter, referred to as a source line pair) constituting the adjacent channels; instead, one voltage comparison circuit is configured to be able to detect the occurrence of the short-circuit for twenty-four channels by switching the source line which is the voltage comparison target by time division.

FIG. 6 is a diagram showing an image of an entire chip of the source driver 14. A voltage comparison circuit 45 is configured to be able to compare the voltage while changing the target source line by time division, covering twenty-four channels, according to the switching control by a switching control circuit 44. Additionally, here, the power-down signal generation unit 33 is not shown. An L/S circuit 41p is a latch circuit which captures the pixel data pieces PD (indicated by GSP in the figure) for six channels on the positive electrode side. A positive decoder 42p is a positive electrode side decoder which generates a gradation voltage for six channels on the positive electrode side on the basis of the pixel data pieces output from the L/S circuit 41p.
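The fbe calculation in this passage can be checked directly: with the powered-down amplifier limiting the loop current to 1 mA, Ohm's law over the two ESD resistors and two source line loads gives fbe of roughly 5 V, well above inn = 1 V. A sketch reproducing that arithmetic, using the component values quoted in the text:

```python
# Worked version of the short-circuit relationship quoted above:
# 1 mA = (15 V - fbe) / (0.4 kOhm + 10 kOhm).

V_INP = 15.0       # positive electrode side input voltage, V
V_INN = 1.0        # negative electrode side input voltage, V
I_SHORT = 1e-3     # short-circuit current set by the reduced drive capability, A
R_ESD = 200.0      # R10 and R20, ohms each
R_LINE = 5_000.0   # source line load of each of SLA and SLB, ohms

r_path = 2 * R_ESD + 2 * R_LINE     # 0.4 kOhm + 10 kOhm = 10.4 kOhm
v_fbe = V_INP - I_SHORT * r_path    # 15 V - 10.4 V = 4.6 V (about 5 V)

short_detected = v_fbe > V_INN      # comparator CMP: node fbe versus node n0
print(f"fbe = {v_fbe:.1f} V, short detected: {short_detected}")
```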
The positive electrode side gradation voltage output from the positive decoder 42p is input to the non-inverting input terminal of the positive amplifier 32p. An L/S circuit 41n is a latch circuit which captures the pixel data pieces PD (indicated by GSN in the figure) for six channels on the negative electrode side. A negative decoder 42n is a negative electrode side decoder which generates a gradation voltage for six channels on the negative electrode side on the basis of the pixel data pieces output from the L/S circuit 41n. The negative electrode side gradation voltage output from the negative decoder 42n is input to the non-inverting input terminal of the negative amplifier 32n.

The switching control circuit 44 receives an enable signal en indicating the start of the execution of the voltage comparison process and switches a switch SW2. The switch SW2 is provided between the output terminal and the inverting input terminal of the negative amplifier 32n on one side and the detection line JL on the other side. When the switch SW2 is turned on, the output terminal and the inverting input terminal of the negative amplifier 32n are connected to an input terminal of the comparator CM1 via the detection line JL. Additionally, FIG. 6 shows the positive amplifier 32p and the negative amplifier 32n corresponding to the source lines for two channels and the switch SW2, but in fact, six similar configurations are provided for twelve channels. The switching control circuit 44 switches the switch SW2 in a time division manner. As a result, the source lines which are voltage comparison targets are sequentially switched and finally the voltage comparison is performed on the source lines for twelve channels, so that the presence or absence of the short-circuit with the adjacent channel is detected.

The L/S circuit 41p, the L/S circuit 41n, the positive decoder 42p, the negative decoder 42n, and the switching control circuit 44 are provided in the left side region (hereinafter, referred to as a chip left side region LA) of the chip constituting the source driver 14. Additionally, the right side region (not shown) of the chip is provided with a configuration similar to these. A chip center region CA located between the chip left side region LA and the chip right side region is provided with the voltage comparison circuit 45, a level-down circuit 46, and a signal output circuit 47. The voltage comparison circuit 45 includes the comparator CM1, which compares the voltage in the source lines for twelve channels (six channels on the positive electrode side and six channels on the negative electrode side) in the chip left side region LA, and a comparator CM2, which compares the voltage in the source lines for twelve channels in the right side region.

The first input terminal of the comparator CM1 is connected to the detection line JL. Further, the second input terminal of the comparator CM1 is connected to the connection node (that is, the negative electrode side input voltage inn) between the output unit of the negative decoder 42n and the non-inverting input terminal of the negative amplifier 32n. The comparator CM1 outputs a comparison result signal having a signal level of a logic level of 1 (that is, H level) when the voltage of the detection line JL is larger than the voltage of the negative electrode side input voltage inn and outputs a comparison result signal having a signal level of a logic level of 0 (that is, L level) when the voltage of the detection line JL is equal to or smaller than the negative electrode side input voltage inn.
Additionally, since the comparator CM2 for the right side region also has the same configuration and performs the same operation, the description is omitted here. The level-down circuit 46 lowers the signal level of the comparison result signals output from the comparators CM1 and CM2 and outputs them. The signal output circuit 47 outputs the level-shifted comparison result signal as the determination signal JS.

As described above, the source driver 14 of this embodiment compares the voltage (that is, the negative electrode side input voltage inn) of the node n0 connected to the input terminal of the negative amplifier 32n with the voltage of the node fbe connected to the output terminal of the negative amplifier 32n. Accordingly, it is determined whether or not the short-circuit occurs between the source line SLA and the source line SLB which are adjacent to each other. When the short-circuit occurs between the source lines SLA and SLB, the voltage of the node fbe becomes larger than the voltage of the node n0 due to the voltage drop (actually, the voltage rise, because the voltage drop is on the negative electrode side) caused when the short-circuit current flows through the source lines SLA and SLB. Thus, it is determined that the short-circuit occurs if the voltage of the node fbe is larger than the voltage of the node n0 as the voltage comparison result, and it is determined that the short-circuit does not occur if the voltage of the node fbe is not larger than the voltage of the node n0.

According to such a configuration, since the voltage comparison circuit is provided in the IC chip constituting the source driver 14, it is possible to detect the occurrence of the short-circuit between the adjacent channels of the source line with a simple configuration. Further, since it is possible to detect the occurrence of the short-circuit of the entire chip by switching the source line as the voltage comparison target in time division, the time and effort required for inspection is small and the occurrence of the short-circuit can be detected quickly.

Second Embodiment

Next, a second embodiment of the disclosure will be described. A display device of the second embodiment is different from the display device of the first embodiment in the configuration of an output unit including a voltage comparison circuit in a source driver.

FIG. 7 is a circuit diagram showing a part of the configuration of the gradation voltage conversion unit 22 and the output unit 23 of the source driver 14 of this embodiment. The first input terminal (indicated by in in the figure) of the comparator CMP constituting the voltage comparison circuit 34 is connected to the node oute, which is the connection node between the other end of the resistor R20 and the source output terminal OT2. Further, the second input terminal (indicated by Ref in the figure) of the comparator CMP is connected to the node n0, which is the connection node between the output unit of the negative decoder 31n and the non-inverting input terminal of the negative amplifier 32n. The comparator CMP outputs a comparison result signal CRS which has a signal level of a logic level of 1 (that is, H level) when the voltage of the node oute is larger than the voltage of the node n0 and has a signal level of a logic level of 0 (that is, L level) when the voltage of the node oute is smaller than the voltage of the node n0.
The source driver 14 of this embodiment does not include the power-down signal generation unit 33 of the first embodiment and does not supply the power-down signal PDS to the negative amplifier 32n. Thus, the drive capability of the negative amplifier 32n is the same as that in the normal operation.

FIG. 8 is a diagram schematically showing a short-circuit current flowing when the short-circuit occurs in the source lines SLA and SLB. When the source line SLA and the source line SLB are short-circuited at the far end, the short-circuit current flows as indicated by a dashed arrow in the figure from the positive amplifier 32p toward the negative amplifier 32n. As described above, since the drive capability of the negative amplifier 32n is not reduced in this embodiment, the voltage of the node fbe also becomes 1V when the voltage of the node n0 is set to 1V. Since the voltage of the node oute differs from the voltage of the node fbe by the voltage drop in the resistor R20, a relationship of "oute=1V+(R20×Ishort)=1V+Vdrop" is obtained. Thus, when the short-circuit occurs, the voltage of the node oute becomes larger than the voltage of the node fbe.

In this way, when the source line SLA and the source line SLB are short-circuited, the voltage of the node oute becomes larger than the voltage of the node fbe. Thus, the comparison result signal CRS having a logic level of 1 is output from the comparator CMP. On the other hand, when the source line SLA and the source line SLB are not short-circuited, the voltage of the node oute has the same voltage value as that of the voltage of the node fbe and the comparison result signal CRS having a logic level of 0 is output from the comparator CMP.

FIG. 9 is a diagram showing an image of an entire chip of the source driver 14 of this embodiment. The switching control circuit 44 receives an enable signal en indicating the start of the execution of the voltage comparison process and switches a switch SW4. The switch SW4 switches the connection and non-connection between the output terminal and the inverting input terminal of the negative amplifier 32n and the first detection line JL1. Further, the switch SW4 switches the connection and non-connection between the second detection line JL2 and the node oute, which is the connection node between the other end of the resistor R20 and the output terminal OT2. When the switch SW4 is turned on and these are in the "connection" state, the output terminal and the inverting input terminal of the negative amplifier 32n are connected to one input terminal of the comparator CM1 via the first detection line JL1, and the node oute is connected to the other input terminal of the comparator CM1 via the second detection line JL2.

The comparator CM1 outputs a comparison result signal having a logic level of 1 (that is, H level) when the voltage of the second detection line JL2 is larger than the voltage of the first detection line JL1. Further, the comparator CM1 outputs a comparison result signal having a logic level of 0 (that is, L level) when the voltage of the second detection line JL2 is equal to or smaller than the voltage of the first detection line JL1.

As described above, the source driver 14 of this embodiment compares the voltage of the node fbe connected to the output terminal of the negative amplifier 32n with the voltage of the node oute connected to the source output terminal OT2. Accordingly, it is determined whether or not the short-circuit occurs between the source line SLA and the source line SLB which are adjacent to each other.
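A worked version of this second embodiment's detection: because the drive capability is not reduced, negative feedback holds fbe at the programmed inn, and the short-circuit current instead reveals itself as a voltage drop across the ESD resistor R20 (oute = inn + R20 × Ishort). The short-circuit current value below is an illustrative assumption; the 1 V level and the 0.2 kΩ resistor come from the text.

```python
# Second-embodiment detection: compare oute (after R20) against fbe (before
# R20); any short-circuit current produces a measurable drop across R20.

V_INN = 1.0      # negative electrode side input voltage, V (fbe follows it)
R20 = 200.0      # ESD protection resistor, ohms
I_SHORT = 5e-3   # assumed short-circuit current with full drive capability, A

v_fbe = V_INN                    # negative feedback holds the amp output at inn
v_oute = v_fbe + R20 * I_SHORT   # 1 V + 1 V drop = 2 V at the output terminal

short_detected = v_oute > v_fbe  # comparator CM1: JL2 (oute) vs JL1 (fbe)
print(f"oute = {v_oute:.1f} V, fbe = {v_fbe:.1f} V, short: {short_detected}")
```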
When the short-circuit occurs between the source lines SLA and SLB, the voltage of the node oute rises above the voltage of the node fbe by the amount of the voltage drop in the resistor R20 (the drop appears as a rise because it is on the negative electrode side) as the short-circuit current flows through the source lines SLA and SLB. Thus, it is determined that the short-circuit occurs if the voltage of the node oute is larger than the voltage of the node fbe, and it is determined that the short-circuit does not occur if the voltage of the node oute is not larger than the voltage of the node fbe. According to such a configuration, it is possible to detect the occurrence of a short-circuit between adjacent channels of the source lines with a simple configuration in which the voltage comparison circuit is provided in the IC chip, as in the first embodiment. Further, since it is not necessary to provide the power-down signal generation unit 33, unlike the first embodiment, it is possible to detect the occurrence of a short-circuit with a simpler configuration than that of the first embodiment. Further, in the first embodiment, since the drive capability of the negative amplifier 32n is reduced, it is necessary to detect the short-circuit by comparing the voltages during a blank period in which pixel data is not supplied. However, in this embodiment, since the drive capability of the negative amplifier 32n is not reduced, it is possible to detect the short-circuit by performing the voltage comparison while actually generating and supplying the gradation voltage signal based on pixel data. Third Embodiment Next, a third embodiment of the disclosure will be described. A display device of the third embodiment is different from the display devices of the first embodiment and the second embodiment in that a configuration for displaying a detection result of the occurrence of a short-circuit between adjacent channels of the source lines on a display panel is provided. FIG. 10 is a circuit diagram showing a part of the configuration of the source driver of this embodiment. The source driver of this embodiment includes an IF/data processing circuit 51, latch circuits 52A, 52B and 52C, decoders 53A, 53B and 53C, a short-circuit detection determination circuit 54, a data writing circuit 55, and a counter 56. The data writing circuit 55 and the counter 56 are provided in the IF/data processing circuit 51. Additionally, the source driver of this embodiment includes the same power-down signal generation circuit as that of the first embodiment, but the circuit is not shown in FIG. 10. The IF/data processing circuit 51 is an interface circuit which receives the video data signal VDS and the frame synchronization signal FS transmitted from the timing controller 12. Further, the IF/data processing circuit 51 performs various data processes on the basis of the video data signal VDS and the frame synchronization signal FS. For example, the IF/data processing circuit 51 includes a serial-parallel conversion circuit (not shown), converts a series of pixel data pieces PD included in the video data signal VDS into parallel data, and supplies the parallel data to the latch circuits 52A, 52B, and 52C. Further, the IF/data processing circuit 51 receives a mode switching signal MS from the outside of the source driver and switches the operation mode to either the normal operation mode or the short-circuit detection mode.
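The mode selection handled by the IF/data processing circuit 51 can be pictured with a small sketch; it is purely illustrative, and the forcing of black pixel data in the detection mode anticipates the behavior described next:

def select_pixel_data(mode: str, video_pixel: int) -> int:
    # Normal operation mode: pass the pixel data derived from the video
    # data signal VDS through. Short-circuit detection mode: force black
    # (pixel value 0) on the entire screen.
    return 0 if mode == "short_circuit_detection" else video_pixel

print(select_pixel_data("normal", 137))                   # 137
print(select_pixel_data("short_circuit_detection", 137))  # 0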
In the normal operation mode, an operation for performing a display based on the video data signal VDS from the timing controller 12, that is, an operation of supplying the pixel data pieces PD acquired from the video data signal VDS to the latch circuits 52A, 52B and 52C, is performed. On the other hand, in the short-circuit detection mode, the IF/data processing circuit 51 supplies the pixel data pieces for displaying black on the entire screen of the display panel 11 to the latch circuits 52A, 52B, and 52C. The latch circuits 52A, 52B and 52C capture the pixel data pieces PD output from the IF/data processing circuit 51 and supply the captured pixel data pieces PD to the decoders 53A, 53B and 53C as the pixel data Q. The decoders 53A, 53B and 53C generate gradation voltages on the basis of the pixel data Q and supply the gradation voltages to the positive amplifier 32p and the negative amplifier 32n. In FIG. 10, the gradation voltage supplied to the positive amplifier 32p is indicated by inp and the gradation voltage supplied to the negative amplifier 32n is indicated by inn. The short-circuit detection determination circuit 54 is a circuit for determining whether or not the source line SLA and the source line SLB, which are adjacent channels, are short-circuited. The short-circuit detection determination circuit 54 is composed of a comparator (not shown) which compares the voltage of the node n0 with the voltage of the node fbe, similarly to the voltage comparison circuit 34 of the first embodiment. The short-circuit detection determination circuit 54 outputs a comparison result signal CRS which has a signal level of a logic level of 1 (H level) when the voltage of the node fbe is larger than the voltage of the node n0 and has a signal level of a logic level of 0 (L level) when the voltage of the node fbe is equal to or smaller than the voltage of the node n0. When the source line SLA and the source line SLB are short-circuited, similarly to the first embodiment, a short-circuit current flows from the positive amplifier 32p toward the negative amplifier 32n. Since the drive capability of the negative amplifier 32n is reduced to about 1/10 by a power-down signal (not shown), the voltage of the node fbe becomes about 5 V, for example, when the positive electrode side input voltage inp is 15 V, the negative electrode side input voltage inn is 1 V, each of the resistance values of the resistor R10 and the resistor R20 is 0.2 kΩ, and each of the source line loads of the source lines SLA and SLB is 5 kΩ. Thus, the comparison result signal CRS having a logic level of 1 is output from the short-circuit detection determination circuit 54. On the other hand, when the source line SLA and the source line SLB are not short-circuited, the voltage of the node fbe has the same voltage value of 1 V as the negative electrode side input voltage inn, and the comparison result signal CRS having a logic level of 0 is output from the short-circuit detection determination circuit 54. The short-circuit detection determination circuit 54 of this embodiment differs from the voltage comparison circuit of the first embodiment in that it supplies the comparison result signal CRS to the data writing circuit 55 in the IF/data processing circuit 51.
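The 5 V figure can be sanity-checked with a small calculation. The model below treats the negative amplifier at reduced drive capability as having an effective output resistance R_OUT; this modeling choice and the R_OUT value are assumptions of the sketch, not parameters given in the text:

# Component values quoted above; R_OUT is an assumed effective output
# resistance standing in for the weakened negative amplifier.
V_INP, V_INN = 15.0, 1.0   # positive / negative electrode side inputs (V)
R10 = R20 = 200.0          # 0.2 kOhm each
R_LINE = 5000.0            # source line load of SLA and of SLB, 5 kOhm
R_OUT = 4160.0             # assumption: weakened amplifier output resistance

i_short = (V_INP - V_INN) / (R10 + R_LINE + R_LINE + R20 + R_OUT)
v_fbe = V_INN + i_short * R_OUT  # node fbe is pulled up by the short current
print(round(v_fbe, 2))           # ~5.0 V, consistent with the value above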
The data writing circuit 55 is a circuit which receives the comparison result signal CRS and writes data to the source line pair adjacent to the outside of the source lines SLA and SLB, that is, a source line SLX which is the adjacent source line located on the side opposite to the source line SLB when viewed from the source line SLA, and a source line SLY which is the adjacent source line located on the side opposite to the source line SLA when viewed from the source line SLB. As described above, when the mode is switched to the short-circuit detection mode, the pixel data pieces for displaying black (pixel value 0) are supplied from the IF/data processing circuit 51 to the latch circuits 52A, 52B and 52C. In response to this, the gradation voltage for displaying black is output from the source driver, and a black screen is displayed on the display panel 11. In this state, when the comparison result signal CRS having a logic level of 1 is supplied from the short-circuit detection determination circuit 54 to the data writing circuit 55, the data writing circuit 55 supplies the pixel data piece for displaying white (pixel value 255) to the latch circuit 52A corresponding to the source line SLX and the latch circuit 52C corresponding to the source line SLY. In response to this, the gradation voltage for displaying white is output from the source driver, and white is displayed at the positions corresponding to the source lines SLX and SLY of the display panel 11. On the other hand, when the comparison result signal CRS having a logic level of 0 is supplied from the short-circuit detection determination circuit 54 to the data writing circuit 55, the data writing circuit 55 does not supply the pixel data piece to the latch circuits 52A and 52C, and the black screen is continuously displayed on the display panel 11. The counter 56 is a circuit that generates a count value indicating the source line pair which is the short-circuit detection target of the short-circuit detection determination circuit 54. FIG. 10 shows a configuration in which the short-circuit detection is performed for only the pair of the source lines SLA and SLB, but when the short-circuit detection of the entire chip is performed as described below, the source line pair of the detection target is sequentially switched. At that time, since the count value of the counter 56 indicates the source line pair which is the short-circuit detection target, the data writing circuit 55 can perform an operation of writing white to the source lines adjacent to the source line pair of the detection target. The counter 56 generates a count value by counting based on a clock signal included in the video data signal VDS, triggered, for example, by the supply of the mode switching signal MS to the IF/data processing circuit 51. FIG. 11 is a diagram showing an example of a short-circuit detection screen displayed on the display panel when the short-circuit of the source line SLA and the source line SLB is detected. Since it is the short-circuit detection mode, the entire display panel 11 is displayed in black. Then, in order to show the detection of the short-circuit of the source lines SLA and SLB, the positions corresponding to the source lines SLX and SLY, which are adjacent on the outside so as to sandwich the source lines SLA and SLB, are displayed in white. Next, a process operation of the short-circuit detection process executed by the source driver of this embodiment will be described with reference to the flowchart of FIG. 12.
The IF/data processing circuit 51 receives the mode switching signal MS requesting the switch to the short-circuit detection mode and switches the operation to the short-circuit detection mode (STEP 101). The IF/data processing circuit 51 supplies the pixel data piece PD for the short-circuit detection mode (in this embodiment, the pixel data piece PD corresponding to black) to each latch circuit. The gradation voltage is generated, amplified, and applied to the source lines to display a detection mode image (in this embodiment, a black screen) on the display panel 11 (STEP 102). The short-circuit detection determination circuit 54 detects the presence or absence of a short-circuit in the source lines SLA and SLB and supplies the comparison result signal CRS representing the detection result to the data writing circuit 55 of the IF/data processing circuit 51. The data writing circuit 55 determines whether the short-circuit is detected on the basis of the comparison result signal CRS (STEP 103). When it is determined that the short-circuit is detected (STEP 103: YES), the data writing circuit 55 supplies the pixel data piece PD for displaying a short-circuit occurrence detection image (in this embodiment, a white image) to the latch circuits corresponding to the source lines SLX and SLY, which are the source lines adjacent to the source lines SLA and SLB in which the occurrence of the short-circuit is detected. The gradation voltage is generated, amplified, and applied to the source lines to display a short-circuit detection image indicating the detection of the occurrence of the short-circuit on the display panel 11 (in this embodiment, a white display at the source line positions adjacent to the short-circuited source lines) (STEP 104). On the other hand, when it is determined that the occurrence of the short-circuit is not detected (STEP 103: NO), the short-circuit detection process is ended. Additionally, in the source driver of this embodiment, one short-circuit detection determination circuit is provided at the center portion of the chip, similarly to the source driver of the first embodiment, and the occurrence of a short-circuit in the source lines for twenty-four channels can be detected by switching the source lines subject to short-circuit detection in time division. FIG. 13 is a diagram showing an image of the entire source driver of this embodiment. A short-circuit detection determination circuit 61 performs the short-circuit detection while changing the target source lines in time division, twenty-four channels at a time, according to the switching control by the switching control circuit 44. The switching control circuit 44 receives an enable signal en indicating the start of the execution of the short-circuit detection process and sequentially switches the switch SW2 in response to this. The switch SW2 is provided between the output terminal and the inverting input terminal of the negative amplifier 32n and the detection line JL. When the switch SW2 is turned on, the output terminal and the inverting input terminal of the negative amplifier 32n are connected to the input terminal of the comparator CM1 in the short-circuit detection determination circuit 61.
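The flow of STEP 101 to STEP 104 described above can be summarized in a short sketch; it is illustrative only, and the callable detect_short stands in for the short-circuit detection determination circuit 54:

def run_short_circuit_detection(detect_short) -> str:
    # STEP 101: switch to the short-circuit detection mode on signal MS.
    # STEP 102: black pixel data is latched and a black screen is shown.
    screen = "black"
    # STEP 103: judge the comparison result signal CRS from circuit 54.
    if detect_short():
        # STEP 104: white is written to the adjacent lines SLX and SLY.
        screen = "black with white at the positions of SLX and SLY"
    # On STEP 103 "NO" the process ends with the black screen unchanged.
    return screen

print(run_short_circuit_detection(lambda: True))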
The short-circuit detection determination circuit 61 includes the comparator CM1, which compares the voltages in the source lines for twelve channels (six channels on the positive electrode side and six channels on the negative electrode side) in the first region that is responsible for half of the chip constituting the source driver, and the comparator CM2, which compares the voltages in the source lines for twelve channels in the second region that is responsible for the other half of the chip. The first input terminal of the comparator CM1 is connected to the detection line JL. Further, the second input terminal of the comparator CM1 is connected to the connection node (that is, the negative electrode side input voltage inn) between the output unit of the negative decoder 42n and the non-inverting input terminal of the negative amplifier 32n. The comparator CM1 outputs a comparison result signal CRS which has a signal level of a logic level of 1 (that is, H level) when the voltage of the detection line JL is larger than the negative electrode side input voltage inn and has a signal level of a logic level of 0 (that is, L level) when the voltage of the detection line JL is equal to or smaller than the negative electrode side input voltage inn. Additionally, the comparator CM2 has the same configuration in the second region of the chip and performs the same operation. The comparators CM1 and CM2 supply the comparison result signal CRS to the data writing circuit 55. The data writing circuit 55 supplies the pixel data piece PD for displaying the short-circuit detection image (in this embodiment, the pixel data piece corresponding to white) to the latch circuits corresponding to the source lines adjacent to the source lines in which the occurrence of the short-circuit is detected (that is, for which the comparison result signal CRS has a logic level of 1) on the basis of the comparison result signal CRS. Additionally, as shown in FIG. 10, the IF/data processing circuit 51 is provided with the counter 56, which counts based on a clock signal and generates a count value. Since the target source lines are switched by the short-circuit detection determination circuit 61 in synchronization with the clock signal, it is possible to determine the source lines currently being examined by the short-circuit detection determination circuit 61 on the basis of the count value generated by the counter 56. The data writing circuit 55 supplies the pixel data piece PD for the short-circuit detection image to the latch circuit connected to the corresponding source line on the basis of the count value generated by the counter 56. As described above, in the source driver of this embodiment, the occurrence of a short-circuit in an adjacent source line pair is detected and the result is displayed on the display panel. Specifically, in the short-circuit detection mode, the entire screen is displayed in black, and when the occurrence of the short-circuit is detected, the positions adjacent to the occurrence position are displayed in white. According to such a display, the user can visually recognize that a short-circuit has occurred between some pair of the source lines. Further, according to the source driver of this embodiment, since the positions of the source lines adjacent to the short-circuit occurrence position are displayed in white, the user can visually specify the short-circuit occurrence position in addition to recognizing the occurrence of the short-circuit.
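How the count value of the counter 56 translates into the lines written white can be sketched as below. The pairing convention (pair k covering lines 2k and 2k + 1) is an assumption made for illustration, since the text does not fix the numbering:

def lines_to_mark_white(pair_index: int) -> tuple:
    # Assumption: detection-target pair k consists of source lines
    # (2k, 2k + 1); the lines displayed white are their outer
    # neighbours, corresponding to SLX and SLY.
    sla, slb = 2 * pair_index, 2 * pair_index + 1
    return (sla - 1, slb + 1)

# Example: a short detected while the counter points at pair 3 (lines 6
# and 7) causes lines 5 and 8 to be displayed white.
print(lines_to_mark_white(3))  # (5, 8)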
Further, according to the source driver of this embodiment, since the above-described process can be performed regardless of the operation of the timing controller 12, it is possible to display the short-circuit detection result on the display panel 11 without mounting a special configuration on the timing controller 12. Additionally, the disclosure is not limited to the above-described embodiments. For example, in the above-described embodiments, a case in which the plurality of source drivers 14-1 to 14-p are provided and the voltage comparison circuit as in the above-described embodiments is provided in each of them has been described as an example. However, unlike this, it is possible to apply the configuration of the above-described embodiments to a display device provided with only one source driver. Further, in the above-described embodiments, a case in which the timing controller 12 includes the short-circuit determination unit (not shown) and determines the presence or absence of the short-circuit on the basis of the comparison result signal from the voltage comparison circuit has been described. However, the disclosure is not limited to this configuration, and the short-circuit determination unit may be provided in another portion of the source driver 14 or the display device 100. Further, in the above-described embodiments, an example of detecting the occurrence of the short-circuit by comparing the input/output voltages of the negative amplifier 32n has been described. However, unlike this, the configuration may be such that the occurrence of the short-circuit is detected by comparing the input/output voltages of the positive amplifier 32p. Further, in the above-described embodiments, a case in which gradation voltages having different polarities are supplied to adjacent channels has been described as an example. However, it is possible to apply the configuration of the above-described embodiments even when gradation voltages having the same polarity are supplied. For example, a potential difference may be provided between adjacent channels by using different data for the input data of the adjacent channels so that a current flows from one side to the other side when a short-circuit occurs. With this configuration, it is possible to detect the short-circuit by comparing the voltages. Further, in the third embodiment, an example has been described in which the black screen is displayed on the display panel 11 as the detection mode screen and the source line positions adjacent to the position in which the occurrence of the short-circuit is detected are displayed in white. However, the color selection and display method are not limited thereto. For example, a screen having a color other than black may be displayed on the display panel 11 as the detection mode screen. Further, when the occurrence of the short-circuit is detected, it may be configured to display a specific color other than white. Further, it may be configured to display a specific color at positions corresponding to other source lines instead of the source lines adjacent to the source line pair in which the occurrence of the short-circuit is detected. Further, when the occurrence of the short-circuit is detected, a specific color may be displayed on the entire screen (that is, on all source lines other than the position in which the short-circuit occurs).
According to such a method, for example, when the user needs to be aware of the occurrence of the short-circuit but does not need to specify the position of the short-circuit, it is possible to indicate that the short-circuit has occurred in a more understandable manner. Further, in the third embodiment, a case in which the short-circuit detection determination circuit 54 supplies the comparison result signal CRS to the data writing circuit 55 has been described, but in addition to this configuration, the comparison result signal CRS may also be supplied to the timing controller 12. According to such a configuration, it is possible to perform the same short-circuit determination as in the first embodiment in the timing controller 12 in addition to displaying the short-circuit detection result on the display panel 11. Further, in the third embodiment, a case in which the occurrence of the short-circuit is detected by the same method as in the first embodiment has been described as an example. However, unlike this, the occurrence of the short-circuit may be detected by the method of the second embodiment. | 55,531 |
11862071 | BEST MODE FOR DISCLOSURE A pixel according to an embodiment of the present disclosure includes a luminous element and a pixel circuit connected to the luminous element, wherein the pixel circuit includes a first pixel circuit configured to control light-emission and non-emission of the luminous element in response to a control signal applied in each of a plurality of subframes constituting a frame during a light-emitting period, and a second pixel circuit storing a bit value of image data in a data writing period and generating the control signal based on the bit value and a clock signal in the light-emitting period. MODE FOR DISCLOSURE Since the present disclosure may be subject to various transformations and have various embodiments, specific embodiments will be illustrated in the diagrams and described in detail in the detailed description. The effects and features of the present disclosure, and a method of achieving them, will be clarified with reference to the embodiments described later in detail together with the diagrams. However, the present disclosure is not limited to the embodiments disclosed below and may be implemented in various forms. Hereinafter, embodiments of the present disclosure will be described in detail with reference to the attached diagrams; when describing with reference to the diagrams, the same or corresponding constituent elements are assigned the same reference symbol, and redundant descriptions thereof will be omitted. In the following embodiments, terms such as first and second are used for distinguishing one constituent element from other constituent elements. These constituent elements should not be limited by these terms. In addition, in the following embodiments, expressions in the singular include plural expressions unless the context clearly indicates otherwise. In the following embodiments, the connection between X and Y may include a case where X and Y are electrically connected, a case where X and Y are functionally connected, and a case where X and Y are directly connected. Here, X and Y may be objects (for example, devices, elements, circuits, wirings, electrodes, terminals, conductive films, layers, etc.). Therefore, the connection is not limited to a certain connection relationship, for example, a connection relationship indicated in a diagram or the detailed description, and may include connection relationships other than those indicated in a diagram or the detailed description. The case where X and Y are electrically connected may include, for example, a case where at least one element that enables the electrical connection of X and Y (e.g., a switch, a transistor, a capacitor, an inductor, a resistance element, a diode, etc.) is connected between X and Y. The case where X and Y are functionally connected may include a case where at least one circuit that enables a functional connection of X and Y, as in a case where the signal output from X is transmitted to Y (e.g., a logic circuit (an OR gate, an inverter, etc.), a signal conversion circuit (an AD conversion circuit, a gamma correction circuit, etc.), a potential level conversion circuit (a level shifter circuit, etc.), a current supply circuit, an amplification circuit (a circuit that may increase signal amplitude or current amount, etc.), a signal generation circuit, or a memory circuit (a memory, etc.)), is connected between X and Y.
In the following embodiments, "ON" used in connection with the element state may refer to an activated state of the element, and "OFF" may refer to an inactive state of the element. "On" used in connection with a signal received by the element may refer to a signal that activates the element, and "off" may refer to a signal that deactivates the element. An element may be activated by a high voltage or a low voltage. For example, a P-type transistor is activated by a low voltage, and an N-type transistor is activated by a high voltage. Therefore, it should be understood that the "on" voltages for the P-type transistor and the N-type transistor are opposite (low vs. high) voltage levels. In the following embodiments, terms such as include or have mean that the features or elements described in the specification are present, and do not preclude the possibility that one or more other features or elements may be added. FIG. 1 is a diagram schematically illustrating a manufacturing process of a display device according to an embodiment of the present disclosure. Referring to FIG. 1, the display device 30 according to an embodiment may include a luminous element array 10 and a driving circuit board 20. The luminous element array 10 may be coupled with the driving circuit board 20. The luminous element array 10 may include a plurality of luminous elements. A luminous element may be a light-emitting diode (LED). At least one luminous element array 10 may be manufactured by growing a plurality of LEDs on a semiconductor wafer (SW). Accordingly, the display device 30 may be manufactured by coupling the luminous element array 10 with the driving circuit board 20, without the need to transfer the LEDs individually to the driving circuit board 20. A pixel circuit corresponding to each LED of the luminous element array 10 may be arranged on the driving circuit board 20. An LED of the luminous element array 10 and a pixel circuit of the driving circuit board 20 may be electrically connected to form a pixel PX. FIGS. 2 and 3 are diagrams schematically illustrating a display device 30 according to an embodiment of the present disclosure. Referring to FIGS. 2 and 3, the display device 30 may include a pixel unit 110 and a driving unit 120. The pixel unit 110 may display an image by using an n-bit digital image signal capable of displaying 1 to 2^n gray scales. The pixel unit 110 may include a plurality of pixels PX arranged in a certain pattern, for example, a matrix-type pattern or a zigzag-type pattern. The pixel PX emits light of a single color, and may emit, for example, light of red, blue, green, or white. The pixel PX may emit light of colors other than red, blue, green, and white. The pixel PX may include a luminous element. The luminous element may be a self-luminous element. For example, the luminous element may be an LED. The luminous element may be an LED having a micro to nano size. The luminous element may emit light having a single peak wavelength or may emit light having a plurality of peak wavelengths. The pixel PX may further include a pixel circuit connected to the luminous element. The pixel circuit may include at least one thin-film transistor and at least one capacitor. The pixel circuit may be implemented by a semiconductor stack structure on a substrate. The driving unit 120 may drive and control the pixel unit 110. The driving unit 120 may include a control unit 121, a gamma setting unit 123, a data driving unit 125, a current supply unit 127, and a clock generator 129.
The control unit 121 may receive image data of a frame from an external device (for example, a graphic controller), extract gradations for each pixel PX, and convert the extracted gradations into digital data having a preset number of bits. The control unit 121 receives a correction value from the gamma setting unit 123 and performs gamma correction of input image data DATA1 using the correction value, thereby generating correction image data DATA2. The control unit 121 may output the correction image data DATA2 to the data driving unit 125. The control unit 121 may output the most significant bit MSB to the least significant bit LSB of the correction image data DATA2 to the shift register circuit of the data driving unit 125 in a certain order. The gamma setting unit 123 may set a gamma value using a gamma curve, set a correction value of the image data according to the set gamma value, and output the set correction value to the control unit 121. The gamma setting unit 123 may be provided as a circuit separate from the control unit 121, or may be provided so as to be included in the control unit 121. The data driving unit 125 may transfer, to each pixel PX of the pixel unit 110, the correction image data DATA2 from the control unit 121. The data driving unit 125 may provide a bit value included in the correction image data DATA2 to each pixel PX for every frame. The bit value may have one of a first logic level and a second logic level. The first logic level may be a high level and the second logic level may be a low level. Alternatively, the first logic level may be a low level and the second logic level may be a high level. One frame may include a plurality of subframes. When the display device 30 displays n-bit image data, the frame may include n subframes (for example, 8 subframes for 8-bit image data). The lengths of the subframes may be different from one another. For example, the length of the subframe corresponding to the most significant bit MSB of the correction image data DATA2 may be set to be the longest, and the length of the subframe corresponding to the least significant bit LSB may be set to be the shortest. The order of the most significant bit MSB to the least significant bit LSB of the image data DATA2 may correspond to the order of the first subframe to the n-th subframe, respectively. The order of expression of the subframes may be set differently depending on the designer. The data driving unit 125 may include a line buffer and a shift register circuit. The line buffer may be one line buffer or two line buffers. The data driving unit 125 may provide the n-bit image data to each pixel in a line unit (a row unit). The current supply unit 127 may generate and supply the driving current of each pixel PX. The configuration of the current supply unit 127 will be described later with reference to FIG. 4. The current supply unit 127 may be included in the pixel PX, specifically in the pixel circuit. The clock generator 129 may generate a clock signal for every subframe during a single frame and output the generated clock signals to the pixels PX. The length of each clock signal may be the same as the length of the corresponding subframe. The clock generator 129 may sequentially supply a clock signal to the clock line CL for every subframe. The clock generator 129 may generate the clock signals according to a preset subframe order. For example, when the order of expression of four subframes is 1-2-3-4, the clock generator 129 may sequentially output a first clock signal to a fourth clock signal in the order of the first subframe to the fourth subframe.
When the output order of the four subframes is 1-3-2-4, the clock generator 129 may output the clock signals in the order of the first clock signal, the third clock signal, the second clock signal, and the fourth clock signal, corresponding to the order of the first subframe, the third subframe, the second subframe, and the fourth subframe. Each component of the driving unit 120 may be formed as a separate integrated circuit chip or a single integrated circuit chip, and may be mounted directly on a substrate on which the pixel unit 110 is formed, mounted on a flexible printed circuit film, attached in the form of a TCP (tape carrier package) on a substrate, or formed directly on the substrate. In one embodiment, the control unit 121, the gamma setting unit 123, and the data driving unit 125 may be connected to the pixel unit 110 in the form of an integrated circuit chip, and the current supply unit 127 and the clock generator 129 may be formed directly on the substrate. In one embodiment, the pixel unit 110 may include an array of pixels, and the array may form rows and columns. In this embodiment, a row controller may be connected to each of the rows and provide a clock signal to pixels in at least one of the rows in common. In this embodiment, a column controller may be connected to each of the columns and provide an image data signal to pixels in at least one of the columns in common. In this embodiment, the control unit 121 may receive image data of a frame from an external device, generate correction image data based on the received image data, and output the correction image data to the column controller. In this embodiment, the control unit 121 may output the most significant bit MSB to the least significant bit LSB of the correction image data in a preset order to the column controller. In one embodiment, the display device 30 may further include a parallel-to-serial converter. The parallel-to-serial converter is configured to convert the n clock signals generated in parallel by the clock generator 129 for each bit (e.g., MSB to LSB) into a serial clock signal. The parallel-to-serial converter may transfer the serial clock signal to the pixel unit 110. The parallel-to-serial converter may be included in the same component as the second pixel circuit 50 of the pixel PX or may be included as a separate component among the driving circuits of the pixel PX. Also, the parallel-to-serial converter may be included in the clock generator 129. FIG. 4 is a circuit diagram illustrating a current supply unit according to an embodiment of the present disclosure. Referring to FIG. 4, the current supply unit 127 may include a first transistor 51, a second transistor 53, an operational amplifier 55, and a variable resistor 57. The first transistor 51 has a gate connected to the pixel PX, a first terminal connected to a power voltage VDD, and a second terminal connected to the gate and a first terminal of the second transistor 53. The second transistor 53 has a gate connected to an output terminal of the operational amplifier 55, the first terminal connected to the second terminal of the first transistor 51, and a second terminal connected to a second input terminal (−) of the operational amplifier 55. A first input terminal (+) of the operational amplifier 55 is connected to a reference voltage Vref, and the second input terminal (−) is connected to the variable resistor 57. The output terminal of the operational amplifier 55 is connected to the gate of the second transistor 53.
When the reference voltage Vref is applied to the first input terminal (+), the second transistor 53 may be turned on or off according to the voltage at the output terminal, owing to the voltage difference among the first input terminal (+), the second input terminal (−), and the output terminal. A resistance value of the variable resistor 57 may be determined according to the control signal SC from the control unit 121. Depending on the resistance value of the variable resistor 57, the voltage of the output terminal of the operational amplifier 55 may be changed, and the current Iref flowing from the power voltage VDD along the turned-on first transistor 51 and second transistor 53 may be determined. The current supply unit 127 may supply a driving current corresponding to the current Iref to the pixel PX by configuring a current mirror together with a transistor in the pixel PX. The driving current may determine a total luminance (brightness) of the pixel unit 110. In the above-described embodiment, the current supply unit 127 includes the first transistor 51 implemented as a P-type transistor and the second transistor 53 implemented as an N-type transistor, but the embodiment of the present disclosure is not limited thereto. In one or more embodiments, the first transistor 51 and the second transistor 53 may be implemented as different types of transistors, and an operational amplifier corresponding thereto may be configured to form the current supply unit 127. FIG. 5 is a circuit diagram illustrating a pixel PX according to an embodiment of the present disclosure. Referring to FIG. 5, the pixel PX may include a luminous element ED and a pixel circuit including a first pixel circuit 40 and a second pixel circuit 50 connected thereto. The first pixel circuit 40 may be a high voltage driving circuit, and the second pixel circuit 50 may be a low voltage driving circuit. The second pixel circuit 50 may be implemented as a plurality of logic circuits. The luminous element ED may selectively emit light in every subframe based on a bit value (logic level) of the image data provided from the data driving unit 125 during a single frame, thereby adjusting the light-emission time within the single frame to display gradation. The first pixel circuit 40 may control light-emission and non-emission of the luminous element ED in response to the control signal applied in each of the plurality of subframes during a single frame. The control signal may be a pulse width modulation (PWM) signal. The first pixel circuit 40 may include a first transistor 401, a second transistor 403, and a level shifter 405 electrically connected to the current supply unit 127. The first transistor 401 may output the driving current. The first transistor 401 includes a gate connected to the current supply unit 127, a first terminal connected to the power voltage VDD, and a second terminal connected to a first terminal of the second transistor 403. The gate of the first transistor 401 is connected to the gate of the first transistor 51 of the current supply unit 127, thereby forming a current mirror circuit with the current supply unit 127. Accordingly, as the first transistor 51 of the current supply unit 127 is turned on, the first transistor 401, which has been turned on, may supply a driving current corresponding to the current Iref formed in the current supply unit 127. The driving current may be equal to the current Iref flowing in the current supply unit 127. The second transistor 403 may transmit or block the driving current to the luminous element ED according to the PWM signal.
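Returning briefly to the current source of FIG. 4: under the usual feedback analysis of such a circuit (an assumption of this sketch, since the text states only that Iref depends on the resistance value), the operational amplifier 55 forces the resistor node to Vref, giving Iref = Vref / R:

def reference_current(v_ref: float, r_var_ohm: float) -> float:
    # Iref = Vref / R: the op-amp drives transistor 53 so that the
    # voltage across the variable resistor 57 equals Vref.
    return v_ref / r_var_ohm

# The mirror formed by transistors 51 and 401 then reproduces this Iref
# as the pixel driving current, which scales the panel luminance.
print(reference_current(1.2, 12_000.0))  # 1.2 V / 12 kOhm = 100 uA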
The second transistor 403 includes a gate connected to an output terminal of the level shifter 405, the first terminal connected to the second terminal of the first transistor 401, and a second terminal connected to the luminous element ED. The second transistor 403 may be turned on or off according to the voltage output from the level shifter 405. The light-emission time of the luminous element ED may be adjusted according to the turn-on or turn-off time of the second transistor 403. The second transistor 403 may be turned on when a gate-on level signal (a low level in the embodiment of FIG. 5) is applied to the gate, and transfers the driving current Iref output from the first transistor 401 to the luminous element ED, so that the luminous element ED may emit light. The second transistor 403 may be turned off when a gate-off level signal (a high level in the embodiment of FIG. 5) is applied to the gate, and blocks the driving current Iref output from the first transistor 401 from being transferred to the luminous element ED, so that the luminous element ED may not emit light. During a single frame, the light-emission time and the non-emission time of the luminous element ED are controlled by the turn-on time and the turn-off time of the second transistor 403, so that a color depth of the pixel unit 110 may be expressed. The level shifter 405 may be connected to an output terminal of a PWM controller 501 of the second pixel circuit 50, and may convert the voltage level of a first PWM signal output from the PWM controller 501 to generate a second PWM signal. The level shifter 405 may generate the second PWM signal by converting the first PWM signal into a gate-on level signal capable of turning on the second transistor 403 and a gate-off level signal capable of turning off the second transistor 403. A pulse voltage level of the second PWM signal output by the level shifter 405 may be higher than a pulse voltage level of the first PWM signal, and the level shifter 405 may include a booster circuit that boosts an input voltage. The level shifter 405 may be implemented as a plurality of transistors. The turn-on time and turn-off time of the second transistor 403 during a single frame may be determined according to the pulse width of the first PWM signal. The second pixel circuit 50 may store a bit value of the image data applied from the data driving unit 125 during a data writing period for every frame, and generate the first PWM signal based on the bit value and a clock signal during the light-emitting period. The second pixel circuit 50 may include the PWM controller 501 and a memory 503. The PWM controller 501 may generate the first PWM signal based on a clock signal CK input from the clock generator 129 and a bit value of the image data read from the memory 503 during the light-emitting period. When a clock signal in a subframe is input from the clock generator 129, the PWM controller 501 may read the corresponding image data bit value from the memory 503 to generate the first PWM signal. The PWM controller 501 may control the pulse width of the first PWM signal based on the bit value of the image data in the subframe and the signal width of the clock signal. For example, when the bit value of the image data is 1, the pulse output of the PWM signal may be turned on for the signal width of the clock signal, and when the bit value of the image data is 0, the pulse output of the PWM signal may be turned off for the signal width of the clock signal.
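Summing these per-subframe pulse widths gives the total emission time over one frame. The sketch below assumes the binary-weighted subframe widths T/2^k used in the FIG. 7 example discussed below, with bit 0 denoting the MSB; it is an illustration, not the disclosed logic circuit:

def emission_time(bits, frame_unit_t: float) -> float:
    # Bit k (0 = MSB) contributes a pulse of width T / 2**k when it is 1
    # and contributes nothing when it is 0.
    return sum(b * frame_unit_t / 2**k for k, b in enumerate(bits))

# 3-bit example: data 101 emits for T + T/4 = 1.25 T of on-time.
print(emission_time([1, 0, 1], 1.0))  # 1.25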
That is, the on time of the pulse output of the PWM signal and the off time of the pulse output may be determined by the signal width (signal length) of the clock signal. The PWM controller 501 may include at least one logic circuit (for example, an OR gate circuit, etc.) implemented as at least one transistor. In synchronization with a frame start signal, the memory 503 may receive and store in advance the n-bit correction image data DATA2 applied through a data line DL from the data driving unit 125 during the data writing period. In the case of a still image, the image data previously stored in the memory 503 before an image update or refresh may be used for continuous image display over a plurality of frames. The bit values (logic levels) from the most significant bit MSB to the least significant bit LSB of the n-bit correction image data DATA2 may be input from the data driving unit 125 to the memory 503 in a certain order. The memory 503 may store at least 1-bit data. In one embodiment, the memory 503 may be an n-bit memory. In the memory 503, the bit values from the most significant bit MSB to the least significant bit LSB of the correction image data DATA2 may be recorded during the data writing period of the frame. In another embodiment, the memory 503 may be implemented as a memory of fewer than n bits depending on the driving frequency. The memory 503 may be implemented as at least one transistor. The memory 503 may be implemented as a random access memory (RAM), for example, SRAM or DRAM. In the embodiment of FIG. 5, the current supply unit 127 is connected to one pixel PX, but the current supply unit 127 may be shared by a plurality of pixels PX. For example, as illustrated in FIG. 6, the first transistor 51 of the current supply unit 127 may be electrically connected to the first transistor 401 of each pixel PX of the pixel unit 110 to form a current mirror circuit. In another embodiment, the current supply unit 127 may be provided for every row, and the current supply unit 127 of each row may be shared by a plurality of pixels PX in the same row. In the above-described embodiment, the pixel includes P-type transistors, but the embodiment of the present disclosure is not limited thereto. In one or more embodiments, the pixel may include N-type transistors, and in this case, the pixel may be driven by a signal in which the level of the signal applied to the P-type transistors is inverted. FIG. 7 is a diagram for explaining driving of a pixel according to an embodiment of the present disclosure. FIG. 7 illustrates an example of driving a pixel in the first row. Referring to FIG. 7, the pixel PX may be driven in a data-writing period ① and a light-emitting period ② during a single frame. The light-emitting period ② may be divided into a first subframe SF1 to an n-th subframe SFn. In the data-writing period ①, the bit value of the image data DATA from the data driving unit 125 may be recorded in the memory 503 in the pixel PX. In each subframe of the light-emitting period ②, a clock signal CK is applied to the PWM controller 501, and the PWM controller 501 may generate a PWM signal based on the bit value of the image data DATA recorded in the memory 503 and the clock signal CK. The lengths of time allocated to the first subframe SF1 to the n-th subframe SFn may be different from one another.
For example, a first length T/2^0 may be allocated to the first subframe SF1, a second length T/2^1 may be allocated to the second subframe SF2, a third length T/2^2 may be allocated to the third subframe SF3, and an n-th length T/2^(n−1) may be allocated to the n-th subframe SFn. The image data DATA may be represented by n bits including the most significant bit MSB and the least significant bit LSB. The order from the most significant bit MSB to the least significant bit LSB may correspond to the order from the first subframe SF1 to the n-th subframe SFn. The clock signal CK includes a first clock signal CK1 to an n-th clock signal CKn, and the first clock signal CK1 to the n-th clock signal CKn may be sequentially output in the order corresponding to the order of the first subframe SF1 to the n-th subframe SFn. The length of the clock signal CK may vary depending on the subframe. For example, the first clock signal CK1 corresponding to the first subframe SF1 allocated to the most significant bit MSB of the image data DATA may have the first length T/2^0, the second clock signal CK2 corresponding to the second subframe SF2 allocated to the next higher bit MSB-1 of the image data DATA may have the second length T/2^1, and the n-th clock signal CKn corresponding to the n-th subframe SFn allocated to the least significant bit LSB of the image data DATA may have the n-th length T/2^(n−1). For each of the first subframe SF1 to the n-th subframe SFn, the PWM controller 501 reads the corresponding bit value of the image data DATA from the memory 503, and may control the pulse width of the PWM signal based on the signal width of the clock signal CK and the bit value of the image data DATA. The PWM controller 501 may generate the PWM signal based on the clock signals CK output in the first subframe SF1 to the n-th subframe SFn and the bit values of the image data DATA. FIG. 7 illustrates an embodiment in which the image data DATA has the n bit values 101 . . . 1. The PWM controller 501 may output a pulse having a pulse width of the first length T based on the bit value 1 of the MSB of the image data DATA and the first clock signal CK1. The PWM controller 501 may turn off the pulse output for the second length T/2 based on the bit value 0 of MSB-1 of the image data DATA and the second clock signal CK2. The PWM controller 501 may output a pulse having a pulse width of the n-th length T/2^(n−1) based on the bit value 1 of the LSB of the image data DATA and the n-th clock signal CKn. The luminous element ED may emit light or may not emit light during a single frame according to the pulse output of the PWM signal. The luminous element ED may emit light for a time corresponding to the pulse width when the pulse output is turned on. The luminous element ED may not emit light as long as the pulse output is turned off. FIG. 8 is a diagram for explaining driving of a pixel according to another embodiment of the present disclosure. FIG. 8 is an example of driving a pixel in the first row. Referring to FIG. 8, the pixel PX may be driven in a data-writing period ① and a light-emitting period ② during a single frame. The light-emitting period ② may be divided into the first subframe SF1 to the n-th subframe SFn.
At this time, the order of expression of the first subframe SF1 to the n-th subframe SFn may be different from that in the embodiment of FIG. 7. FIG. 8 shows an embodiment in which the third subframe SF3 is expressed earlier than the second subframe SF2. The clock signal CK and the bit order of the image data DATA may also be determined corresponding to the order of expression of the subframes. The order of expression of the subframes may be preset or changed. FIG. 9 is a diagram for explaining driving of a pixel with a serial clock signal according to an embodiment of the present disclosure. As mentioned above, the display device 30 according to an embodiment may convert n parallel clock signals into a serial clock signal through the parallel-to-serial converter. The parallel-to-serial converter may be an element composed of a logic circuit including an OR gate. That is, when any one of the plurality of parallel clock signals input to the parallel-to-serial converter has a high level, the parallel-to-serial converter may output a serial clock signal having a high level in the corresponding time period. The serial clock signal may include information on the edges (rising edges and/or falling edges) included in each of the plurality of parallel clock signals. FIG. 9 shows an example in which a PWM signal is generated from 5-bit (odd number) data per frame. Referring to FIG. 9, during the light-emitting period of the single frame, a plurality of clock signals CK1, CK3, and CK5 may be generated by the clock generator 129 in synchronization with the 5-bit data and may be converted into a serial clock signal Serial CK by the parallel-to-serial converter. The clock generator 129 according to an embodiment of the present disclosure may generate only the clock signals corresponding to odd-numbered bits among the bits included in the image data, but is not limited thereto. Each of the plurality of clock signals CK1, CK3, and CK5 may be applied at the same time as the time allocated to the most significant bit MSB, MSB-2, and LSB bits of the 5-bit data. The serial clock signal Serial CK may be applied to the PWM controller 501, and the PWM controller 501 may generate a PWM signal based on the bit values of the 5-bit data written in the memory 503 and the serial clock signal Serial CK. The PWM controller 501 may read the bit values of the 5-bit data from the memory 503 and control the pulse width of the PWM signal based on the time intervals between edges and the bit values of the bit data. Specifically, the PWM controller 501 according to an embodiment of the present disclosure may distinguish the bit values of the 5-bit data based on the edges of the serial clock signal Serial CK. That is, reading the bit value (1) corresponding to the most significant bit MSB is performed based on the first edge E1, reading the bit value (0) corresponding to MSB-1 is performed based on the second edge E2, reading the bit value (0) corresponding to MSB-2 is performed based on the third edge E3, reading the bit value (1) corresponding to MSB-3 is performed based on the fourth edge E4, and reading the bit value (1) corresponding to the least significant bit LSB is performed based on the fifth edge E5. In this case, the first edge E1, the third edge E3, and the fifth edge E5 may be rising edges, and the second edge E2 and the fourth edge E4 may be falling edges. According to the above-described embodiment, the PWM controller 501 may read the bit value of an odd-numbered bit of the bit data when a rising edge is input and read the bit value of an even-numbered bit of the bit data when a falling edge is input.
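The OR-based conversion and the edge-triggered reading described above can be modeled compactly. In the sketch below, the waveforms are sampled with one sample per bit slot plus a leading idle sample; the sample arrays are illustrative and do not reproduce the actual timing of FIG. 9:

def to_serial_clock(parallel):
    # Parallel-to-serial conversion modeled as a logical OR of the
    # sampled clock waveforms (the converter is an OR-gate logic circuit).
    return [int(any(col)) for col in zip(*parallel)]

def read_bits_on_edges(serial, data):
    # Each edge of the serial clock, rising or falling, triggers the
    # reading of the next stored bit, as in the FIG. 9 example.
    out, i = [], 0
    for prev, cur in zip(serial, serial[1:]):
        if prev != cur and i < len(data):
            out.append(data[i])
            i += 1
    return out

# CK1, CK3 and CK5 are high in the MSB, MSB-2 and LSB slots; the gaps in
# the even-numbered slots give the serial clock an edge at every slot.
ck1 = [0, 1, 0, 0, 0, 0]
ck3 = [0, 0, 0, 1, 0, 0]
ck5 = [0, 0, 0, 0, 0, 1]
serial = to_serial_clock([ck1, ck3, ck5])           # [0, 1, 0, 1, 0, 1]
print(read_bits_on_edges(serial, [1, 0, 0, 1, 1]))  # [1, 0, 0, 1, 1]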
FIG. 10 is a diagram for explaining driving of a pixel with a serial clock signal according to another embodiment of the present disclosure. FIG. 10 shows an example in which a PWM signal is generated from 6-bit (even number) data per frame. Referring to FIG. 10, similarly, during the light-emitting period of the single frame, a plurality of clock signals CK1, CK3, and CK5 may be generated by the clock generator 129 in synchronization with the 6-bit data and may be converted into a serial clock signal Serial CK by the parallel-to-serial converter. Each of the plurality of clock signals CK1, CK3, and CK5 may be applied at the same time as the time allocated to the most significant bit MSB, MSB-2, and MSB-4 bits of the 6-bit data. The serial clock signal Serial CK may be applied to the PWM controller 501, and the PWM controller 501 may generate a PWM signal based on the bit values of the 6-bit data written in the memory 503 and the serial clock signal Serial CK. The PWM controller 501 may read the bit values of the 6-bit data from the memory 503 and control the pulse width of the PWM signal based on the time intervals between edges and the bit values of the bit data. Specifically, the PWM controller 501 according to an embodiment of the present disclosure may distinguish the bit values of the 6-bit data based on the edges of the serial clock signal Serial CK. That is, reading the bit value (1) corresponding to the most significant bit MSB is performed based on the first edge E1, reading the bit value (0) corresponding to MSB-1 is performed based on the second edge E2, reading the bit value (0) corresponding to MSB-2 is performed based on the third edge E3, reading the bit value (1) corresponding to MSB-3 is performed based on the fourth edge E4, and reading the bit value (1) corresponding to LSB+1 is performed based on the fifth edge E5. In this case, the first edge E1, the third edge E3, and the fifth edge E5 may be rising edges, and the second edge E2 and the fourth edge E4 may be falling edges. On the other hand, since the bit value corresponding to the least significant bit LSB is read based on the sixth edge E6, the PWM controller 501 generates the PWM signal with an ON time to which a predetermined time is added beyond the serial clock signal Serial CK. In this case, the predetermined time may be at least a time exceeding T/2^6, which is the time allocated to the LSB. FIG. 9 and FIG. 10 are provided as examples, and any suitable manner of generating a PWM signal based on a serial clock signal and controlling the pulse width of the PWM signal may be applied. FIG. 11 is a diagram for explaining driving of a pixel with a serial clock according to another embodiment of the present disclosure. FIG. 11 shows an example in which the PWM controller sets only rising edges as the reference for reading the bit values of the bit data. During the light-emitting period of the single frame, a plurality of clock signals CK1 to CK5 may be generated by the clock generator 129 in synchronization with 5-bit data and may be converted into a serial clock signal Serial CK by the parallel-to-serial converter. The PWM controller according to an embodiment of the present disclosure may read the bit value corresponding to the most significant bit MSB based on the first edge E1, the bit value corresponding to MSB-1 based on the second edge E2, the bit value corresponding to MSB-2 based on the third edge E3, the bit value corresponding to MSB-3 based on the fourth edge E4, and the bit value corresponding to the LSB based on the fifth edge E5.
At this time, all of the first edge E1 to the fifth edge E5 may be rising edges. Meanwhile, in the present embodiment, since only the rising edge serves as the reference for reading a bit value, the signal width of each clock signal may be independent of the PWM generation. Accordingly, the signal widths of the plurality of clock signals CK1 to CK5 may be chosen freely as long as the clock signals do not overlap one another. For example, the clock signals CK1 to CK5 may be generated in the form of impulses generating only rising edges. Through this embodiment, the power consumption generated on the clock line CL can be reduced. FIG. 12 is a circuit diagram illustrating a pixel PX driving apparatus according to an embodiment of the present disclosure. Referring to FIG. 12, the pixel PX driving apparatus may include a pixel circuit, including a first pixel circuit 1210 connected to a luminous element ED (also referred to as an emitter) and a second pixel circuit 1220, and a driving circuit 1230 connected to the pixel circuit. Although only one pixel circuit is illustrated in FIG. 12 for simplification of the drawing, a plurality of pixel circuits may be connected in parallel to a common power supply (e.g., the driving circuit). The first pixel circuit 1210 may be a high voltage driving circuit and the second pixel circuit 1220 may be a low voltage driving circuit. The second pixel circuit 1220 may include a plurality of logic circuits. The luminous element ED may selectively emit light in every subframe based on a bit value (logic level) of the image data provided from the data driving unit 125 during a single frame, thereby adjusting the light-emission time within the single frame to display gradation. The first pixel circuit 1210 may control light-emission and non-emission of the luminous element ED in response to the control signal applied in each of the plurality of subframes during a single frame. The control signal may be a pulse width modulation (PWM) signal. The first pixel circuit 1210 may include a first transistor 1211, a second transistor 1212, a third transistor 1213, and a level shifter 1214. Hereinafter, an electrical connection connecting a pixel positive power VDD_P and a pixel negative power GND_P is referred to as a 'pixel line'. The first transistor 1211 may be connected in series on the pixel line and may transmit or block a driving current to the luminous element ED in response to the control signal. The first transistor 1211 may transmit or block the driving current to the luminous element ED in response to the PWM signal. A gate of the first transistor 1211 may be connected to an output terminal of the level shifter 1214, a first terminal of the first transistor 1211 may be connected to the second terminal of the second transistor 1212, and a second terminal of the first transistor 1211 may be connected to the luminous element ED. The first transistor 1211 may be turned on or off according to the voltage output from the level shifter 1214. The light-emission time of the luminous element ED may be adjusted according to the turn-on or turn-off time of the first transistor 1211. The first transistor 1211 may be turned on when a gate-on level signal is applied to the gate and transfers the driving current output from the second transistor 1212 to the luminous element ED, so that the luminous element ED may emit light.
The first transistor 1211 may be turned off when a gate-off level signal is applied to the gate and may block the driving current output from the second transistor 1212 from reaching the luminous element ED, so that the luminous element ED may not emit light. During a single frame, the light-emission time and the non-emission time of the luminous element ED are controlled by the turn-on time and the turn-off time of the first transistor 1211, so that a color depth may be expressed.

The second transistor 1212 may output the driving current. A gate of the second transistor 1212 may be connected to the driving circuit 1230, the first terminal of the second transistor 1212 may be connected to the pixel positive power VDD_P, and the second terminal of the second transistor 1212 may be connected to the first terminal of the first transistor 1211. The gate of the second transistor 1212 may be connected to a gate of a fourth transistor 1231, thereby forming a current mirror circuit together with the driving circuit 1230. Accordingly, as the fourth transistor of the driving circuit 1230 is turned on, the second transistor 1212, which has been turned on, may supply a driving current corresponding to the current formed in the driving circuit 1230. The driving current may be equal to the current flowing in the driving circuit 1230. The third transistor 1213 may be connected in series on the pixel line and may be connected to a source terminal of the second transistor 1212. The level shifter 1214 may be connected to the second pixel circuit 1220. Specifically, the level shifter 1214 may be connected to an output terminal of the PWM controller 1222 of the second pixel circuit 1220. Since the level shifter 1214 has been described in detail above with reference to FIG. 5, the detailed description thereof will not be repeated.

The second pixel circuit 1220 may store a bit value of image data applied from the data driving unit during a data writing period of every frame, and may generate the PWM signal based on the bit value and a clock signal during the light-emitting period. The second pixel circuit 1220 may include a memory 1221 and the PWM controller 1222. Since the memory 1221 and the PWM controller 1222 included in the second pixel circuit 1220 have been described in detail above with reference to FIG. 5, their detailed descriptions will be omitted.

The driving circuit 1230 may include the fourth transistor 1231, a fifth transistor 1232, and a current source, and the current source may include a sixth transistor 1233, an operational amplifier 1234, and a variable resistor 1235. Hereinafter, an electrical connection connecting a driving positive power supply VDD_D and a driving negative power supply GND_D is referred to as a 'driving line'. The current source may be connected in series on the driving line and may apply a reference current. The reference current may be set to a current sufficient to cause the luminous element to emit light. The fourth transistor 1231 may be configured to form a current mirror circuit with the second transistor 1212. The fourth transistor 1231 may be connected in series on the driving line and may be connected to the gate of the second transistor 1212. The fifth transistor 1232 may be connected in series on the driving line, may be connected to a gate of the third transistor 1213, and may be connected to a source terminal of the fourth transistor 1231.
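The current relationships described above can be sketched numerically. The following Python fragment is an editorial illustration (the ideal square-law mirror, the component values, and the function names are assumptions, not taken from the patent): the op-amp current source fixes the reference current by forcing Vref across the variable resistor, and the mirror copies that current into the pixel line scaled by the device ratio.

```python
# Editorial sketch of the driving-circuit current path described above.
# Assumptions: an ideal current mirror (pixel current scales with the
# W/L ratio of transistor 1212 to transistor 1231) and an ideal op-amp
# current source that forces Vref across the variable resistor 1235.

def reference_current(v_ref, r_var):
    """Op-amp servo: the source of transistor 1233 is held at Vref, so I = Vref / R."""
    return v_ref / r_var

def mirrored_current(i_ref, wl_pixel=1.0, wl_drive=1.0):
    """Ideal mirror: I_pixel = I_ref * (W/L of 1212) / (W/L of 1231)."""
    return i_ref * wl_pixel / wl_drive

i_ref = reference_current(v_ref=1.2, r_var=600e3)  # 2 uA; values assumed
print(mirrored_current(i_ref))                     # 2e-06 A with matched devices
print(mirrored_current(i_ref, wl_pixel=2.0))       # 4e-06 A with a 2:1 ratio
```

With matched devices the pixel driving current equals the reference current, matching the statement above that the driving current may be equal to the current flowing in the driving circuit.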
A drain terminal of the sixth transistor 1233 may be connected to a drain terminal of the fourth transistor 1231, a gate of the sixth transistor 1233 may be connected to an output terminal of the operational amplifier 1234, and a source terminal of the sixth transistor 1233 may be connected to a second input terminal (−) of the operational amplifier 1234. A first input terminal (+) of the operational amplifier 1234 may be connected to a reference voltage Vref, and the second input terminal (−) may be connected to the variable resistor 1235. As illustrated in FIG. 12, the second transistor and the fourth transistor may be implemented as P-type MOSFETs, and the third transistor and the fifth transistor may be implemented as N-type MOSFETs. The gate of the fourth transistor and the drain terminal of the fourth transistor may be short-circuited. The pixel PX driving apparatus according to the embodiment may further include a buffer gate BUF connected between the gate of the second transistor and the fourth transistor. In the pixel PX driving apparatus according to the embodiment, even when a voltage drop (IR drop) occurs because of the common impedance resulting from the parallel connection of a plurality of pixels, the Vgs of the second transistor is not affected, and thus the influence on the output current flowing in the pixel line can be minimized.

An embodiment of the present disclosure may be implemented as a micro LED display device. Recently, as the need for a micro display device as a new display device increases, the development of micro LED on silicon or AMOLED on silicon, in which LEDs are formed on silicon, is on the rise, and the demand for power consumption reduction in portable display devices is expected to increase. In the embodiments of the present disclosure, a memory is provided in a pixel to enable current driving, and in the case of a still image, the driving unit only needs to transmit a simple driving pulse to the pixel unit, and thus power consumption may be improved. In the embodiments of the present disclosure, a target gamma value may be set through digital processing, and luminance may be easily adjusted using the current mirror circuit while the set gamma value is maintained. In the embodiments of the present disclosure, a high-resolution display device can be implemented with a circuit configuration mainly based on low voltage transistors.

In the present specification, the present disclosure has been described through limited embodiments, but various embodiments are possible within the scope of the present disclosure. Also, although not explicitly described, equivalent means are likewise encompassed by the present disclosure. Therefore, the true scope of protection of the present disclosure should be determined by the following claims. | 44,104 |
11862072 | DETAILED DESCRIPTION In the specification, the expression that a first component (or region, layer, part, etc.) is “on”, “connected with”, or “coupled with” a second component means that the first component is directly on, connected with, or coupled with the second component or means that a third component is interposed therebetween. Like reference numerals refer to like components. Also, in the drawings, the thicknesses, ratios, and dimensions of components are exaggerated for effective description of the technical contents. The term “and/or” includes one or more combinations of the associated listed items. The terms “first”, “second”, etc. are used to describe various components, but the components are not limited by the terms. The terms are used only to differentiate one component from another component. For example, without departing from the scope and spirit of the present disclosure, a first component may be referred to as a second component, and similarly, the second component may be referred to as the first component. The articles “a,” “an,” and “the” are singular in that they have a single referent, but the use of the singular form in the specification should not preclude the presence of more than one referent. Also, the terms “under”, “beneath”, “on”, “above”, etc. are used to describe a relationship between components illustrated in a drawing. The terms are relative and are described with reference to a direction indicated in the drawing. It will be understood that the terms “include”, “comprise”, “have”, etc. specify the presence of features, numbers, steps, operations, elements, or components described in the specification, or a combination thereof, without precluding the presence or possibility of one or more other features, numbers, steps, operations, elements, or components, or a combination thereof. Unless otherwise defined, all terms (including technical and scientific terms) used in this specification have the same meaning as commonly understood by those skilled in the art to which the present disclosure belongs. Furthermore, terms such as those defined in commonly used dictionaries should be interpreted as having a meaning consistent with their meaning in the context of the related technology, and should not be interpreted in ideal or overly formal senses unless explicitly defined herein. Hereinafter, embodiments of the present disclosure will be described with reference to the accompanying drawings.

FIG. 1 is a block diagram of a display device according to an embodiment of the present disclosure. Referring to FIG. 1, a display device DD includes a display panel DP, a driving controller 100, a data driving circuit 200, and a voltage generator 300. The driving controller 100 receives an input image signal RGB and a control signal CTRL. The driving controller 100 generates an output image signal DATA by converting a data format of the input image signal RGB so as to be suitable for the interface specification of the data driving circuit 200. The driving controller 100 outputs a scan control signal SCS, a data control signal DCS, and an emission driving control signal ECS. The data driving circuit 200 receives the data control signal DCS and the output image signal DATA from the driving controller 100. The data driving circuit 200 converts the output image signal DATA into data signals and then outputs the data signals to a plurality of data lines DL1 to DLm to be described later.
The data signals refer to analog voltages corresponding to a grayscale value of the output image signal DATA. In an embodiment, the data driving circuit 200 may output one of a data signal corresponding to the output image signal DATA and a bias signal corresponding to a predetermined voltage level to the data lines DL1 to DLm. The voltage generator 300 generates voltages necessary to operate the display panel DP. In an embodiment, the voltage generator 300 generates a first driving voltage ELVDD (or a first voltage), a second driving voltage ELVSS (or a second voltage), a first initialization voltage VINT1 (or a third voltage), and a second initialization voltage VINT2 (or a fourth voltage). In an embodiment, the first initialization voltage VINT1 and the second initialization voltage VINT2 may have voltage levels different from each other. In an embodiment, the first initialization voltage VINT1 may have the same voltage level as the second initialization voltage VINT2.

The display panel DP includes scan lines GIL1 to GILn, GCL1 to GCLn, GWL1 to GWLn, and GCCL1 to GCCLn, emission control lines EML11 to EML1n and EML21 to EML2n, the data lines DL1 to DLm, and pixels PX. The display panel DP may further include a scan driving circuit SD and an emission driving circuit EDC. In an embodiment, the scan driving circuit SD may be arranged on a first side of the display panel DP. The scan lines GIL1 to GILn, GCL1 to GCLn, GWL1 to GWLn, and GCCL1 to GCCLn extend from the scan driving circuit SD in a first direction DR1. The emission driving circuit EDC is arranged on a second side of the display panel DP. The emission control lines EML11 to EML1n and EML21 to EML2n extend from the emission driving circuit EDC in a direction opposite to the first direction DR1. The scan lines GIL1 to GILn, GCL1 to GCLn, GWL1 to GWLn, and GCCL1 to GCCLn and the emission control lines EML11 to EML1n and EML21 to EML2n are arranged spaced from one another in a second direction DR2. The data lines DL1 to DLm extend from the data driving circuit 200 in a direction opposite to the second direction DR2 and are arranged spaced from one another in the first direction DR1. In the example shown in FIG. 1, the scan driving circuit SD and the emission driving circuit EDC are arranged to face each other with the pixels PX interposed therebetween, but the present disclosure is not limited thereto. For example, the scan driving circuit SD and the emission driving circuit EDC may be positioned adjacent to each other on one of the first side and the second side of the display panel DP. In an embodiment, the scan driving circuit SD and the emission driving circuit EDC may be implemented as one circuit.

The plurality of pixels PX are electrically connected to the scan lines GIL1 to GILn, GCL1 to GCLn, GWL1 to GWLn, and GCCL1 to GCCLn, the emission control lines EML11 to EML1n and EML21 to EML2n, and the data lines DL1 to DLm. Each of the plurality of pixels PX may be electrically connected to four scan lines and two emission control lines. For example, as shown in FIG. 1, a first row of pixels may be connected to the scan lines GIL1, GCL1, GWL1, and GCCL1 and the emission control lines EML11 and EML21. Also, a second row of pixels may be connected to the scan lines GIL2, GCL2, GWL2, and GCCL2 and the emission control lines EML12 and EML22. Each of the plurality of pixels PX includes a light emitting element ED (see FIG. 2) and a pixel circuit for controlling the emission of the light emitting element ED.
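The grayscale-to-voltage conversion mentioned at the start of this passage can be pictured with a minimal sketch. This is an editorial illustration only: the linear ramp, the voltage range, and the function name are assumptions; an actual data driving circuit would use its own (typically gamma-corrected) mapping.

```python
# Illustrative sketch: one plausible mapping from an 8-bit grayscale code
# to an analog data-signal voltage. Range and linearity are assumed.

def data_voltage(gray, v_min=1.0, v_max=5.0, levels=256):
    """Linear DAC model: spread `levels` grayscale codes over [v_min, v_max]."""
    if not 0 <= gray < levels:
        raise ValueError("grayscale code out of range")
    return v_min + (v_max - v_min) * gray / (levels - 1)

print(data_voltage(0))    # 1.0
print(data_voltage(128))  # ~3.008
print(data_voltage(255))  # 5.0
```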
The pixel circuit may include one or more transistors and one or more capacitors. The scan driving circuit SD and the emission driving circuit EDC may include transistors formed through the same process as the transistors of the pixel circuit. Each of the plurality of pixels PX receives the first driving voltage ELVDD, the second driving voltage ELVSS, the first initialization voltage VINT1, and the second initialization voltage VINT2 from the voltage generator 300. The scan driving circuit SD receives the scan control signal SCS from the driving controller 100. The scan driving circuit SD may output scan signals to the scan lines GIL1 to GILn, GCL1 to GCLn, GWL1 to GWLn, and GCCL1 to GCCLn in response to the scan control signal SCS. The emission driving circuit EDC may output emission control signals to the emission control lines EML11 to EML1n and EML21 to EML2n in response to the emission driving control signal ECS from the driving controller 100. The driving controller 100 according to an embodiment of the present disclosure may determine an operating frequency and may control the data driving circuit 200, the scan driving circuit SD, and the emission driving circuit EDC depending on the determined operating frequency.

FIG. 2 is a circuit diagram of a pixel according to an embodiment of the present disclosure. FIG. 2 illustrates an equivalent circuit diagram of a pixel PXij connected to the i-th data line DLi among the data lines DL1 to DLm, the j-th scan lines GILj, GCLj, GWLj, and GCCLj among the scan lines GIL1 to GILn, GCL1 to GCLn, GWL1 to GWLn, and GCCL1 to GCCLn, and the j-th emission control lines EML1j and EML2j among the emission control lines EML11 to EML1n and EML21 to EML2n, which are illustrated in FIG. 1. Each of the plurality of pixels PX shown in FIG. 1 may have the same circuit configuration as the equivalent circuit diagram of the pixel PXij shown in FIG. 2. Referring to FIG. 2, a pixel PXij of a display device according to an embodiment includes at least one light emitting element ED and a pixel circuit. The pixel circuit may include first to ninth transistors T1, T2, T3, T4, T5, T6, T7, T8, and T9 and first to third capacitors Cst, Chold, and Cb. In an embodiment, the light emitting element ED may be a light emitting diode or a nano emitting diode. In an embodiment, some of the first to ninth transistors T1 to T9 are P-type transistors having LTPS (low-temperature polycrystalline silicon) as a semiconductor layer. The other(s) may be N-type transistors having an oxide semiconductor as a semiconductor layer. In an embodiment, each of the first to fourth and sixth to eighth transistors T1 to T4 and T6 to T8 is a P-type transistor, and the fifth transistor T5 and the ninth transistor T9 are N-type transistors. Moreover, the circuit configuration of the pixel PXij according to an embodiment of the present disclosure is not limited to the embodiment of FIG. 2. The pixel PXij illustrated in FIG. 2 is only an example, and the circuit configuration of the pixel PXij may be altered as required.

The scan lines GILj, GCLj, GWLj, and GCCLj may transmit the scan signals GIj, GCj, GWj, and GCCj, respectively. The emission control lines EML1j and EML2j may transmit the emission control signals EM1j and EM2j, respectively. The data line DLi transmits one of the data signal Di and a bias signal Bi. The data signal Di may have a voltage level corresponding to the input image signal RGB that is input to the display device DD (see FIG. 1).
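The indexing convention described above (four scan lines and two emission control lines per row j, one data line per column i) can be summarized in a small helper. This is an editorial illustration; the function and its dictionary layout are assumptions, not part of the patent.

```python
# Illustrative helper: build the names of the signal lines that drive
# pixel PXij, following the row/column indexing described above.

def pixel_lines(i, j):
    return {
        "scan":     [f"GIL{j}", f"GCL{j}", f"GWL{j}", f"GCCL{j}"],
        "emission": [f"EML1{j}", f"EML2{j}"],
        "data":     f"DL{i}",
    }

print(pixel_lines(3, 7))
# {'scan': ['GIL7', 'GCL7', 'GWL7', 'GCCL7'],
#  'emission': ['EML17', 'EML27'], 'data': 'DL3'}
```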
The first to fourth voltage lines VL1, VL2, VL3, and VL4 may deliver the first driving voltage ELVDD, the second driving voltage ELVSS, the first initialization voltage VINT1, and the second initialization voltage VINT2, respectively. The third voltage line VL3 and the fourth voltage line VL4 may be referred to as a “first initialization voltage line” and a “second initialization voltage line”, respectively.

The first transistor T1 includes a first electrode electrically connected to the first voltage line VL1 via the eighth transistor T8, a second electrode electrically connected to an anode of the light emitting element ED via the sixth transistor T6, and a gate electrode connected to the first node N1. The second transistor T2 includes a first electrode connected to the data line DLi, a second electrode connected to the first electrode of the first transistor T1, and a gate electrode connected to the scan line GWLj. The second transistor T2 may be turned on in response to the scan signal GWj received through the scan line GWLj so as to deliver one of the data signal Di and the bias signal Bi from the data line DLi to the first electrode of the first transistor T1. The third transistor T3 includes a first electrode connected to the second electrode of the first transistor T1, a second electrode connected to the second node N2, and a gate electrode connected to the scan line GCLj. The third transistor T3 may be turned on in response to the scan signal GCj received through the scan line GCLj so as to electrically connect the second electrode of the first transistor T1 and the second node N2. The fourth transistor T4 includes a first electrode connected to the second node N2, a second electrode connected to the third voltage line VL3, through which the first initialization voltage VINT1 is delivered, and a gate electrode connected to the scan line GILj. The fourth transistor T4 is turned on in response to the scan signal GIj received through the scan line GILj so as to deliver the first initialization voltage VINT1 to the second node N2. The first initialization voltage VINT1 may be provided to a gate electrode of the first transistor T1 through the fifth transistor T5. The first initialization voltage VINT1 may be a voltage for initializing the gate electrode of the first transistor T1. The fifth transistor T5 includes a first electrode connected to the first node N1, a second electrode connected to the second node N2, and a gate electrode connected to the scan line GCCLj. The fifth transistor T5 is turned on in response to the scan signal GCCj supplied from the scan line GCCLj so as to electrically connect the second node N2 and the first node N1. The sixth transistor T6 includes a first electrode connected to the second electrode of the first transistor T1, a second electrode connected to the anode of the light emitting element ED, and a gate electrode connected to the emission control line EML2j. The sixth transistor T6 may be turned on in response to the emission control signal EM2j received through the emission control line EML2j so as to electrically connect the second electrode of the first transistor T1 to the light emitting element ED. The seventh transistor T7 includes a first electrode connected to the anode of the light emitting element ED, a second electrode connected to the fourth voltage line VL4, and a gate electrode connected to the scan line GILj.
The seventh transistor T7 may be turned on in response to the scan signal GIj received through the scan line GILj such that the fourth voltage line VL4 is electrically connected to the anode of the light emitting element ED. Accordingly, when the seventh transistor T7 is turned on, the anode of the light emitting element ED may be initialized to the second initialization voltage VINT2. The eighth transistor T8 includes a first electrode connected to the first voltage line VL1, a second electrode connected to the first electrode of the first transistor T1, and a gate electrode connected to the emission control line EML1j. The eighth transistor T8 is turned on in response to the emission control signal EM1j received through the emission control line EML1j so as to deliver the first driving voltage ELVDD to the first electrode of the first transistor T1. The ninth transistor T9 includes a first electrode connected to the first electrode of the first transistor T1, a second electrode connected to a third node N3, and a gate electrode connected to the scan line GCCLj. The ninth transistor T9 is turned on in response to the scan signal GCCj received through the scan line GCCLj so as to electrically connect the first electrode of the first transistor T1 and the third node N3. The first capacitor Cst is connected between the third node N3 and the first node N1. The second capacitor Chold is connected between the first voltage line VL1 and the third node N3. The third capacitor Cb is connected between the second node N2 and the scan line GCLj.

FIG. 3 is a timing diagram of scan signals and emission control signals for describing an operation of the pixel shown in FIG. 2. Referring to FIG. 3, the scan signal GIj provided to the gate electrode of the seventh transistor T7 may be the same as or different from the scan signal GIj provided to the gate electrode of the fourth transistor T4. In an embodiment, when the scan signal provided to the gate electrode of the fourth transistor T4 is a j-th scan signal GIj, the scan signal provided to the gate electrode of the seventh transistor T7 is a (j+1)-th scan signal GIj+1.

FIGS. 4A to 4D are diagrams for describing an operation of the pixel illustrated in FIG. 2. Referring to FIGS. 2, 3, and 4A to 4D, first to eighth periods P1 to P8 refer to operating states or operating periods of the pixel PXij. When, during the first to sixth periods P1 to P6, the emission control signal EM1j is at a low level and the scan signal GCCj is at a high level, the fifth transistor T5, the eighth transistor T8, and the ninth transistor T9 are turned on. Referring to FIGS. 2, 3, and 4A, when the scan signal GIj is at a low level in each of the first period P1, the third period P3, and the fifth period P5, the fourth transistor T4 and the seventh transistor T7 are turned on. Accordingly, the first initialization voltage VINT1 may be delivered to the first node N1 (i.e., the gate electrode of the first transistor T1) through the fourth transistor T4 and the fifth transistor T5. Moreover, the anode of the light emitting element ED may be initialized to the second initialization voltage VINT2 through the seventh transistor T7. The first period P1, the third period P3, and the fifth period P5 may be initialization periods for initializing the gate electrode of the first transistor T1 and the anode of the light emitting element ED. Referring to FIGS. 2, 3, and 4B, when the scan signal GCj is at a low level during each of the second period P2, the fourth period P4, and the sixth period P6, the third transistor T3 is turned on.
Accordingly, a voltage obtained by subtracting the threshold voltage (referred to as “Vth”) of the first transistor T1 from the first driving voltage ELVDD may be provided to the second node N2, that is, one end of the first capacitor Cst, through the third transistor T3. At this time, because the eighth transistor T8 and the ninth transistor T9 are turned on, the first driving voltage ELVDD is provided to the third node N3, that is, the other end of the first capacitor Cst. Accordingly, the voltage difference between opposite ends of the first capacitor Cst is the same as the threshold voltage Vth of the first transistor T1. Each of the second period P2, the fourth period P4, and the sixth period P6 may be a compensation period for compensating for the threshold voltage Vth of the first transistor T1. The pixel PXij, which alternately repeats the first period P1, the third period P3, and the fifth period P5 for initializing the gate electrode of the first transistor T1 and the anode of the light emitting element ED with the second period P2, the fourth period P4, and the sixth period P6 for compensating for the threshold voltage Vth of the first transistor T1, may sufficiently secure initialization and compensation time. Accordingly, the data signal Di of the previous frame may have a minimal effect on the current frame. FIG. 3 shows that the pixel PXij alternately performs an initialization period and a compensation period three times, but the present disclosure is not limited thereto. The number of times that the initialization period is repeated and the number of times that the compensation period is repeated may be variously changed. When the initialization and compensation operations are completed (i.e., when the sixth period P6 ends), the emission control signal EM1j rises to a high level.

Referring to FIGS. 2, 3, and 4C, when the scan signal GWj falls to a low level during the seventh period P7, the second transistor T2 is turned on. A voltage level (referred to as “Vdata” below) corresponding to the data signal Di of the data line DLi may be provided to the third node N3 through the second transistor T2 and the ninth transistor T9. When the voltage level Vdata corresponding to the data signal Di is provided to the third node N3, that is, one end of the first capacitor Cst, the voltage level of the gate electrode of the first transistor T1 changes to “Vdata−Vth”. The seventh period P7 may be a write period for providing the voltage level Vdata corresponding to the data signal Di to one end of the first capacitor Cst. When the seventh period P7 ends, the scan signal GCCj falls from a high level to a low level. That is, during the first to seventh periods P1 to P7, the scan signal GCCj may be maintained at a high level.

Referring to FIGS. 2, 3, and 4D, when the emission control signals EM1j and EM2j fall to a low level during the eighth period P8, a current path may be formed from the first voltage line VL1 to the light emitting element ED through the eighth transistor T8, the first transistor T1, and the sixth transistor T6. A current flowing through the light emitting element ED is proportional to “(Vgs−Vth)^2”, that is, the square of the difference between the gate-source voltage (referred to as “Vgs”) of the first transistor T1 and the threshold voltage Vth of the first transistor T1.
Because the voltage level of the gate electrode of the first transistor T1 is “Vdata−Vth”, the current flowing through the light emitting element ED is proportional to “(ELVDD−Vdata)^2”, that is, the square of the difference between the first driving voltage ELVDD and the voltage level Vdata corresponding to the data signal Di. That is, the threshold voltage Vth of the first transistor T1 may not affect the current flowing through the light emitting element ED. The eighth period P8 may be an emission period of the light emitting element ED. Because the scan signal GCCj is at a low level during the eighth period P8, which is the emission period, the fifth transistor T5 and the ninth transistor T9 are turned off. In an embodiment, the fifth transistor T5 and the ninth transistor T9 are N-type transistors, and thus a leakage current may be minimized compared to a P-type transistor. Accordingly, the voltage between opposite ends of the first capacitor Cst may be maintained uniformly during the emission period.

Referring to the voltage level change of the first node N1, in the initialization periods such as the first period P1, the third period P3, and the fifth period P5, the voltage level of the first node N1 may correspond to the first initialization voltage VINT1. When the scan signal GCj falls to a low level during the second period P2, the fourth period P4, and the sixth period P6, the third transistor T3 is turned on. Accordingly, the gate electrode and the second electrode of the first transistor T1 are electrically connected to each other, and the voltage level of the first node N1 is raised to the difference between the first driving voltage ELVDD and the threshold voltage Vth of the first transistor T1. That is, during the first to sixth periods P1 to P6, the voltage level of the first node N1 changes in synchronization with the transitions of the scan signals GIj and GCj. When the scan signal GWj falls to a low level in the seventh period P7, the voltage level of the first node N1 is changed to the difference (Vdata−Vth) between the voltage level Vdata of the data signal Di and the threshold voltage Vth of the first transistor T1, and is then lowered by a kickback voltage Vkb when the scan signal GCCj falls from a high level to a low level. This kickback voltage Vkb is generated by a parasitic capacitance Cp between the scan line GCCLj and the gate electrode of the first transistor T1.

The third capacitor Cb is connected between the second node N2 and the scan line GCLj. When the scan signal GCj transmitted through the scan line GCLj rises from a low level to a high level, the voltage of the second node N2 may be boosted. The fifth transistor T5 may be turned on during the first to seventh periods P1 to P7, and thus the voltage of the second node N2 may be delivered to the first node N1. In particular, when the voltage of the first node N1 is maintained at a boosting level and the scan signal GWj falls to a low level at the point in time when the scan signal GCj rises from a low level to a high level at the end of the sixth period P6, the voltage of the first node N1 is changed to the difference (Vdata−Vth) between the voltage level Vdata of the data signal Di and the threshold voltage Vth of the first transistor T1. Even though the voltage of the first node N1 is lowered by the kickback voltage Vkb when the scan signal GCCj falls from a high level to a low level, the voltage of the first node N1 may be compensated by the boosting voltage provided by the third capacitor Cb. The third capacitor Cb may be a boosting capacitor.
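The threshold-voltage cancellation, the kickback, and the boosting described above can be written out explicitly. The following worked equations are an editorial sketch, not taken from the patent: the square-law model and the capacitive-divider estimates are standard first-order approximations, written with |Vth| for the P-type transistor T1, and C_N1 and C_N2 denote assumed total node capacitances.

```latex
% Editorial sketch (standard first-order approximations, not from the patent).

% (1) Threshold cancellation: after compensation and write, the gate of T1
% sits at Vdata - |Vth| while its source is held at ELVDD, so
\[
V_{sg} = ELVDD - \bigl(V_{data} - |V_{th}|\bigr), \qquad
I_{ED} \propto \bigl(V_{sg} - |V_{th}|\bigr)^{2} = \bigl(ELVDD - V_{data}\bigr)^{2}.
\]

% (2) Kickback at the falling edge of GCCj, coupled through the parasitic
% capacitance Cp onto the gate node N1:
\[
V_{kb} \approx \frac{C_{p}}{C_{p} + C_{N1}}\,\Delta V_{GCCj}.
\]

% (3) Boosting at the rising edge of GCj, coupled through Cb into node N2:
\[
\Delta V_{boost} \approx \frac{C_{b}}{C_{b} + C_{N2}}\,\Delta V_{GCj}.
\]
```

In this reading, the drive current depends only on ELVDD − Vdata, and the boosting term (3) is what offsets the kickback term (2) at the first node N1.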
FIG. 5 is a timing diagram of scan signals and emission control signals for describing an operation of the pixel shown in FIG. 2 when an operating frequency is a first operating frequency. Referring to FIGS. 2 and 5, when the operating frequency is a first operating frequency (e.g., 120 Hz), each of a first frame F1 and a second frame F2 may include a first cycle C1 and a second cycle C2. When the operating frequency is the first operating frequency, the emission control signals EM1j and EM2j may fall to an active level (e.g., a low level) during each of the first and second cycles C1 and C2. That is, one frame may include two emission periods. In an embodiment, when the first operating frequency is 120 Hz, each of the emission control signals EM1j and EM2j may have a frequency of 240 Hz. When the operating frequency is the first operating frequency, the scan signal GCCj may rise to an active level (e.g., a high level) during the first cycle C1, and the scan signals GIj and GCj may fall to an active level (e.g., a low level) multiple times (e.g., three times) during each of the first and second cycles C1 and C2. When the operating frequency is the first operating frequency, the scan signal GWj may fall to an active level (e.g., a low level) during the first cycle C1 and may be maintained at an inactive level (e.g., a high level) during the second cycle C2. That is, the first cycle C1 may be a cycle during which the data signal Di is provided, and the second cycle C2 may be a cycle during which the data signal Di is not provided.

When the scan signal GWj is at a low level during the seventh period P7 of the first cycle C1, the second transistor T2 is turned on and the voltage level Vdata corresponding to the data signal Di is stored in the first capacitor Cst. Afterward, in an emission period in which the sixth and eighth transistors T6 and T8 are turned on, a current corresponding to the charge stored in the capacitor Cst may be provided to the light emitting element ED. Because the scan signal GWj is maintained at a high level during the second cycle C2, a new data signal Di is not received. In an emission period in which the sixth and eighth transistors T6 and T8 are turned on during the second cycle C2, a current corresponding to the charge stored in the capacitor Cst during the first cycle C1 may be provided to the light emitting element ED. That is, when the operating frequency is the first operating frequency (e.g., 120 Hz), a current corresponding to the data signal Di received during the first cycle C1 may be provided to the light emitting element ED during each of the first cycle C1 and the second cycle C2. Accordingly, when the operating frequency is the first operating frequency (e.g., 120 Hz), a data write operation may be performed during only the first cycle C1. However, light is emitted depending on the same data signal Di during each of the first cycle C1 and the second cycle C2, so that an effect equivalent to an operating frequency of 240 Hz is obtained.

FIG. 6 is a timing diagram of scan signals and emission control signals for describing an operation of the pixel shown in FIG. 2 when an operating frequency is a second operating frequency. Referring to FIGS. 2 and 6, when the operating frequency is a second operating frequency, each of the first frame F1 and the second frame F2 may include the first cycle C1 and the second cycle C2. When the operating frequency is the second operating frequency, one period may include the first frame F1 and the second frame F2.
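The 240 Hz effect claimed for both modes follows from simple arithmetic: every cycle contains one emission period, so the effective emission rate is the cycle rate rather than the data-write rate. The sketch below is an editorial illustration (the function name and the equal-duration assumption for cycles are mine, not from the patent):

```python
# Editorial arithmetic: effective emission rate = repetition rate of the
# top-level unit (frame or period) times the emission periods it contains.

def effective_emission_rate(unit_rate_hz, emission_periods):
    return unit_rate_hz * emission_periods

# First operating frequency: one 120 Hz frame with two emission cycles.
print(effective_emission_rate(120, 2))  # 240

# Second operating frequency: one 60 Hz period spanning two frames of two
# cycles each, i.e., four emission periods per period.
print(effective_emission_rate(60, 4))   # 240
```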
The second operating frequency may be lower than the first operating frequency. In an embodiment, the first operating frequency may be 120 Hz, and the second operating frequency may be 60 Hz. When the operating frequency is the second operating frequency, the emission control signals EM1j and EM2j may fall to an active level (e.g., a low level) during the first and second cycles C1 and C2 of each of the first frame F1 and the second frame F2. That is, one frame may include two emission periods. When the operating frequency is the second operating frequency, the scan signal GCCj rises to an active level (e.g., a high level) during the first cycle C1 of the first frame F1 and is then maintained at an inactive level (e.g., a low level) during the second cycle C2 of the first frame F1 and the first and second cycles C1 and C2 of the second frame F2. The scan signals GIj and GCj may fall to an active level (e.g., a low level) multiple times (e.g., three times) during the first and second cycles C1 and C2 of each of the first frame F1 and the second frame F2. When the operating frequency is the second operating frequency, the scan signal GWj may fall to an active level (e.g., a low level) during the first cycle C1 of each of the first frame F1 and the second frame F2, and may be maintained at an inactive level (e.g., a high level) during the second cycle C2 of each of the first frame F1 and the second frame F2.

When the scan signal GWj is at a low level during the seventh period P7 of the first cycle C1 of the first frame F1, the second transistor T2 is turned on and the voltage level Vdata corresponding to the data signal Di is stored in the first capacitor Cst. Afterward, in an emission period in which the sixth and eighth transistors T6 and T8 are turned on, a current corresponding to the charge stored in the capacitor Cst may be provided to the light emitting element ED. Because the scan signal GWj is maintained at a high level during the second cycle C2 of the first frame F1, the second transistor T2 is turned off. During the second cycle C2 of the first frame F1, the scan signal GCCj is at a low level and the emission control signal EM1j is at a low level. In this case, the first driving voltage ELVDD is provided to the first electrode of the first transistor T1. That is, the first driving voltage ELVDD may be applied to the first electrode of the first transistor T1 during the second cycle C2 of the first frame F1. When the scan signal GWj has a low level during the first cycle C1 of the second frame F2, the bias signal Bi provided through the data line DLi may be applied to the first electrode of the first transistor T1. At this time, because the scan signal GCCj is at a low level, the ninth transistor T9 is turned off, and thus the bias signal Bi is not stored in the first capacitor Cst. A ninth period P9, during which the scan signal GWj has a low level during the first cycle C1 of the second frame F2, may be referred to as a “bias period”. Because the scan signal GWj is maintained at a high level during the second cycle C2 of the second frame F2, the second transistor T2 is turned off. During the second cycle C2 of the second frame F2, the scan signal GCCj is at a low level and the emission control signal EM1j is at a low level. In this case, the first driving voltage ELVDD is provided to the first electrode of the first transistor T1. That is, the first driving voltage ELVDD may be applied to the first electrode of the first transistor T1 during the second cycle C2 of the second frame F2.
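To make the low-frequency sequence just described concrete, the following Python sketch (an editorial illustration; the list-of-tuples representation and the function name are assumptions) enumerates, for one 60 Hz period, what reaches the first electrode of the first transistor T1 in each cycle:

```python
# Editorial sketch of the 60 Hz driving sequence described above: data is
# written once, and thereafter ELVDD and the bias signal Bi are alternately
# applied to the first electrode of T1.

def low_frequency_schedule():
    """One period = two frames of two cycles each, per the description."""
    return [
        ("F1", "C1", "data signal Di written to Cst"),
        ("F1", "C2", "ELVDD applied to first electrode of T1"),
        ("F2", "C1", "bias signal Bi applied (not stored in Cst)"),
        ("F2", "C2", "ELVDD applied to first electrode of T1"),
    ]

for frame, cycle, action in low_frequency_schedule():
    print(frame, cycle, action)
```

The next paragraph names these roles: the first cycle of the first frame is the address scan cycle, and the remaining three are self-scan cycles.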
When the operating frequency is the second operating frequency, the first cycle C1 of the first frame F1 may be referred to as an “address scan cycle”, during which the valid data signal Di is provided. Each of the second cycle C2 of the first frame F1, the first cycle C1 of the second frame F2, and the second cycle C2 of the second frame F2 may be referred to as a “self-scan cycle”, during which the valid data signal Di is not provided. In an embodiment, the first cycle C1 of the second frame F2 is a cycle during which the bias signal Bi is applied to the first electrode of the first transistor T1. Each of the second cycle C2 of the first frame F1 and the second cycle C2 of the second frame F2 is a cycle during which the first driving voltage ELVDD is applied to the first electrode of the first transistor T1. The first driving voltage ELVDD and the bias signal Bi may be alternately applied to the first electrode of the first transistor T1. As the voltage applied to the first electrode of the first transistor T1 is periodically changed, a change in luminance due to the hysteresis characteristics of the first transistor T1 may be minimized.

Meanwhile, because the scan signal GCCj is maintained at a low level during the second cycle C2 of the first frame F1 and the first and second cycles C1 and C2 of the second frame F2, the fifth transistor T5 is turned off. Accordingly, even when the scan signal GCj is toggled during each of the second cycle C2 of the first frame F1, the first cycle C1 of the second frame F2, and the second cycle C2 of the second frame F2, the voltage level of the first node N1 is not changed by the scan signal GCj. When the operating frequency is the second operating frequency (e.g., 60 Hz), the scan signal GCCj is maintained at a low level during the second cycle C2 of the first frame F1 and each of the first and second cycles C1 and C2 of the second frame F2, and thus a new data signal Di is not transmitted to the capacitor Cst. Meanwhile, the emission control signals EM1j and EM2j fall to a low level during the second cycle C2 of the first frame F1 and the first and second cycles C1 and C2 of the second frame F2, and thus the sixth and eighth transistors T6 and T8 may be turned on. In an emission period in which the sixth and eighth transistors T6 and T8 are turned on, a current corresponding to the charge stored in the capacitor Cst may be provided to the light emitting element ED. That is, a current corresponding to the data signal Di received during the first cycle C1 of the first frame F1 may be provided to the light emitting element ED during the second cycle C2 of the first frame F1 and the first and second cycles C1 and C2 of the second frame F2. When the operating frequency is the second operating frequency (e.g., 60 Hz), a data write operation may be performed during only the first cycle C1 of the first frame F1. However, light may be emitted depending on the same data signal Di during the second cycle C2 of the first frame F1 and the first and second cycles C1 and C2 of the second frame F2. Accordingly, the same effect as an operating frequency of 240 Hz may be achieved.

FIG. 7 is an equivalent circuit diagram of a pixel according to an embodiment of the present disclosure. A pixel PXAij illustrated in FIG. 7 has a configuration similar to that of the pixel PXij shown in FIG. 2, and thus the same reference numerals are used for the same components, and additional descriptions are omitted to avoid redundancy.
Referring to FIG. 7, the pixel PXAij includes a third capacitor Cb1 connected between the first node N1 and the scan line GCLj. When the scan signal GCj supplied from the scan line GCLj rises from a low level to a high level, the voltage of the first node N1 may be boosted by the third capacitor Cb1. When the voltage of the first node N1 is maintained at a boosting level and the scan signal GWj then falls to a low level at the point in time when the scan signal GCj rises from a low level to a high level at the end of the sixth period P6, as illustrated in FIG. 3, the voltage of the first node N1 is changed to the difference (Vdata−Vth) between the voltage level Vdata of the data signal Di and the threshold voltage Vth of the first transistor T1. Even though the voltage of the first node N1 is lowered by the kickback voltage Vkb when the scan signal GCCj falls from a high level to a low level, the voltage of the first node N1 may be compensated by the boosting voltage provided by the third capacitor Cb1. The third capacitor Cb1 may be a boosting capacitor.

FIG. 8 is an equivalent circuit diagram of a pixel according to an embodiment of the present disclosure. A pixel PXBij illustrated in FIG. 8 has a configuration similar to that of the pixel PXij shown in FIG. 2, and thus the same reference numerals are used for the same components, and additional descriptions are omitted to avoid redundancy. Referring to FIG. 8, the pixel PXBij includes a third capacitor Cb2 connected between the third node N3 and the scan line GCLj. When the scan signal GCj supplied from the scan line GCLj rises from a low level to a high level, the voltage of the third node N3 may be boosted by the third capacitor Cb2. When the voltage of the third node N3 is maintained at a boosting level and the scan signal GWj then falls to a low level at the point in time when the scan signal GCj rises from a low level to a high level at the end of the sixth period P6, as illustrated in FIG. 3, the voltage of the third node N3 is increased by the voltage level Vdata of the data signal Di. Even though the voltage of the first node N1 is lowered by the kickback voltage Vkb when the scan signal GCCj falls from a high level to a low level, the voltage of the first node N1 may be compensated by the boosting voltage provided by the third capacitor Cb2. The third capacitor Cb2 may be a boosting capacitor.

FIG. 9 is a circuit diagram of a pixel according to an embodiment of the present disclosure. FIG. 9 illustrates an equivalent circuit diagram of a pixel PXCij connected to the i-th data line DLi among the data lines DL1 to DLm, the j-th scan lines GILj, GCLj, GWLj, and GCCLj among the scan lines GIL1 to GILn, GCL1 to GCLn, GWL1 to GWLn, and GCCL1 to GCCLn, and the j-th emission control lines EML1j and EML2j among the emission control lines EML11 to EML1n and EML21 to EML2n, which are illustrated in FIG. 1. Each of the plurality of pixels PX shown in FIG. 1 may have the same circuit configuration as the equivalent circuit diagram of the pixel PXCij shown in FIG. 9. Referring to FIG. 9, the pixel PXCij of a display device according to an embodiment includes at least one light emitting element ED and a pixel circuit. The pixel circuit may include first to fourth and sixth to ninth transistors T1 to T4 and T6 to T9, and first and second capacitors Cst and Chold. In an embodiment, the light emitting element ED may be a light emitting diode. In an embodiment, the pixel PXCij shown in FIG. 9 does not include the fifth transistor T5 and the third capacitor Cb of the pixel PXij shown in FIG. 2.
In an embodiment, some of the first to fourth and sixth to ninth transistors T1 to T4 and T6 to T9 are P-type transistors having LTPS as a semiconductor layer. The other(s) may be N-type transistors having an oxide semiconductor as a semiconductor layer. In an embodiment, each of the first, second, and sixth to eighth transistors T1, T2, and T6 to T8 is a P-type transistor, and each of the third, fourth, and ninth transistors T3, T4, and T9 is an N-type transistor. The scan lines GILj, GCLj, GWLj, and GCCLj may deliver the scan signals GIj, GCj, GWj, and GCCj, respectively. The emission control lines EML1j and EML2j may deliver the emission control signals EM1j and EM2j, respectively. The data line DLi transmits one of the data signal Di and the bias signal Bi. The data signal Di may have a voltage level corresponding to the input image signal RGB that is input to the display device DD (see FIG. 1). The first to fourth voltage lines VL1, VL2, VL3, and VL4 may deliver the first driving voltage ELVDD, the second driving voltage ELVSS, the first initialization voltage VINT1, and the second initialization voltage VINT2, respectively. The third voltage line VL3 and the fourth voltage line VL4 may be referred to as a “first initialization voltage line” and a “second initialization voltage line”, respectively.

The first transistor T1 includes a first electrode electrically connected to the first voltage line VL1 via the eighth transistor T8, a second electrode electrically connected to an anode of the light emitting element ED via the sixth transistor T6, and a gate electrode connected to the first node N1. The second transistor T2 includes a first electrode connected to the data line DLi, a second electrode connected to the first electrode of the first transistor T1, and a gate electrode connected to the scan line GWLj. The third transistor T3 includes a first electrode connected to the second electrode of the first transistor T1, a second electrode connected to the first node N1, and a gate electrode connected to the scan line GCLj. The fourth transistor T4 includes a first electrode connected to the first node N1, a second electrode connected to the third voltage line VL3, through which the first initialization voltage VINT1 is delivered, and a gate electrode connected to the scan line GILj. The sixth transistor T6 includes a first electrode connected to the second electrode of the first transistor T1, a second electrode connected to the anode of the light emitting element ED, and a gate electrode connected to the emission control line EML2j. The seventh transistor T7 includes a first electrode connected to the anode of the light emitting element ED, a second electrode connected to the fourth voltage line VL4, and a gate electrode connected to the scan line GWLj. The eighth transistor T8 includes a first electrode connected to the first voltage line VL1, a second electrode connected to the first electrode of the first transistor T1, and a gate electrode connected to the emission control line EML1j. The eighth transistor T8 is turned on in response to the emission control signal EM1j received through the emission control line EML1j so as to deliver the first driving voltage ELVDD to the first electrode of the first transistor T1. The ninth transistor T9 includes a first electrode connected to the first electrode of the first transistor T1, a second electrode connected to a third node N3, and a gate electrode connected to the scan line GCCLj.
The first capacitor Cst is connected between the third node N3 and the first node N1. The second capacitor Chold is connected between the first voltage line VL1 and the third node N3.

FIGS. 10 and 11 are timing diagrams of scan signals and emission control signals for describing an operation of the pixel shown in FIG. 9. FIG. 10 is a timing diagram of scan signals and emission control signals for describing an operation of the pixel shown in FIG. 9 when an operating frequency is a first operating frequency. Referring to FIGS. 9 and 10, when the operating frequency is a first operating frequency (e.g., 120 Hz), each of the first frame F1 and the second frame F2 may include a first cycle C1 and a second cycle C2. When the operating frequency is the first operating frequency, the emission control signals EM1j and EM2j may fall to an active level (e.g., a low level) during each of the first and second cycles C1 and C2. That is, one frame may include two emission periods. In an embodiment, when the first operating frequency is 120 Hz, each of the emission control signals EM1j and EM2j may have a frequency of 240 Hz. When the operating frequency is the first operating frequency, the scan signal GCCj may rise to an active level (e.g., a high level) during the first cycle C1, and the scan signals GIj and GCj may rise to an active level (e.g., a high level) multiple times (e.g., twice) during each of the first and second cycles C1 and C2. When the operating frequency is the first operating frequency, the scan signal GWj may fall to an active level (e.g., a low level) during the first cycle C1 and may be maintained at an inactive level (e.g., a high level) during the second cycle C2. That is, the first cycle C1 may be a cycle during which the data signal Di is provided, and the second cycle C2 may be a cycle during which the data signal Di is not provided.

When the scan signal GWj is at a low level during the first cycle C1, the second transistor T2 is turned on and the voltage level Vdata corresponding to the data signal Di is stored in the first capacitor Cst. Afterward, in an emission period in which the sixth and eighth transistors T6 and T8 are turned on, a current corresponding to the charge stored in the capacitor Cst may be provided to the light emitting element ED. Because the scan signal GWj is maintained at a high level during the second cycle C2, a new data signal Di is not received. In an emission period in which the sixth and eighth transistors T6 and T8 are turned on during the second cycle C2, a current corresponding to the charge stored in the capacitor Cst during the first cycle C1 may be provided to the light emitting element ED. That is, when the operating frequency is the first operating frequency, a current corresponding to the data signal Di received during the first cycle C1 may be provided to the light emitting element ED during each of the first cycle C1 and the second cycle C2.

FIG. 11 is a timing diagram of scan signals and emission control signals for describing an operation of the pixel shown in FIG. 9 when an operating frequency is a second operating frequency. Referring to FIGS. 9 and 11, when the operating frequency is a second operating frequency, each of the first frame F1 and the second frame F2 may include the first cycle C1 and the second cycle C2. When the operating frequency is the second operating frequency, one period may include the first frame F1 and the second frame F2. The second operating frequency may be a lower frequency than the first operating frequency.
In an embodiment, the first operating frequency may be 120 Hz, and the second operating frequency may be 60 Hz. When the operating frequency is the second operating frequency, the emission control signals EM1j and EM2j may fall to an active level (e.g., a low level) during the first and second cycles C1 and C2 of each of the first frame F1 and the second frame F2. That is, one frame may include two emission periods. When the operating frequency is the second operating frequency, the scan signal GCCj rises to an active level (e.g., a high level) during the first cycle C1 of the first frame F1 and is then maintained at an inactive level (e.g., a low level) during the second cycle C2 of the first frame F1 and the first and second cycles C1 and C2 of the second frame F2. The scan signals GIj and GCj may rise to an active level (e.g., a high level) multiple times (e.g., twice) during the first cycle C1 of the first frame F1. The scan signals GIj and GCj may be maintained at a low level during the second cycle C2 of the first frame F1 and the first and second cycles C1 and C2 of the second frame F2. When the operating frequency is the second operating frequency, the scan signal GWj may fall to an active level (e.g., a low level) during the first cycle C1 of each of the first frame F1 and the second frame F2, and may be maintained at an inactive level (e.g., a high level) during the second cycle C2 of each of the first frame F1 and the second frame F2. The time during which the scan signal GWj is maintained at the active level (e.g., a low level) during the first cycle C1 of each of the first frame F1 and the second frame F2 may be variously changed within a range in which both of the emission control signals EM1j and EM2j are maintained at a high level.

When the scan signal GWj is at a low level during the first cycle C1 of the first frame F1, the second transistor T2 is turned on and the voltage level Vdata corresponding to the data signal Di is stored in the first capacitor Cst. Afterward, in an emission period in which the sixth and eighth transistors T6 and T8 are turned on, a current corresponding to the charge stored in the capacitor Cst may be provided to the light emitting element ED. Because the scan signal GWj is maintained at a high level during the second cycle C2 of the first frame F1, no new data signal Di is received. In an emission period in which the sixth and eighth transistors T6 and T8 are turned on during the second cycle C2, a current corresponding to the charge stored in the capacitor Cst during the first cycle C1 may be provided to the light emitting element ED. When the scan signal GWj is at a low level during the first cycle C1 of the second frame F2, the second transistor T2 may be turned on, and the bias signal Bi may be delivered to the first electrode of the first transistor T1. The bias signal Bi provided through the data line DLi may be applied to the first electrode of the first transistor T1. At this time, because the scan signal GCCj is at a low level, the ninth transistor T9 is turned off, and thus the bias signal Bi is not stored in the first capacitor Cst. In an emission period in which the sixth and eighth transistors T6 and T8 are turned on during the first cycle C1 of the second frame F2, a current corresponding to the charge stored in the capacitor Cst during the first cycle C1 of the first frame F1 may be provided to the light emitting element ED. Because the scan signal GWj is maintained at a high level during the second cycle C2 of the second frame F2, no new data signal Di is received.
In an emission period in which the sixth and eighth transistors T6 and T8 are turned on during the second cycle C2 of the second frame F2, a current corresponding to the charge stored in the capacitor Cst during the first cycle C1 of the first frame F1 may be provided to the light emitting element ED. When the operating frequency is the second operating frequency, the first cycle C1 of the first frame F1 may be referred to as an “address scan cycle”, during which the valid data signal Di is provided. Each of the second cycle C2 of the first frame F1, the first cycle C1 of the second frame F2, and the second cycle C2 of the second frame F2 may be referred to as a “self-scan cycle”, during which the valid data signal Di is not provided. In an embodiment, the first cycle C1 of the second frame F2 is a cycle during which the bias signal Bi is applied to the first electrode of the first transistor T1. Each of the second cycle C2 of the first frame F1 and the second cycle C2 of the second frame F2 is a cycle during which the first driving voltage ELVDD is applied to the first electrode of the first transistor T1. The first driving voltage ELVDD and the bias signal Bi may be alternately applied to the first electrode of the first transistor T1. As the voltage applied to the first electrode of the first transistor T1 is periodically changed, a change in luminance due to the hysteresis characteristics of the first transistor T1 may be minimized. When the operating frequency is the second operating frequency (e.g., 60 Hz), a data write operation may be performed during only the first cycle C1 of the first frame F1. However, light may be emitted depending on the same data signal Di during the second cycle C2 of the first frame F1 and the first and second cycles C1 and C2 of the second frame F2. Accordingly, the same effect as an operating frequency of 240 Hz may be achieved.

Although embodiments of the present disclosure have been described for illustrative purposes, those skilled in the art will appreciate that various modifications and substitutions are possible without departing from the scope and spirit of the present disclosure as disclosed in the accompanying claims. Accordingly, the technical scope of the present disclosure is not limited to the detailed description of this specification, but should be defined by the claims.

In a pixel of a display device having such a configuration, a compensation period for compensating for a threshold voltage of a first transistor and a data write period for storing a data signal in a first capacitor may be separated. Accordingly, the threshold voltage compensation time of the first transistor may be sufficiently secured. A pixel may further include a boosting capacitor to compensate the voltage level of a signal provided to a gate electrode of the first transistor for a change in the signal level of a scan signal. Accordingly, distortion of an image displayed on the pixels can be minimized. Furthermore, when the display device operates in a low-frequency mode at a frequency lower than a normal frequency, a first driving voltage and a bias voltage may be alternately applied to a first electrode of the first transistor. Accordingly, deterioration of image quality due to the hysteresis characteristic of the first transistor may be prevented.
Although an embodiment of the present disclosure has been described for illustrative purposes, those skilled in the art will appreciate that various modifications and substitutions are possible without departing from the scope and spirit of the present disclosure as disclosed in the accompanying claims. Accordingly, the technical scope of the present disclosure is not limited to the detailed description of this specification, but should be defined by the claims. A compensation period for compensating for a threshold voltage of a first transistor and a data write period for storing a data signal in a first capacitor may be separated in a pixel of a display device having such a configuration. Accordingly, the threshold voltage compensation time of the first transistor may be sufficiently secured. A pixel may further include a boosting capacitor to compensate for a voltage level of a signal provided to a gate electrode of the first transistor according to a change in a signal level of a scan signal. Accordingly, it is possible to minimize the distortion of an image displayed on pixels. Furthermore, when the display device operates in a mode of a low frequency lower than a normal frequency, a first driving voltage and a bias voltage may be alternately applied to a first electrode of the first transistor. Accordingly, deterioration of image quality due to a hysteresis characteristic of the first transistor may be prevented. While the present disclosure has been described with reference to embodiments thereof, it will be apparent to those of ordinary skill in the art that various changes and modifications may be made thereto without departing from the spirit and scope of the present disclosure as set forth in the following claims. | 52,319 |
11862073 | DETAILED DESCRIPTION The invention now will be described more fully hereinafter with reference to the accompanying drawings, in which various embodiments are shown. This invention may, however, be embodied in many different forms, and should not be construed as limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the invention to those skilled in the art. In the specification, the expression that a first component (or region, layer, part, etc.) is "on", "connected with", or "coupled with" a second component means that the first component is directly on, connected directly with, or coupled directly with the second component or means that a third component is interposed therebetween. The same reference numeral refers to the same component. In addition, in drawings, thicknesses, proportions, and dimensions of components may be exaggerated to describe the technical features effectively. The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting. As used herein, "a", "an", "the", and "at least one" do not denote a limitation of quantity, and are intended to include both the singular and the plural, unless the context clearly indicates otherwise. For example, "an element" has the same meaning as "at least one element," unless the context clearly indicates otherwise. "At least one" is not to be construed as limiting "a" or "an." "Or" means "and/or." As used herein, the term "and/or" includes any and all combinations of one or more of the associated listed items. The terms "first", "second", etc. are used to describe various components, but the components are not limited by the terms. The terms are only used to distinguish one component from another component. For example, without departing from the scope and spirit of the present disclosure, a first component may be referred to as a "second component", and similarly, the second component may be referred to as the "first component". Also, the terms "under", "beneath", "on", "above", etc. are used to describe a relationship between components illustrated in a drawing. The terms are relative and are described with reference to a direction indicated in the drawing. It will be understood that the terms "include", "comprise", "have", etc. specify the presence of features, numbers, steps, operations, elements, or components, described in the specification, or a combination thereof, not precluding the presence or additional possibility of one or more other features, numbers, steps, operations, elements, or components or a combination thereof. Unless otherwise defined, all terms (including technical terms and scientific terms) used in this specification have the same meaning as commonly understood by those skilled in the art to which the present disclosure belongs. Furthermore, terms such as those defined in commonly used dictionaries should be interpreted as having a meaning consistent with the meaning in the context of the related technology, and should not be interpreted in ideal or overly formal meanings unless explicitly defined herein. Hereinafter, embodiments of the disclosure will be described in detail with reference to the accompanying drawings. FIG. 1 is a perspective view of a display device according to an embodiment of the disclosure. Referring to FIG. 1, an embodiment of a display device DD may be a portable terminal.
The portable terminal may include a tablet personal computer (PC), a smartphone, a personal digital assistant (PDA), a portable multimedia player (PMP), a game console, a wristwatch-type electronic device, and the like. However, the disclosure is not limited thereto. Embodiments of the disclosure may be used for small and medium-sized electronic devices such as a personal computer, a notebook computer, a kiosk, an automotive navigation unit, and a camera, in addition to large-sized electronic equipment such as a television or an outside billboard. The above examples are provided only as embodiments, and the display device DD may be applied to any other electronic device(s) without departing from the concept of the disclosure. In an embodiment, as illustrated in FIG. 1, a display surface on which a first image IM1 and a second image IM2 are displayed is parallel to a plane defined by a first direction DR1 and a second direction DR2. The display device DD includes a plurality of areas that are distinguished from each other on the display surface. The display surface includes a display area DA in which the first image IM1 and the second image IM2 are displayed, and a non-display area NDA adjacent to the display area DA. The non-display area NDA may be referred to as a bezel area. In an embodiment, for example, the display area DA may be in the shape of a quadrangle. The non-display area NDA surrounds the display area DA. In an embodiment, for example, the display device DD may have a partially curved shape. In such an embodiment, a portion of the display area DA may have a curved shape. The display area DA of the display device DD includes a first display area DA1 and a second display area DA2. In a specific application program, the first image IM1 may be displayed in the first display area DA1, and the second image IM2 may be displayed in the second display area DA2. In an embodiment, for example, the first image IM1 may be a video, and the second image IM2 may be a still image or an image (e.g., a game control keypad or text information) having a long change period. The display device DD according to an embodiment may drive the first display area DA1, in which the video is displayed, at a frequency higher than or equal to the normal frequency, and may drive the second display area DA2, in which the still image is displayed, at a frequency lower than the normal frequency. The display device DD may reduce power consumption by lowering a driving frequency of the second display area DA2. Each of the first display area DA1 and the second display area DA2 may have a given size, and the size may be changed by an application program. In an embodiment, when the still image is displayed in the first display area DA1 and the video is displayed in the second display area DA2, the first display area DA1 may be driven at a frequency lower than the normal frequency, and the second display area DA2 may be driven at a frequency higher than or equal to the normal frequency. In an embodiment, the display area DA may be divided into three or more display areas. A driving frequency of each of the three or more display areas may be determined depending on the type (e.g., a still image or a video) of image that is displayed therein.
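As a plain, non-authoritative illustration of the content-dependent frequency choice just described, the following Python sketch assumes a 120 Hz normal frequency and a 1 Hz low frequency for effectively static content; the constants and the function name are ours, not the patent's:

```python
# Minimal sketch: choose a per-area driving frequency from the content type.
# Assumed values: 120 Hz normal frequency, 1 Hz for effectively static content.

NORMAL_HZ = 120
LOW_HZ = 1

def area_frequency(shows_video: bool) -> int:
    # video keeps at least the normal frequency; still images are driven
    # below it so that the panel saves power on that area
    return NORMAL_HZ if shows_video else LOW_HZ

print(area_frequency(True))   # 120, e.g., an area displaying a video
print(area_frequency(False))  # 1, e.g., an area displaying a still image
```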
FIGS. 2A and 2B are perspective views of a display device DD2 according to an embodiment of the disclosure. FIG. 2A shows the display device DD2 in an unfolded state, and FIG. 2B shows the display device DD2 in a folded state. In an embodiment, as illustrated in FIGS. 2A and 2B, the display device DD2 includes the display area DA and the non-display area NDA. The display device DD2 may display an image through the display area DA. The display area DA may include a plane defined by the first direction DR1 and the second direction DR2, in a state where the display device DD2 is unfolded. A thickness direction of the display device DD2 may be parallel to a third direction DR3 intersecting the first direction DR1 and the second direction DR2. Accordingly, front surfaces (or upper surfaces) and bottom surfaces (or lower surfaces) of members constituting the display device DD2 may be defined with respect to the third direction DR3. In an embodiment, for example, the display area DA may be in the shape of a quadrangle. The non-display area NDA surrounds the display area DA. The display area DA may include a first non-folding area NFA1, a folding area FA, and a second non-folding area NFA2. The folding area FA may be bent about a folding axis FX extending in the second direction DR2. When the display device DD2 is folded, the first non-folding area NFA1 and the second non-folding area NFA2 may face each other. Accordingly, in a state where the display device DD2 is fully folded, the display area DA may not be exposed to the outside, which may be referred to as "in-folding". This is only an example, and the operation of the display device DD2 is not limited thereto. In an embodiment of the disclosure, when the display device DD2 is folded, the first non-folding area NFA1 and the second non-folding area NFA2 may be opposite to each other. Accordingly, in a state where the display device DD2 is folded, the first non-folding area NFA1 may be exposed to the outside, which may be referred to as "out-folding". In an embodiment, the display device DD2 may be configured to perform only one of the in-folding and the out-folding. Alternatively, the display device DD2 may be configured to perform both the in-folding and the out-folding. In such an embodiment, the same area of the display device DD2, for example, the folding area FA, may be in-folded or out-folded (or may be folded inwardly and outwardly). Alternatively, a partial area of the display device DD2 may be in-folded, and another partial area thereof may be out-folded. In an embodiment, the display device DD2 may include a single folding area and two non-folding areas as illustrated in FIGS. 2A and 2B, but the number of folding areas and the number of non-folding areas are not limited thereto. In an alternative embodiment, for example, the display device DD2 may include a plurality of non-folding areas, the number of which is more than two, and a plurality of folding areas, and each of the plurality of folding areas may be interposed between non-folding areas adjacent to each other from among the plurality of non-folding areas. An embodiment in which the folding axis FX is parallel to a short side (or parallel to the minor axis) of the display device DD2 is illustrated in FIGS. 2A and 2B. However, the disclosure is not limited thereto. In an alternative embodiment, for example, the folding axis FX may extend in a direction parallel to a long side (or the major axis) of the display device DD2, for example, the first direction DR1. An embodiment in which the first non-folding area NFA1, the folding area FA, and the second non-folding area NFA2 are sequentially arranged in the first direction DR1 is illustrated in FIGS. 2A and 2B. However, the disclosure is not limited thereto.
In an alternative embodiment, for example, the first non-folding area NFA1, the folding area FA, and the second non-folding area NFA2 may be sequentially arranged in the second direction DR2. The plurality of display areas DA1 and DA2 may be defined in the display area DA of the display device DD2. An embodiment where the plurality of display areas includes two display areas DA1 and DA2 is illustrated in FIG. 2A, but the number of display areas DA1 and DA2 is not limited thereto. The plurality of display areas DA1 and DA2 may include the first display area DA1 and the second display area DA2. In an embodiment, for example, the first display area DA1 may refer to an area where the first image IM1 is displayed, and the second display area DA2 may refer to an area in which the second image IM2 is displayed. In an embodiment, for example, the first image IM1 may be a video, and the second image IM2 may be a still image or an image (e.g., text information) having a long change period. The display device DD2 according to an embodiment may operate differently depending on an operating mode. The operating mode may include a normal frequency mode and a multi-frequency mode. In the normal frequency mode, the display device DD2 may drive both the first display area DA1 and the second display area DA2 at a normal frequency. In the multi-frequency mode, the display device DD2 according to an embodiment may drive the first display area DA1, in which the first image IM1 is displayed, at a first driving frequency and may drive the second display area DA2, in which the second image IM2 is displayed, at a second driving frequency lower than the normal frequency. In such an embodiment, the first driving frequency may be higher than or equal to the normal frequency. Each of the first display area DA1 and the second display area DA2 may have a given size, and the size may be changed by an application program. In an embodiment, the first display area DA1 may correspond to the first non-folding area NFA1, and the second display area DA2 may correspond to the second non-folding area NFA2. In addition, a first portion of the folding area FA may correspond to the first display area DA1, and a second portion of the folding area FA may correspond to the second display area DA2. In an embodiment, the whole folding area FA may correspond to only one of the first display area DA1 and the second display area DA2. In an embodiment, the first display area DA1 may correspond to a first portion of the first non-folding area NFA1, and the second display area DA2 may correspond to a second portion of the first non-folding area NFA1, the folding area FA, and the second non-folding area NFA2. In such an embodiment, the size of the second display area DA2 may be larger than the size of the first display area DA1. In an embodiment, the first display area DA1 may correspond to the first non-folding area NFA1, the folding area FA, and a first portion of the second non-folding area NFA2, and the second display area DA2 may be a second portion of the second non-folding area NFA2. In such an embodiment, the size of the first display area DA1 may be larger than the size of the second display area DA2. As illustrated in FIG. 2B, in a state where the folding area FA is folded, the first display area DA1 may correspond to the first non-folding area NFA1, and the second display area DA2 may correspond to the folding area FA and the second non-folding area NFA2. FIGS. 2A and 2B illustrate an embodiment where the display device DD2 has a single folding area.
However, the disclosure is not limited thereto. In an alternative embodiment, for example, the disclosure may also be applied to a display device having two or more folding areas, a rollable display device, or a slidable display device. Hereinafter, embodiments of the display device DD illustrated in FIG. 1 will be described in detail. However, features of embodiments of the display device DD illustrated in FIG. 1 described herein may be identically applied to other alternative embodiments, e.g., the display device DD2 illustrated in FIGS. 2A and 2B. FIG. 3A is a diagram for describing an operation of a display device in a normal frequency mode. FIG. 3B is a diagram for describing an operation of a display device in a multi-frequency mode. Referring to FIG. 3A, the first image IM1 that is displayed in the first display area DA1 may be a video, and the second image IM2 that is displayed in the second display area DA2 may be a still image or an image (e.g., a game control keypad) having a long change period. The first image IM1 displayed in the first display area DA1 and the second image IM2 displayed in the second display area DA2 illustrated in FIG. 1 are an example, and various images may be displayed on the display device DD. In a normal frequency mode NFM, driving frequencies of the first display area DA1 and the second display area DA2 of the display device DD correspond to a normal frequency. In an embodiment, for example, the normal frequency may be 120 hertz (Hz). In the normal frequency mode NFM, images each including first to 120th frames F1 to F120 may be displayed in the first display area DA1 and the second display area DA2 of the display device DD for 1 second. Referring to FIG. 3B, in a multi-frequency mode MFM, the display device DD may set a driving frequency of the first display area DA1, in which the first image IM1 (i.e., a video) is displayed, to the first driving frequency, and may set a driving frequency of the second display area DA2, in which the second image IM2 (i.e., a still image) is displayed, to a second driving frequency lower than the first driving frequency. When the normal frequency is 120 Hz, the first driving frequency may be 120 Hz, and the second driving frequency may be 1 Hz. The first driving frequency and the second driving frequency may be variously changed. In an embodiment, for example, the first driving frequency may be 120 Hz, which is the same as the normal frequency, or may be 144 Hz, which is higher than the normal frequency, and the second driving frequency may be one selected from frequencies lower than the normal frequency, such as 60 Hz, 30 Hz, 10 Hz, and 1 Hz. In the multi-frequency mode MFM, when the first driving frequency is 120 Hz and the second driving frequency is 1 Hz, the first image IM1 corresponding to each of the first to 120th frames F1 to F120 may be displayed in the first display area DA1 of the display device DD for 1 second. With regard to only the first frame F1, the second image IM2 may be displayed in the second display area DA2; with regard to the remaining frames F2 to F120, an image may not be displayed. An operation of the display device DD in the multi-frequency mode MFM will be described in detail later.
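To make the frame bookkeeping above concrete, here is a minimal sketch, assuming a 120 Hz normal frequency and driving frequencies that divide it evenly; the function and variable names are illustrative, not from the patent:

```python
# Which of the 120 frames in one second carry a new image for an area
# driven at `driving_hz`? Assumes `driving_hz` divides the normal frequency.

def frames_with_new_image(driving_hz: int, normal_hz: int = 120) -> list[int]:
    step = normal_hz // driving_hz
    return [f for f in range(1, normal_hz + 1) if (f - 1) % step == 0]

print(frames_with_new_image(120))  # [1, 2, ..., 120]: DA1 refreshed every frame
print(frames_with_new_image(1))    # [1]: DA2 refreshed only in the first frame F1
```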
FIG. 4 is a block diagram of a display device according to an embodiment of the disclosure. Referring to FIG. 4, an embodiment of the display device DD includes a display panel DP, a driving controller 100, a data driving circuit 200, and a voltage generator 300. The driving controller 100 receives an input image signal RGB and a control signal CTRL. The driving controller 100 generates an output image signal DATA by converting a data format of the input image signal RGB in compliance with the specification for an interface with the data driving circuit 200. The driving controller 100 outputs a scan control signal SCS, a data control signal DCS, and an emission control signal ECS. The driving controller 100 according to an embodiment of the disclosure may determine an operating mode to be one of the normal frequency mode and the multi-frequency mode, based on the input image signal RGB. In an embodiment, the driving controller 100 may determine an operating mode to be one of the normal frequency mode and the multi-frequency mode, based on mode information included in the control signal CTRL. The data driving circuit 200 receives the data control signal DCS and the output image signal DATA from the driving controller 100. The data driving circuit 200 converts the output image signal DATA into data signals and then outputs the data signals to a plurality of data lines DL1 to DLm to be described later. The data signals refer to analog voltages corresponding to a grayscale value of the output image signal DATA. The voltage generator 300 generates voltages used for an operation of the display panel DP. In an embodiment, the voltage generator 300 generates a first driving voltage ELVDD, a second driving voltage ELVSS, a first initialization voltage VINT1, and a second initialization voltage VINT2. The display panel DP includes scan lines GIL1 to GILn, GCL1 to GCLn, and GWL1 to GWLn+1, emission control lines EML1 to EMLn, the data lines DL1 to DLm, and the pixels PX. The display panel DP may further include a scan driving circuit SD and an emission driving circuit EDC. In an embodiment, the scan driving circuit SD is disposed on a first side of the display panel DP. The scan lines GIL1 to GILn, GCL1 to GCLn, and GWL1 to GWLn+1 extend from the scan driving circuit SD in the second direction DR2. The emission driving circuit EDC is disposed on a second side of the display panel DP. The emission control lines EML1 to EMLn extend from the emission driving circuit EDC in a direction opposite to the second direction DR2. The scan lines GIL1 to GILn, GCL1 to GCLn, and GWL1 to GWLn+1 and the emission control lines EML1 to EMLn are arranged to be spaced from each other in the first direction DR1. The data lines DL1 to DLm extend from the data driving circuit 200 in the first direction DR1 and are arranged to be spaced from each other in the second direction DR2. In an embodiment, as illustrated in FIG. 4, the scan driving circuit SD and the emission driving circuit EDC are arranged to face each other, with the pixels PX interposed therebetween, but the disclosure is not limited thereto. In an alternative embodiment, for example, the scan driving circuit SD and the emission driving circuit EDC may be disposed adjacent to each other on the first side or the second side of the display panel DP. In such an embodiment, the scan driving circuit SD and the emission driving circuit EDC may be implemented with one circuit. The plurality of pixels PX are electrically connected with the scan lines GIL1 to GILn, GCL1 to GCLn, and GWL1 to GWLn+1, the emission control lines EML1 to EMLn, and the data lines DL1 to DLm. Each of the plurality of pixels PX may be electrically connected with four scan lines and one emission control line.
In an embodiment, for example, as illustrated in FIG. 4, the pixels PX in a first row may be connected with the scan lines GIL1, GCL1, GWL1, and GWL2 and the emission control line EML1. In such an embodiment, the pixels PX in a j-th row may be connected with the scan lines GILj, GCLj, GWLj, and GWLj+1 and the emission control line EMLj. Each of the plurality of pixels PX includes a light emitting device ED (refer to FIG. 5) and a pixel circuit PXC (refer to FIG. 5) for controlling the emission of the light emitting device ED. The pixel circuit PXC may include one or more transistors and one or more capacitors. The scan driving circuit SD and the emission driving circuit EDC may include transistors formed through the same process as the pixel circuit PXC. Each of the plurality of pixels PX receives the first driving voltage ELVDD, the second driving voltage ELVSS, the first initialization voltage VINT1, and the second initialization voltage VINT2 from the voltage generator 300. The scan driving circuit SD receives the scan control signal SCS from the driving controller 100. The scan driving circuit SD may output scan signals to the scan lines GIL1 to GILn, GCL1 to GCLn, and GWL1 to GWLn+1 in response to the scan control signal SCS. A circuit configuration and an operation of the scan driving circuit SD will be described in detail later. The driving controller 100 according to an embodiment may determine the operating mode based on the input image signal RGB, may divide the display panel DP into the first display area DA1 (refer to FIG. 1) and the second display area DA2 (refer to FIG. 1) based on the determined operating mode, and may set driving frequencies of the first display area DA1 and the second display area DA2 independently of each other. In an embodiment, for example, in the normal frequency mode, the driving controller 100 drives the first display area DA1 and the second display area DA2 at the normal frequency (e.g., 120 Hz). In such an embodiment, in the multi-frequency mode, the driving controller 100 may drive the first display area DA1 at the first driving frequency (e.g., 120 Hz) and may drive the second display area DA2 at the second driving frequency (e.g., 1 Hz). FIG. 5 is an equivalent circuit diagram of a pixel according to an embodiment of the disclosure. FIG. 5 illustrates an embodiment of a pixel PXij which is connected with the i-th data line DLi of the data lines DL1 to DLm (refer to FIG. 4), the j-th scan lines GILj, GCLj, and GWLj and the (j+1)-th scan line GWLj+1 of the scan lines GIL1 to GILn, GCL1 to GCLn, and GWL1 to GWLn+1 (refer to FIG. 4), and the j-th emission control line EMLj of the emission control lines EML1 to EMLn (refer to FIG. 4). A circuit configuration of each of the plurality of pixels PX illustrated in FIG. 4 may be identical to the equivalent circuit configuration of the pixel PXij illustrated in FIG. 5. In a display device according to an embodiment, the pixel PXij includes a pixel circuit PXC and at least one light emitting device ED. In an embodiment, the light emitting device ED may be an organic light emitting diode. The pixel circuit PXC includes first to seventh transistors T1, T2, T3, T4, T5, T6, and T7 and a capacitor Cst. In an embodiment, the third and fourth transistors T3 and T4 of the first to seventh transistors T1 to T7 are N-type transistors including an oxide semiconductor layer, and each of the first, second, fifth, sixth, and seventh transistors T1, T2, T5, T6, and T7 is a P-type transistor including a low-temperature polycrystalline silicon (LTPS) semiconductor layer.
However, the disclosure is not limited thereto. In an alternative embodiment, for example, all the first to seventh transistors T1 to T7 may be P-type transistors or N-type transistors. In an embodiment, at least one selected from the first to seventh transistors T1 to T7 may be an N-type transistor, and the remaining transistors may be P-type transistors. However, the pixel circuit configuration according to embodiments of the disclosure is not limited to FIG. 5. The pixel circuit PXC illustrated in FIG. 5 is only an example. In an embodiment, for example, the configuration of the pixel circuit PXC may be modified and implemented. The j-th scan lines GILj, GCLj, and GWLj may respectively transfer scan signals GIj, GCj, and GWj, and the (j+1)-th scan line GWLj+1 may transfer a (j+1)-th scan signal GWj+1. The emission control line EMLj transfers an emission signal EMj, and the i-th data line DLi transfers an i-th data signal Di. In the following description, the i-th data signal Di is referred to as a "data signal Di". The data signal Di may have a voltage level corresponding to the input image signal RGB input to the display device DD (refer to FIG. 4) or a voltage level corresponding to a bias voltage. The bias voltage will be described in detail later. First to fourth driving voltage lines VL1, VL2, VL3, and VL4 may respectively transfer the first driving voltage ELVDD, the second driving voltage ELVSS, the first initialization voltage VINT1, and the second initialization voltage VINT2. The first transistor T1 includes a first electrode connected with the first driving voltage line VL1 through the fifth transistor T5, a second electrode electrically connected with an anode of the light emitting device ED through the sixth transistor T6, and a gate electrode connected with a first end of the capacitor Cst. The first transistor T1 may receive the data signal Di transferred through the data line DLi based on a switching operation of the second transistor T2 and may supply a driving current Id to the light emitting device ED. The second transistor T2 includes a first electrode connected with the data line DLi, a second electrode connected with the first electrode of the first transistor T1, and a gate electrode connected with the scan line GWLj. The second transistor T2 may be turned on based on the scan signal GWj transferred through the scan line GWLj and may transfer the data signal Di from the data line DLi to the first electrode of the first transistor T1. The third transistor T3 includes a first electrode connected with the gate electrode of the first transistor T1, a second electrode connected with the second electrode of the first transistor T1, and a gate electrode connected with the scan line GCLj. The third transistor T3 may be turned on based on the scan signal GCj transferred through the scan line GCLj, and thus, the gate electrode and the second electrode of the first transistor T1 may be connected with each other, that is, the first transistor T1 may be diode-connected. The fourth transistor T4 includes a first electrode connected with the gate electrode of the first transistor T1, a second electrode connected with the third driving voltage line VL3 through which the first initialization voltage VINT1 is transferred, and a gate electrode connected with the scan line GILj.
The fourth transistor T4 may be turned on based on the scan signal GIj transferred through the scan line GILj, and thus, the first initialization voltage VINT1 may be transferred to the gate electrode of the first transistor T1, such that a voltage of the gate electrode of the first transistor T1 may be initialized. This operation may be referred to as an "initialization operation". The fifth transistor T5 includes a first electrode connected with the first driving voltage line VL1, a second electrode connected with the first electrode of the first transistor T1, and a gate electrode connected with the emission control line EMLj. The sixth transistor T6 includes a first electrode connected with the second electrode of the first transistor T1, a second electrode connected with the anode of the light emitting device ED, and a gate electrode connected with the emission control line EMLj. The fifth transistor T5 and the sixth transistor T6 may be simultaneously turned on based on the emission signal EMj transferred through the emission control line EMLj, such that the first driving voltage ELVDD may be compensated for through the diode-connected transistor T1 to be supplied to the light emitting device ED. The seventh transistor T7 includes a first electrode connected with the second electrode of the sixth transistor T6, a second electrode connected with the fourth driving voltage line VL4, and a gate electrode connected with the scan line GWLj+1. The seventh transistor T7 is turned on based on the scan signal GWj+1 transferred through the scan line GWLj+1 and bypasses a current of the anode of the light emitting device ED to the fourth driving voltage line VL4. The first end of the capacitor Cst is connected with the gate electrode of the first transistor T1 as described above, and a second end of the capacitor Cst is connected with the first driving voltage line VL1. A cathode of the light emitting device ED may be connected with the second driving voltage line VL2 that transfers the second driving voltage ELVSS. A structure of the pixel PXij according to an embodiment is not limited to the structure illustrated in FIG. 5. In an embodiment, for example, in one pixel, the number of transistors, the number of capacitors, and the connection relationship thereof may be variously modified. FIG. 6 is a timing diagram for describing an operation of the pixel illustrated in FIG. 5. An operation of a display device according to an embodiment will be described with reference to FIGS. 5 and 6. Referring to FIGS. 5 and 6, the scan signal GIj of a high level is provided through the scan line GILj during the initialization period within one frame Fs. When the fourth transistor T4 is turned on in response to the scan signal GIj of the high level, the first initialization voltage VINT1 is supplied to the gate electrode of the first transistor T1 through the fourth transistor T4 such that the first transistor T1 is initialized. Next, when the scan signal GCj of the high level is supplied through the scan line GCLj during a data programming and compensation period, the third transistor T3 is turned on. The first transistor T1 is diode-connected by the third transistor T3 thus turned on and is forward-biased. Also, the second transistor T2 is turned on by the scan signal GWj of a low level. As such, a compensation voltage, which is obtained by subtracting a threshold voltage of the first transistor T1 from a voltage of the data signal Di supplied from the data line DLi, is applied to the gate electrode of the first transistor T1.
That is, a gate voltage applied to the gate electrode of the first transistor T1 may be the compensation voltage. In this case, as the first driving voltage ELVDD and the compensation voltage are respectively applied to opposite ends of the capacitor Cst, charges corresponding to a voltage difference between the opposite ends of the capacitor Cst may be stored in the capacitor Cst. During the data programming and compensation period, the seventh transistor T7 is turned on in response to the scan signal GWj+1 of the low level transferred through the scan line GWLj+1. A portion of the driving current Id may be drained through the seventh transistor T7 as a bypass current Ibp. In the case where the light emitting device ED emits light under the condition that a minimum current of the first transistor T1 flows as a driving current for the purpose of displaying a black image, the black image may not be normally displayed. Accordingly, the seventh transistor T7 of the pixel PXij according to an embodiment of the disclosure may drain a portion of the minimum current of the first transistor T1, as the bypass current Ibp, to a current path which is different from a current path to the light emitting device ED. Herein, the minimum current of the first transistor T1 means a current flowing under the condition that a gate-source voltage of the first transistor T1 is smaller than the threshold voltage, that is, the first transistor T1 is turned off. As a minimum driving current (e.g., a current of 10 pA or less) is transferred to the light emitting device ED, with the first transistor T1 turned off, an image of black luminance is expressed. When the minimum driving current for displaying a black image flows, the influence of a bypass transfer of the bypass current Ibp may be great. However, when a large driving current for displaying an image such as a normal image or a white image flows, there may be almost no influence of the bypass current Ibp. Accordingly, when a driving current for displaying a black image flows, a light emitting current Ied of the light emitting device ED, which corresponds to a result of subtracting the bypass current Ibp drained through the seventh transistor T7 from the driving current Id, may have a minimum current amount to such an extent as to accurately express a black image. Accordingly, a contrast ratio may be improved by accurately implementing an image of black luminance by using the seventh transistor T7. In an embodiment, the bypass signal is the scan signal GWj+1 of the low level, but is not limited thereto. Next, during an emission period, the emission signal EMj supplied from the emission control line EMLj transitions from the high level to the low level. During the emission period, the fifth transistor T5 and the sixth transistor T6 are turned on by the emission signal EMj of the low level. In this case, the driving current Id is generated depending on a difference between the gate voltage of the gate electrode of the first transistor T1 and the first driving voltage ELVDD and is supplied to the light emitting device ED through the sixth transistor T6. That is, the current Ied flows through the light emitting device ED.
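The effect of this compensation can be checked numerically. The following sketch uses a textbook square-law PMOS model rather than anything stated in the patent, and the voltage values are invented for illustration:

```python
# Numeric check that writing (Vdata - |Vth|) onto the gate cancels T1's
# threshold voltage: with the source at ELVDD during emission, the overdrive
# becomes (ELVDD - Vdata), independent of |Vth|. Square-law model, k in A/V^2.

def driving_current(elvdd: float, vdata: float, vth_abs: float, k: float = 1e-6) -> float:
    vgate = vdata - vth_abs                 # compensation voltage on the gate
    overdrive = (elvdd - vgate) - vth_abs   # Vsg - |Vth| = ELVDD - Vdata
    return 0.5 * k * max(overdrive, 0.0) ** 2

# Two pixels whose T1 thresholds differ still draw the same driving current Id:
print(driving_current(elvdd=4.6, vdata=3.0, vth_abs=1.0))  # ~1.28e-06
print(driving_current(elvdd=4.6, vdata=3.0, vth_abs=1.4))  # ~1.28e-06
```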
FIG. 7 illustrates scan signals GI1 to GI3840 in the normal frequency mode NFM and the multi-frequency mode MFM. An embodiment of the scan signals GI1 to GI3840 is illustrated in FIG. 7. In such an embodiment, the frequency of the scan signals GI1 to GI3840 is 120 Hz in the normal frequency mode NFM. In an embodiment, in the multi-frequency mode MFM, the scan signals GI1 to GI1920 correspond to the first display area DA1 of the display device DD illustrated in FIG. 1, and the scan signals GI1921 to GI3840 correspond to the second display area DA2 of the display device DD. In the multi-frequency mode MFM, the scan signals GI1 to GI1920 may be activated to the high level in each of the first to 120th frames F1 to F120, and the scan signals GI1921 to GI3840 may be activated to the high level only in the first frame F1. That is, in the multi-frequency mode MFM, the frequency of each of the scan signals GI1 to GI1920 is 120 Hz, and the frequency of each of the scan signals GI1921 to GI3840 may be 1 Hz. In such an embodiment, the first frame F1 may correspond to a driving period DRP in which the second display area DA2 is driven, and the second to 120th frames F2 to F120 may correspond to a non-driving period NDRP in which the second display area DA2 is not driven. Accordingly, the first display area DA1 in which a video is displayed may be driven in response to the scan signals GI1 to GI1920 of the first driving frequency (e.g., 120 Hz), and the second display area DA2 in which a still image is displayed may be driven in response to the scan signals GI1921 to GI3840 of the second driving frequency (e.g., 1 Hz). In such an embodiment, as the first display area DA1 in which a video is displayed is driven by using the first driving frequency, the display quality of the video may be maintained. In such an embodiment, because the second display area DA2 in which a still image is displayed is driven by using the second driving frequency lower than the first driving frequency, power consumption may be reduced. FIG. 7 illustrates only an embodiment of the scan signals GI1 to GI3840. However, as in the scan signals GI1 to GI3840, the scan driving circuit SD (refer to FIG. 4) may generate scan signals GC1 to GC3840. FIG. 8 illustrates scan signals GW1 to GW3841 in the normal frequency mode NFM and the multi-frequency mode MFM. An embodiment of the scan signals GW1 to GW3841 is illustrated in FIG. 8. In such an embodiment, the frequency of the scan signals GW1 to GW3841 is 120 Hz in the normal frequency mode NFM. In such an embodiment, the frequency of the scan signals GW1 to GW3841 is also 120 Hz in the multi-frequency mode MFM. That is, the frequency of the scan signals GW1 to GW3841 in the multi-frequency mode MFM is the same as that in the normal frequency mode NFM. Referring to FIGS. 1, 4, 7, and 8, in the normal frequency mode NFM, the driving controller 100 provides the data driving circuit 200 with the output image signal DATA corresponding to the input image signal RGB. Accordingly, voltage levels of data signals that are provided to the data lines DL1 to DLm may be determined by the output image signal DATA. During the first frame F1 of the multi-frequency mode MFM, the driving controller 100 provides the data driving circuit 200 with the output image signal DATA corresponding to the input image signal RGB. When the first display area DA1 is driven in each of the second to 120th frames F2 to F120 of the multi-frequency mode MFM, the driving controller 100 provides the data driving circuit 200 with the output image signal DATA corresponding to the input image signal RGB. When the second display area DA2 is driven in each of the second to 120th frames F2 to F120 of the multi-frequency mode MFM, the driving controller 100 provides the data driving circuit 200 with the output image signal DATA corresponding to a bias signal.
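The selection just described, between image data and the bias signal, can be summarized in a short sketch (illustrative only; the function and string constants are ours, not the patent's):

```python
# Per-frame output of the driving controller in the 120 Hz / 1 Hz example.

def controller_output(area: str, frame: int) -> str:
    """area: 'DA1' or 'DA2'; frame: 1..120 within one second."""
    if area == "DA1":
        return "DATA"                        # image data in every frame
    return "DATA" if frame == 1 else "BIAS"  # DA2: image data only in F1

assert controller_output("DA2", 1) == "DATA"
assert controller_output("DA2", 2) == "BIAS"  # GI/GC scans stay inactive; bias only
assert all(controller_output("DA1", f) == "DATA" for f in range(1, 121))
```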
Referring back to FIG. 5, in the normal frequency mode NFM, the data signal Di corresponding to the input image signal RGB may be provided to the i-th data line DLi. During the first frame F1 of the multi-frequency mode MFM, the data signal Di corresponding to the input image signal RGB may be provided to the i-th data line DLi. During the second to 120th frames F2 to F120 of the multi-frequency mode MFM, the data signal Di corresponding to the bias signal may be provided to the i-th data line DLi. During the second to 120th frames F2 to F120 of the multi-frequency mode MFM, the scan signals GIj and GCj may be maintained at the low level, which is a disable level (refer to FIG. 7), and the valid data signal Di may not be provided to the i-th data line DLi. The threshold voltage of the first transistor T1 may also change depending on a gate-source voltage of the first transistor T1. In an embodiment, for example, the threshold voltage of the first transistor T1 may have a first average level during the low-to-high transition of the gate-source voltage and may have a second average level different from the first average level during the high-to-low transition of the gate-source voltage. Different current-voltage (I-V) characteristic curves may be drawn due to the first average level and the second average level. The dependency of the threshold voltage on the gate-source voltage may be referred to as a "hysteresis of a transistor". According to the hysteresis characteristic of the first transistor T1, the driving current of the first transistor T1, which is determined by the data signal Di of the current frame, may be affected by the data signal Di applied in the previous frame. In an embodiment, for example, where the data signal Di for displaying an image of a low gray scale is provided in a previous frame and then the data signal Di for displaying an image of a specific gray scale is provided in a current frame, an image of a gray scale higher than the specific gray scale of the current frame may be displayed by the light emitting device ED. In an embodiment, where the data signal Di for displaying an image of a high gray scale is provided in a previous frame and then the data signal Di for displaying an image of a specific gray scale is provided in a current frame, an image of a gray scale lower than the specific gray scale of the current frame may be displayed by the light emitting device ED. The issue due to the hysteresis characteristic of the first transistor T1 described above may not occur when a change period of the data signal Di is short, that is, when a driving frequency of the display device DD is high. However, as the driving frequency of the display device DD decreases, the change period of the data signal Di may become longer. Accordingly, a change in luminance according to the hysteresis characteristic of the first transistor T1 may be perceived by the user when the display device DD is driven at a low driving frequency. In an embodiment, during the second to 120th frames F2 to F120 of the multi-frequency mode MFM, the data signal Di of a given voltage level corresponding to the bias signal may be provided to the first electrode of the first transistor T1. The gate-source voltage of the first transistor T1 may be initialized by providing a specific voltage to the first electrode of the first transistor T1. Accordingly, a change in luminance of the light emitting device ED due to the hysteresis characteristic of the first transistor T1 may decrease.
However, in a case where a frequency difference between the first display area DA1 and the second display area DA2 is great in the multi-frequency mode MFM and where the operating mode changes from the multi-frequency mode MFM to the normal frequency mode NFM after the multi-frequency mode MFM is maintained for a long time, an afterimage may be visually perceived at a boundary of the second display area DA2, which is adjacent to the first display area DA1. FIG. 9 is a block diagram illustrating a configuration of a driving controller according to an embodiment of the disclosure. Referring to FIGS. 4 and 9, an embodiment of the driving controller 100 includes an operating mode determiner 110 and a signal generator 120. The operating mode determiner 110 determines a frequency mode based on the input image signal RGB and the control signal CTRL and outputs a mode signal MD corresponding to the determined frequency mode. In an embodiment, the operating mode determiner 110 may determine the operating mode based on mode information included in the control signal CTRL provided from the outside (e.g., a main processor or a graphics processor). In an embodiment, for example, while a specific application program is executed, the operating mode determiner 110 may output the mode signal MD indicating the multi-frequency mode. The mode signal MD may include information about the first driving frequency of the first display area DA1 and the second driving frequency of the second display area DA2, in addition to information indicating whether the operating mode is the normal frequency mode or the multi-frequency mode. In an embodiment, the mode signal MD may include information about a start location and/or a boundary area of the second display area DA2. The signal generator 120 outputs the output image signal DATA, the data control signal DCS, the emission control signal ECS, and the scan control signal SCS in response to the input image signal RGB, the control signal CTRL, and the mode signal MD. When the mode signal MD indicates the normal frequency mode, the signal generator 120 may output the output image signal DATA, the data control signal DCS, the emission control signal ECS, and the scan control signal SCS such that the first display area DA1 (refer to FIG. 1) and the second display area DA2 (refer to FIG. 1) are driven at the first driving frequency. When the mode signal MD indicates the multi-frequency mode, the signal generator 120 may output the output image signal DATA, the data control signal DCS, the emission control signal ECS, and the scan control signal SCS such that the first display area DA1 is driven at the first driving frequency and the second display area DA2 is driven at the second driving frequency. While the mode signal MD indicates the multi-frequency mode, the signal generator 120 may sequentially output the output image signal DATA, a first bias signal BIAS1, and a second bias signal BIAS2. The data driving circuit 200, the scan driving circuit SD, and the emission driving circuit EDC operate in response to the output image signal DATA, the data control signal DCS, the emission control signal ECS, and the scan control signal SCS such that an image is displayed in the display panel DP.
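One plausible way to hold the fields of the mode signal MD enumerated above is sketched below; the structure and field names are our assumption, not the patent's signal format, and the example values echo the 120 Hz / 1 Hz embodiment with scan lines GI1921 onward belonging to the second display area:

```python
# Hypothetical container for the information carried by the mode signal MD.
from dataclasses import dataclass
from typing import Optional

@dataclass
class ModeSignal:
    multi_frequency: bool          # normal frequency mode vs. multi-frequency mode
    first_driving_hz: int          # driving frequency of the first display area DA1
    second_driving_hz: int         # driving frequency of the second display area DA2
    da2_start_line: Optional[int]  # start location of the second display area
    boundary_lines: Optional[int]  # number of horizontal lines in the boundary area

md = ModeSignal(multi_frequency=True, first_driving_hz=120,
                second_driving_hz=1, da2_start_line=1921, boundary_lines=16)
```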
FIG. 10 is a diagram for describing a driving method for decreasing a luminance difference due to an afterimage at a boundary between the first and second display areas DA1 and DA2. Referring to FIG. 10, an embodiment of the display area DA of the display device DD may include a first horizontal line L1 to an n-th horizontal line Ln. In an embodiment, for example, as illustrated in FIG. 4, the pixels PX belonging to the first horizontal line L1 may be connected with the scan lines GIL1, GCL1, GWL1, and GWL2 and the emission control line EML1. In such an embodiment, as illustrated in FIG. 4, the pixels PX belonging to the j-th horizontal line Lj may be connected with the scan lines GILj, GCLj, GWLj, and GWLj+1 and the emission control line EMLj. The first display area DA1 may include the first horizontal line L1 to the k-th horizontal line Lk, and the second display area DA2 may include the (k+1)-th horizontal line Lk+1 to the n-th horizontal line Ln. A portion of the second display area DA2, which is adjacent to the first display area DA1, that is, the (k+1)-th horizontal line Lk+1 to the (k+16)-th horizontal line Lk+16, may be provided for stress boundary diffusion and may be referred to as a "boundary area BR". Hereinafter, embodiments where the number of horizontal lines included in the boundary area BR is 16 will be described in detail, but the disclosure is not limited thereto. In an embodiment, as shown in FIG. 10, the boundary area BR may be included in the second display area DA2, but the disclosure is not limited thereto. In an alternative embodiment, for example, the boundary area BR may include a portion of the first display area DA1 and a portion of the second display area DA2. In another alternative embodiment, the boundary area BR may include only a portion of the first display area DA1. The remaining portion of the second display area DA2 other than the boundary area BR may be referred to as a non-boundary area NBR. In the multi-frequency mode MFM illustrated in FIG. 7, a data signal of a voltage level Vdata (shown in FIG. 10) corresponding to the output image signal DATA may be provided to the pixels PX (i.e., first pixels) of the first display area DA1. In the driving period DRP of the multi-frequency mode MFM, a data signal of the voltage level Vdata corresponding to the output image signal DATA may be provided to the pixels PX (i.e., second pixels) of the second display area DA2. In the non-driving period NDRP of the multi-frequency mode MFM, a data signal of a first voltage level Vbias1 (shown in FIG. 10) corresponding to the first bias signal BIAS1 may be provided to pixels of the boundary area BR belonging to the second display area DA2. In the non-driving period NDRP of the multi-frequency mode MFM, a data signal of a second voltage level Vbias2 (shown in FIG. 10) corresponding to the second bias signal BIAS2 different from the first bias signal BIAS1 may be provided to pixels of the non-boundary area NBR belonging to the second display area DA2. The first voltage level Vbias1 and the second voltage level Vbias2 may be different from each other. Data signals that are provided to the pixels PX of the (k+1)-th horizontal line Lk+1 to the (k+16)-th horizontal line Lk+16, that is, pixels of the boundary area BR, may have the same voltage level as or different voltage levels from each other. In an embodiment, a voltage level of data signals that are provided to the pixels PX of the (k+1)-th horizontal line Lk+1 may be (Vp+Vo1), a voltage level of data signals that are provided to the pixels PX of the (k+2)-th horizontal line Lk+2 may be (Vp+Vo2), and a voltage level of data signals that are provided to the pixels PX of the (k+16)-th horizontal line Lk+16 may be (Vp+Vo16).
When a reference voltage level Vp and the second voltage level Vbias2 have the relationship of "Vp < Vbias2", the offset voltages Vo1 to Vo16 may have the following relationship: Vo1 < Vo2 < Vo3 < . . . < Vo16. In an embodiment, each of the offset voltages Vo1 to Vo16 may be greater than or equal to "0". Also, the voltage level "Vp+Vo16" of the data signals that are provided to the pixels PX of the (k+16)-th horizontal line Lk+16 may be smaller than or equal to the second voltage level Vbias2. FIG. 11 is a diagram illustrating a relationship between a voltage level of a data signal and a fusion flicker index (FFI) according to a gray scale level of the output image signal DATA. FIG. 11 shows a relationship between the fusion flicker index (FFI) and a voltage level of the data signal Di (refer to FIG. 5) provided to the first electrode of the first transistor T1 (refer to FIG. 5) during the non-driving period NDRP when a data signal to be provided to the second display area DA2 is at a 23 gray scale level 23G, at a 32 gray scale level 32G, at a 64 gray scale level 64G, at a 128 gray scale level 128G, and at a 255 gray scale level 255G. Referring to FIGS. 9 and 11, the second voltage level Vbias2 may be set to a voltage level at which the fusion flicker index (FFI) of all the gray scales 23G, 32G, 64G, 128G, and 255G is minimum when the second driving frequency is at the lowest level. The lowest voltage at which the fusion flicker index (FFI) of all the gray scales 23G, 32G, 64G, 128G, and 255G is smaller than a reference level FFI REF may be selected as the reference voltage level Vp. The reference level FFI REF may be set to a level at which the user does not perceive a flicker. FIG. 12 illustrates the data signal Di provided to the i-th data line DLi during the non-driving period NDRP of the multi-frequency mode MFM. Referring to FIG. 12, because the first display area DA1 is driven at the first driving frequency in the multi-frequency mode MFM, the data signal Di that is provided to the i-th data line DLi while the first display area DA1 is driven has the voltage level Vdata corresponding to the output image signal DATA. The data signal Di that is provided to the i-th data line DLi during the non-driving period NDRP (refer to FIG. 7) of the multi-frequency mode MFM may have the second voltage level Vbias2 corresponding to the second bias signal BIAS2. The gate-source voltage of the first transistor T1 (refer to FIG. 5) may be initialized by providing the second voltage level Vbias2 corresponding to the second bias signal BIAS2 to the first electrode of the first transistor T1 during the non-driving period NDRP. Accordingly, a change in luminance of the light emitting device ED due to the hysteresis characteristic of the first transistor T1 may decrease. However, in a case where a frequency difference between the first display area DA1 and the second display area DA2 is great in the multi-frequency mode MFM and where the operating mode changes from the multi-frequency mode MFM to the normal frequency mode NFM after the multi-frequency mode MFM is maintained for a long time, an afterimage may be visually perceived at a boundary of the second display area DA2, which is adjacent to the first display area DA1.
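Before turning to the boundary-area remedy, here is a minimal sketch of the bias-level selection described above with reference to FIG. 11. It assumes the FFI curves are available as sampled arrays over an ascending list of candidate voltages; the names, data layout, and example numbers are our assumptions:

```python
# Choose Vbias2 and Vp from FFI samples. `ffi_by_gray[g][v]` is the fusion
# flicker index of gray level g at the candidate voltage `voltages[v]`;
# `voltages` is assumed to be sorted in ascending order.

def pick_bias_levels(voltages, ffi_by_gray, ffi_ref):
    worst = [max(curve[v] for curve in ffi_by_gray.values())
             for v in range(len(voltages))]
    # Vp: the lowest voltage at which every gray level's FFI is below FFI REF
    vp = next(voltages[v] for v in range(len(voltages)) if worst[v] < ffi_ref)
    # Vbias2: the voltage at which the worst-case FFI over all grays is minimal
    vbias2 = voltages[min(range(len(voltages)), key=worst.__getitem__)]
    return vp, vbias2

volts = [4.0, 4.5, 5.0, 5.5, 6.0]                          # invented samples
ffi = {"23G": [9, 6, 3, 2, 1], "255G": [8, 5, 4, 2, 2]}
print(pick_bias_levels(volts, ffi, ffi_ref=5))             # (5.0, 5.5)
```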
FIG. 13A illustrates the data signal Di provided to the i-th data line DLi during the non-driving period NDRP of the multi-frequency mode MFM. Referring to FIG. 13A, because the first display area DA1 is driven at the first driving frequency in the multi-frequency mode MFM, the data signal Di that is provided to the i-th data line DLi while the first display area DA1 is driven has the voltage level Vdata corresponding to the output image signal DATA. In the non-driving period NDRP (refer to FIG. 7) of the multi-frequency mode MFM, the data signal Di that is provided to the i-th data line DLi while the boundary area BR is driven may have the first voltage level Vbias1 corresponding to the first bias signal BIAS1. In the non-driving period NDRP of the multi-frequency mode MFM, the data signal Di that is provided to the i-th data line DLi while the non-boundary area NBR is driven may have the second voltage level Vbias2 corresponding to the second bias signal BIAS2. In an embodiment, the first voltage level Vbias1 may be lower than the second voltage level Vbias2. FIG. 13B is an enlarged diagram of the data signal Di provided to the i-th data line DLi while the boundary area BR illustrated in FIG. 13A is driven. Referring to FIGS. 10 and 13B, in an embodiment where the boundary area BR includes the (k+1)-th horizontal line Lk+1 to the (k+16)-th horizontal line Lk+16, a voltage level of the data signal Di may change stepwise from "Vp+Vo1" to "Vp+Vo16" while the boundary area BR is driven. That is, the data signal Di having the voltage level of "Vp+Vo1" may be provided to the pixels PX of the (k+1)-th horizontal line Lk+1, the data signal Di having the voltage level of "Vp+Vo2" may be provided to the pixels PX of the (k+2)-th horizontal line Lk+2, and the data signal Di having the voltage level of "Vp+Vo16" may be provided to the pixels PX of the (k+16)-th horizontal line Lk+16. That is, the first voltage level Vbias1 increases stepwise from the (k+1)-th horizontal line Lk+1 to the (k+16)-th horizontal line Lk+16. In the pixels PX disposed in the boundary area BR, as a voltage level of the bias signal provided to the first electrode of the first transistor T1 (refer to FIG. 5) is set differently for each horizontal line, the luminance in the boundary area BR due to the afterimage may change gradually. Even though a luminance difference between the first display area DA1 and the second display area DA2 due to the afterimage occurs, the luminance may change gradually in the boundary area BR, and thus, the degree to which the user perceives the luminance difference may be minimized. In an embodiment, the first voltage level Vbias1 may be greater than or equal to the reference voltage level Vp (refer to FIG. 11) and lower than the second voltage level Vbias2. An embodiment in which the first voltage levels Vbias1 of the (k+1)-th horizontal line Lk+1 to the (k+16)-th horizontal line Lk+16 are different from each other is illustrated in FIG. 13B, but the disclosure is not limited thereto. In an alternative embodiment, for example, in the (k+1)-th horizontal line Lk+1 to the (k+16)-th horizontal line Lk+16, the first voltage level Vbias1 may be set differently in units of two horizontal lines.
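One possible monotone ramp satisfying the constraints above (Vo1 < . . . < Vo16, each offset nonnegative, and Vp + Vo16 at or below Vbias2) is linear; the sketch below also covers the variant that changes the level in units of two horizontal lines. The voltage values are invented for illustration, and a linear ramp is only one choice among many:

```python
# Stepwise boundary-area bias of FIG. 13B: lines Lk+1..Lk+16 get Vp plus a
# monotonically increasing offset that ends at or below Vbias2.

def boundary_bias_levels(vp, vbias2, num_lines=16, lines_per_step=1):
    steps = num_lines // lines_per_step
    offsets = [(vbias2 - vp) * (s + 1) / steps for s in range(steps)]
    # each offset is held for `lines_per_step` consecutive horizontal lines
    return [vp + offsets[i // lines_per_step] for i in range(num_lines)]

levels = boundary_bias_levels(vp=5.0, vbias2=6.6)          # one level per line
pairs = boundary_bias_levels(5.0, 6.6, lines_per_step=2)   # two lines per step
assert all(a <= b for a, b in zip(levels, levels[1:]))     # nondecreasing ramp
assert levels[-1] <= 6.6                                   # Vp + Vo16 <= Vbias2
```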
FIGS. 14A, 14B, and 14C illustrate the data signal Di provided to the i-th data line DLi during the non-driving period NDRP of the multi-frequency mode MFM. Referring to FIGS. 14A, 14B, and 14C, because the first display area DA1 is driven at the first driving frequency in the multi-frequency mode MFM, the data signal Di that is provided to the i-th data line DLi while the first display area DA1 is driven has the voltage level Vdata corresponding to the output image signal DATA. In the non-driving period NDRP (refer to FIG. 7) of the multi-frequency mode MFM, the data signal Di that is provided to the i-th data line DLi while the boundary area BR is driven may have the first voltage level Vbias1 corresponding to the first bias signal BIAS1. The first voltage level Vbias1 may change in units of a given number of frames. In an embodiment, for example, the first voltage level Vbias1 may be Vp1 during the second frame F2 (refer to FIG. 7) belonging to the non-driving period NDRP, may be Vp2 during the third frame F3 (refer to FIG. 7) belonging to the non-driving period NDRP, and may be Vpk during the k-th frame Fk (k is a natural number greater than 1 and less than or equal to 120) belonging to the non-driving period NDRP. In an embodiment, for example, when k is 10, the first voltage level Vbias1 may change for each frame so as to sequentially have Vp1, Vp2, Vp3, Vp4, Vp5, Vp6, Vp7, Vp8, Vp9, Vp10, Vp1, Vp2, and so on. In such an embodiment, voltage levels of data signals that are provided to pixels of all horizontal lines in the boundary area BR may be the same as the first voltage level Vbias1. In the pixels PX disposed in the boundary area BR, as a voltage level of the bias signal provided to the first electrode of the first transistor T1 (refer to FIG. 5) changes periodically, for example, every frame, the afterimage phenomenon in the boundary area BR may decrease. Even though a luminance difference between the first display area DA1 and the second display area DA2 due to the afterimage occurs, the afterimage phenomenon may decrease in the boundary area BR, and thus, the degree to which the user perceives the luminance difference may be minimized. In an embodiment, a change period of the first voltage level Vbias1 may be variously modified. In an embodiment, for example, the first voltage level Vbias1 may change in units of two frames. In such an embodiment, the first voltage level Vbias1 may change for each frame so as to sequentially and repeatedly have Vp1, Vp1, Vp2, Vp2, Vp3, Vp3, Vp4, Vp4, Vp5, Vp5, Vp6, and Vp6. In the non-driving period NDRP of the multi-frequency mode MFM, the data signal Di that is provided to the i-th data line DLi while the non-boundary area NBR is driven may have the second voltage level Vbias2 corresponding to the second bias signal BIAS2. In an embodiment, the first voltage level Vbias1 may be lower than the second voltage level Vbias2. FIG. 15 is a flowchart illustrating an operation of a driving controller according to an embodiment of the disclosure. Referring to FIGS. 9 and 15, initially (e.g., after power-up), the operating mode of the operating mode determiner 110 of the driving controller 100 may be set to the normal frequency mode. The operating mode determiner 110 determines the frequency mode in response to the input image signal RGB and the control signal CTRL. In an embodiment, for example, in one frame, when a part (e.g., an image signal corresponding to the first display area DA1 (refer to FIG. 1)) of the input image signal RGB is a video and the remaining part (e.g., an image signal corresponding to the second display area DA2 (refer to FIG. 1)) of the image signal is a still image (in operation S100), the operating mode determiner 110 changes the operating mode to the multi-frequency mode and outputs the mode signal MD corresponding to the determined frequency mode (in operation S110).
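A compact sketch of the decision in operations S100/S110 (and the return to the normal frequency mode in operation S250, described below) follows. The per-area content classification is assumed to be given, and the behavior when every area shows a still image is our guess, since FIG. 15 does not spell it out:

```python
# Sketch of the mode decision of FIG. 15. `area_is_video` maps each display
# area to whether its part of the input image signal RGB is a video.

def next_mode(area_is_video: dict) -> str:
    kinds = set(area_is_video.values())
    # mixed content within one frame -> multi-frequency mode (S100 -> S110);
    # a uniform frame (e.g., all video, cf. S250) -> normal frequency mode
    return "MFM" if kinds == {True, False} else "NFM"

assert next_mode({"DA1": True, "DA2": False}) == "MFM"  # video + still image
assert next_mode({"DA1": True, "DA2": True}) == "NFM"   # whole frame is a video
```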
The mode signal MD may include information about the first driving frequency of the first display area DA1and the second driving frequency of the second display area DA2, in addition to information indicating whether the operating mode is the normal frequency mode or the multi-frequency mode. Also, the mode signal MD may include information about a start location and/or a boundary area of the second display area DA2. FIG.16is a flowchart illustrating an operation of a driving controller in a multi-frequency mode according to an embodiment of the disclosure. Referring toFIGS.9,10, and16, during the multi-frequency mode, the first display area DA1may be driven at the first driving frequency, and the second display area DA2may be driven at the second driving frequency lower than the first driving frequency. While the mode signal MD indicates the multi-frequency mode, the signal generator120of the driving controller100may sequentially output the output image signal DATA, the first bias signal BIAS1, and the second bias signal BIAS2. When the first display area DA1is driven (in operation S200), the signal generator120outputs the output image signal DATA corresponding to the input image signal RGB (in operation S210). When the boundary area BR is driven (in operation S220), the signal generator120outputs the first bias signal BIAS1(in operation S230). When the non-boundary area NBR is driven (in operation S220), the signal generator120outputs the second bias signal BIAS2(in operation S240). When the input image signal RGB of the whole frame corresponds to a video, the operating mode determiner110changes the frequency mode to the normal frequency mode and outputs the mode signal MD corresponding to the determined frequency mode (in operation S250). In embodiments of the disclosure, when a video is displayed in a first display area and a still image is displayed in a second display area, a display device may operate in a multi-frequency mode in which the first display area is driven at a first driving frequency and the second display area is driven at a second driving frequency. In the multi-frequency mode, a given bias voltage may be provided to data lines of a boundary of the second display area, which is adjacent to the first display area. In such an embodiment, the reduction of a display quality may be effectively prevented by setting a voltage level of the bias voltage in a way such that a luminance difference due to an afterimage is not visually perceived at the boundary. The invention should not be construed as being limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete and will fully convey the concept of the invention to those skilled in the art. While the invention has been particularly shown and described with reference to embodiments thereof, it will be understood by those of ordinary skill in the art that various changes in form and details may be made therein without departing from the spirit or scope of the invention as defined by the following claims. | 62,239 |
11862074 | DETAILED DESCRIPTION In the specification, the expression that a first component (or region, layer, part, etc.) is “on”, “connected with”, or “coupled with” a second component means that the first component is directly on, connected with, or coupled with the second component or means that a third component is interposed therebetween. Like reference numerals refer to like components. Also, in drawings, the thickness, ratio, and dimension of components may be exaggerated for effectiveness of description of technical contents. The term “and/or” includes one or more combinations of the associated listed items. The terms “first”, “second”, etc. are used to describe various components, but the components are not limited by the terms. The terms are used only to differentiate one component from another component. For example, without departing from the scope and spirit of the present disclosure, a first component may be referred to as a second component, and similarly, the second component may be referred to as the first component. The articles “a,” “an,” and “the” are singular in that they have a single referent, but the use of the singular form in the specification should not preclude the presence of more than one referent. Also, the terms “under”, “beneath”, “on”, “above”, etc. are used to describe a relationship between components illustrated in a drawing. The terms are relative and are described with reference to a direction indicated in the drawing. It will be understood that the terms “include”, “comprise”, “have”, etc. specify the presence of features, numbers, steps, operations, elements, or components, described in the specification, or a combination thereof, not precluding the presence or additional possibility of one or more other features, numbers, steps, operations, elements, or components or a combination thereof. Hereinafter, embodiments of the present disclosure will be described with reference to accompanying drawings. FIG.1is a perspective view of a display device, according to an embodiment of the present disclosure. Referring toFIG.1, a portable terminal is illustrated as an example of a display device DD according to an embodiment of the present disclosure. The portable terminal may include a tablet personal computer (PC), a smartphone, a personal digital assistant (PDA), a portable multimedia player (PMP), a game console, a wristwatch-type electronic device, and the like. However, the present disclosure is not limited thereto. The present disclosure may be used for small and medium electronic devices such as a personal computer, a notebook computer, a kiosk, a car navigation unit, and a camera, in addition to large-sized electronic equipment such as a television or an outside billboard. The above embodiments are provided only as an example since the display device DD may be applied to other electronic device(s) without departing from the concept of the present disclosure. As shown inFIG.1, a display surface, on which an image IM is displayed, is parallel to a plane defined by a first direction DR1and a second direction DR2. The display device DD includes a plurality of separate areas on the display surface. The display surface includes a display area DA, in which the image IM is displayed, and a non-display area NDA adjacent to the display area DA. The non-display area NDA may be referred to as a bezel area. In an embodiment, no image is displayed in the non-display area NDA or no pixels are present in the non-display area NDA. 
For example, the display area DA may have a rectangular shape. The non-display area NDA surrounds the display area DA. Also, although not illustrated, for example, the display device DD may have a shape that is partially curved. As a result, one area of the display area DA may have a curved shape. A front surface (alternatively, an upper surface or a first surface) and a rear surface (alternatively, a lower surface or a second surface) of each of members are defined in a direction in which the image IM is displayed, that is, the third direction DR3. However, directions that the first, second, and third directions DR1, DR2, and DR3indicate may be relative in concept and may be changed to different directions. The display device DD according to an embodiment of the present disclosure may detect a user input applied from the outside. The user input includes various external inputs such as a touch of a part of a user's body, light, heat, pressure, or the like. FIG.2is an exploded perspective view of a display device, according to an embodiment of the present disclosure.FIG.2illustrates components of the display device DD simply to explain a stacked relationship between the components. As shown inFIG.2, the display device DD includes a window WM, a display module DM, and a lower case BC. The display module DM includes a display panel DP and an input sensing layer ISP. According to an embodiment of the present disclosure, the display panel DP may include a light emitting display panel. For example, the display panel DP may be an organic light emitting display panel, an inorganic light emitting display panel, or a quantum dot light emitting display panel. An emission layer of the organic light emitting display panel may include an organic light emitting material. An emission layer of the inorganic light emitting display panel may include an inorganic light emitting material. An emission layer of the quantum dot light emitting display panel may include a quantum dot, a quantum rod, or the like. Hereinafter, in an embodiment, the description is provided under the assumption that the display panel DP is an organic light emitting display panel. The display panel DP may output the image IM, and the output image IM may be displayed through the display surface IS. The input sensing layer ISP may be disposed on the display panel DP to sense an external input. The input sensing layer ISP may be directly disposed on the display panel DP. According to an embodiment of the present disclosure, the input sensing layer ISP may be formed on the display panel DP by a subsequent process. That is, when the input sensing layer ISP is directly disposed on the display panel DP, an inner adhesive film (not illustrated) is not interposed between the input sensing layer ISP and the display panel DP. However, the inner adhesive film may be interposed between the input sensing layer ISP and the display panel DP. In this case, the input sensing layer ISP is not manufactured together with the display panel DP through the subsequent processes. That is, the input sensing layer ISP may be manufactured through a process separate from that of the display panel DP and may then be fixed on an upper surface of the display panel DP by the inner adhesive film. The window WM may be formed of a transparent material capable of outputting the image IM. For example, the window WM may be formed of glass, sapphire, plastic, etc. While the window WM is illustrated as being a single layer, embodiments of the disclosure are not limited thereto.
For example, the window WM may include a plurality of layers. The non-display area NDA of the display device DD described above may correspond to an area that is defined by printing a material including a given color on one area of the window WM. As an example of the present disclosure, the window WM may include a light blocking pattern for defining the non-display area NDA. The light blocking pattern that is a colored organic film may be formed, for example, in a coating manner. The window WM may be coupled to the display module DM through an adhesive film. As an example of the present disclosure, the adhesive film may include an optically clear adhesive (OCA) film. However, the adhesive film is not limited thereto. For example, the adhesive film may include an adhesive or a sticking agent. For example, the adhesive film may include an optically clear resin (OCR) or a pressure sensitive adhesive (PSA) film. An anti-reflection layer may be further interposed between the window WM and the display module DM. The anti-reflection layer decreases the reflectivity of external light incident from above the window WM. The anti-reflection layer according to an embodiment of the present disclosure may include a retarder and a polarizer. The retarder may be a film type or a liquid crystal coating type and may include a half-wavelength (λ/2) retarder and/or a quarter-wavelength (λ/4) retarder. The polarizer may be a film type or a liquid crystal coating type. The film type may include a stretch-type synthetic resin film, and the liquid crystal coating type may include liquid crystals arranged in a given direction. The retarder and the polarizer may be implemented with one polarization film. As an example of the present disclosure, the anti-reflection layer may also include color filters. The arrangement of the color filters may be determined in consideration of colors of light generated from a plurality of pixels PX (seeFIG.3) included in the display panel DP. Also, the anti-reflection layer may further include a light blocking pattern. The display module DM may display the image IM depending on an electrical signal and may transmit/receive information about an external input. The display module DM may be defined as a pixel area PA and a peripheral area NPA. The pixel area PA may be defined as an area through which the image IM provided from the display area DA is output. Also, the pixel area PA may be defined as an area in which the input sensing layer ISP senses an external input applied from the outside. The peripheral area NPA is adjacent to the pixel area PA. For example, the peripheral area NPA may surround the pixel area PA. However, this is illustrated merely as an example. The peripheral area NPA may be defined in various shapes and is not limited to a specific embodiment. According to an embodiment, the pixel area PA of the display module DM may correspond to at least part of the display area DA. The display module DM may further include a flexible circuit board FCB, a driving controller100(e.g., a control circuit), a data driving circuit DDC, and a power manager300(e.g., a power managing circuit such as a power management integrated circuit). The flexible circuit board FCB is connected to the display panel DP to electrically connect the display panel DP to the main circuit board MCB. The flexible circuit board FCB may include a plurality of driving elements. The plurality of driving elements may include the driving controller100for driving the display panel DP, and the power manager300. 
As an example of the present disclosure, the data driving circuit DDC is disposed on the display panel DP. However, the present disclosure is not limited thereto. In an embodiment, the data driving circuit DDC may be disposed on the flexible circuit board FCB. Moreover, the data driving circuit DDC may include at least one integrated circuit chip. In an embodiment, the driving controller100and the power manager300may be disposed on a main circuit board, and the data driving circuit DDC may be disposed on the flexible circuit board FCB. In this case, the main circuit board may be electrically connected to the display panel DP through the flexible circuit board FCB. In an embodiment, the driving controller100may be arranged on the display panel DP. In an embodiment, the driving controller100and the data driving circuit DDC may be integrated onto a single chip. Although not shown inFIG.2, the display module DM may further include an input sensing circuit for controlling an input sensing layer. FIG.3is a block diagram of the display device shown inFIG.1. Referring toFIG.3, the display device DD includes the display panel DP, the driving controller100, and the power manager300. The driving controller100receives an input image signal RGB and a control signal CTRL. The driving controller100outputs a scan control signal SCS, a data control signal DCS, an output image signal DS, and an emission control signal ECS. The power manager300generates voltages used to operate the display panel DP. In an embodiment, the power manager300generates a first driving voltage ELVDD, a second driving voltage ELVSS, a first initialization voltage VINT1, and a second initialization voltage VINT2. In an embodiment, the power manager300may operate under the control of the driving controller100. In an embodiment, the second driving voltage ELVSS is less than the first driving voltage ELVDD. The display panel DP includes scan lines GIL1to GILn, GCL1to GCLn, and GWL1to GWLn+1, light emitting control lines EML1to EMLn, the data lines DL1to DLm, and the pixels PX. A scan driving circuit SDC, an emission driving circuit EDC, and the data driving circuit DDC may be disposed on the display panel DP. The scan driving circuit SDC may receive the scan control signal SCS from the driving controller100to drive the scan lines GIL1to GILn, GCL1to GCLn, and GWL1to GWLn+1. The scan lines GIL1to GILn, GCL1to GCLn, and GWL1to GWLn+1 may extend from the scan driving circuit SDC in the first direction DR1. The emission driving circuit EDC may receive the emission control signal ECS from the driving controller100to drive the emission control lines EML1to EMLn. The emission control lines EML1to EMLn may extend from the emission driving circuit EDC in a direction opposite to the first direction DR1. The data driving circuit DDC receives the data control signal DCS and the output image signal DS from the driving controller100. The data driving circuit DDC converts the output image signal DS into data signals and then outputs the data signals to a plurality of data lines DL1to DLm to be described later. In an embodiment, the data signals are analog voltages corresponding to a grayscale level of the output image signal DS. For example, the output image signal DS may include a grayscale for each pixel PX of the display panel DP. The scan lines GIL1to GILn, GCL1to GCLn, and GWL1to GWLn+1 and the emission control lines EML1to EMLn are arranged to be spaced from one another in the second direction DR2.
The data lines DL1to DLm may extend from the data driving circuit DDC in a direction opposite to the second direction DR2, and may be arranged to be spaced from one another in the first direction DR1. In an embodiment, the scan driving circuit SDC, the emission driving circuit EDC, and the data driving circuit DDC may be positioned in the non-pixel area NPA of the display panel DP, and may be respectively arranged on a first side, a second side, and a third side of the display panel DP. In an embodiment, the first side may be the non-pixel area NPA adjacent to a left side of the pixel area PA; the second side may be the non-pixel area NPA adjacent to a right side of the pixel area PA; and, the third side may be the non-pixel area NPA adjacent to a lower side of the pixel area PA. However, the present disclosure is not limited thereto. In the example shown inFIG.3, the scan driving circuit SDC and the emission driving circuit EDC are arranged to face each other with the pixels PX interposed therebetween, but the present disclosure is not limited thereto. For example, the scan driving circuit SDC and the emission driving circuit EDC may be positioned adjacent to each other on one of the first side and the second side of the display panel DP. In an embodiment, the scan driving circuit SDC and the emission driving circuit EDC may be implemented with a single circuit. The plurality of pixels PX are electrically connected to the scan lines GIL1to GILn, GCL1to GCLn, and GWL1to GWLn+1, the emission control lines EML1to EMLn, and the data lines DL1to DLm. Each of the plurality of pixels PX may be electrically connected to four scan lines and one emission control line. For example, as shown inFIG.3, a first row of pixels may be connected to the scan lines GIL1, GCL1, GWL1, and GWL2and the emission control line EML1. Furthermore, the j-th row of pixels may be connected to the scan lines GILj, GCLj, GWLj, and GWLj+1 and the emission control line EMLj. The plurality of pixels PX may be positioned in the pixel area PA. Each of the plurality of pixels PX includes a light emitting element ED (seeFIG.4) and a pixel circuit PXC (seeFIG.4) for controlling the light emission of the light emitting element ED. The pixel circuit PXC may include one or more transistors and one or more capacitors. The scan driving circuit SDC and the emission driving circuit EDC may include transistors formed through the same process as the pixel circuit PXC. Each of the plurality of pixels PX receives the first driving voltage ELVDD, the second driving voltage ELVSS, the first initialization voltage VINT1, and the second initialization voltage VINT2from the power manager300. The first driving voltage ELVDD may be higher than the second driving voltage ELVSS. The data driving circuit DDC according to an embodiment receives the first driving voltage ELVDD from the power manager300to generate a first reference voltage and a second reference voltage that are used to drive the data lines DL1to DLm. The specific circuit configuration and operation of the data driving circuit DDC will be described in detail later. FIG.4is a circuit diagram of a pixel, according to an embodiment of the present disclosure.
FIG.4illustrates an equivalent circuit diagram of a pixel PXij connected to the i-th data line DLi among the data lines DL1to DLm, the j-th scan lines GILj, GCLj, and GWLj and the (j+1)-th scan line GWLj+1 among the scan lines GIL1to GILn, GCL1to GCLn, and GWL1to GWLn+1, and the j-th emission control line EMLj among the emission control lines EML1to EMLn, which are illustrated inFIG.3. Each of the plurality of pixels PX shown inFIG.3may have the same circuit configuration as the equivalent circuit diagram of the pixel PXij shown inFIG.4. Referring toFIG.4, the pixel PXij of a display device according to an embodiment includes the pixel circuit PXC and the at least one light emitting element ED. In an embodiment, the light emitting element ED may be a light emitting diode. In an embodiment, it is described that the one pixel PXij includes the one light emitting element ED. The pixel circuit PXC includes first to seventh transistors T1, T2, T3, T4, T5, T6, and T7and a capacitor Cst. In an embodiment, the third and fourth transistors T3and T4among the first to seventh transistors T1to T7are N-type transistors by using an oxide semiconductor as a semiconductor layer. Each of the first, second, fifth, sixth, and seventh transistors T1, T2, T5, T6, and T7is a P-type transistor having a low-temperature polycrystalline silicon (LTPS) semiconductor layer. However, the present disclosure is not limited thereto, and all of the first to seventh transistors T1to T7may be P-type transistors or N-type transistors. In an embodiment, at least one of the first to seventh transistors T1to T7may be an N-type transistor, and the remaining transistors may be P-type transistors. Moreover, the circuit configuration of a pixel according to an embodiment of the present disclosure is not limited toFIG.4. The pixel circuit PXC illustrated inFIG.4is merely an example. For example, the configuration of the pixel circuit PXC may be modified and implemented. The scan lines GILj, GCLj, GWLj, and GWLj+1 may deliver scan signals GIj, GCj, GWj, and GWj+1, respectively. The emission control line EMLj may deliver an emission control signal EMj. The data line DLi delivers a data signal Di. The data signal Di may have a voltage level corresponding to the image signal RGB input to the display device DD (seeFIG.3). First to fourth driving voltage lines VL1, VL2, VL3, and VL4may deliver the first driving voltage ELVDD, the second driving voltage ELVSS, the first initialization voltage VINT1, and the second initialization voltage VINT2, respectively. The first transistor T1includes a first electrode connected to the first driving voltage line VL1via the fifth transistor T5, a second electrode electrically connected to an anode of the light emitting element ED via the sixth transistor T6, and a gate electrode connected to one end of the capacitor Cst. The first transistor T1may receive the data signal Di delivered through the data line DLi depending on the switching operation of the second transistor T2and then may supply a driving current Id to the light emitting element ED. The second transistor T2includes a first electrode connected to the data line DLi, a second electrode connected to the first electrode of the first transistor T1, and a gate electrode connected to the scan line GWLj. The second transistor T2may be turned on in response to the scan signal GWj received through the scan line GWLj and then may deliver the data signal Di delivered from the data line DLi to the first electrode of the first transistor T1. 
For example, the scan signal GWj may be applied to a gate electrode of the second transistor T2. The third transistor T3includes a first electrode connected to the gate electrode of the first transistor T1, a second electrode connected to the second electrode of the first transistor T1, and a gate electrode connected to the scan line GCLj. The third transistor T3may be turned on in response to the scan signal GCj received through the scan line GCLj, and thus, the gate electrode and the second electrode of the first transistor T1may be connected, that is, the first transistor T1may be diode-connected. For example, the scan signal GCj may be applied to a gate electrode of the third transistor T3. The fourth transistor T4includes a first electrode connected to the gate electrode of the first transistor T1, a second electrode connected to the third driving voltage line VL3through which the first initialization voltage VINT1is supplied, and a gate electrode connected to the scan line GILj. The fourth transistor T4may be turned on in response to the scan signal GIj received through the scan line GILj and then may perform an initialization operation of initializing a voltage of the gate electrode of the first transistor T1by supplying the first initialization voltage VINT1to the gate electrode of the first transistor T1. For example, the scan signal GIj may be applied to a gate electrode of the fourth transistor T4. The fifth transistor T5includes a first electrode connected to the first driving voltage line VL1, a second electrode connected to the first electrode of the first transistor T1, and a gate electrode connected to the emission control line EMLj. The sixth transistor T6includes a first electrode connected to the second electrode of the first transistor T1, a second electrode connected to the anode of the light emitting element ED, and a gate electrode connected to the emission control line EMLj. The fifth transistor T5and the sixth transistor T6may be simultaneously turned on in response to the emission control signal EMj received through the emission control line EMLj. In this way, the first driving voltage ELVDD may be compensated through the first transistor T1thus diode-connected and may be supplied to the light emitting element ED. For example, the emission control signal EMj may be applied to gate electrodes of the fifth transistor T5and the sixth transistor T6. The seventh transistor T7includes a first electrode connected to the second electrode of the sixth transistor T6, a second electrode connected to the fourth driving voltage line VL4, and a gate electrode connected to the scan line GWLj+1. The seventh transistor T7is turned on in response to the scan signal GWj+1 received through the scan line GWLj+1 and bypasses a current of the anode of the light emitting element ED to the fourth driving voltage line VL4. For example, the scan signal GWj+1 may be applied to a gate electrode of the seventh transistor T7. As described above, one end of the capacitor Cst is connected to the gate electrode of the first transistor T1, and the other end of the capacitor Cst is connected to the first driving voltage line VL1. The cathode of the light emitting element ED may be connected to the second driving voltage line VL2that delivers the second driving voltage ELVSS. A structure of the pixel PXij according to an embodiment is not limited to the structure shown inFIG.4. 
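The signal-to-transistor mapping of FIG.4 described above can be collected into a compact table. This is a descriptive sketch only; the listing order follows the order in which the operations are introduced in the text and is not asserted as the only possible driving sequence.

```python
# Descriptive summary of the switching behavior of FIG.4, as stated in
# the text: which scan/emission signal turns on which transistor(s),
# and the resulting connection.
PHASES = [
    ("initialization", "GIj",   ("T4",),      "VINT1 -> gate electrode of T1"),
    ("compensation",   "GCj",   ("T3",),      "T1 diode-connected"),
    ("data write",     "GWj",   ("T2",),      "Di -> first electrode of T1"),
    ("anode bypass",   "GWj+1", ("T7",),      "anode current -> VL4 (VINT2)"),
    ("emission",       "EMj",   ("T5", "T6"), "ELVDD -> T1 -> light emitting element ED"),
]

for phase, signal, transistors, effect in PHASES:
    print(f"{phase:14s} {signal:6s} turns on {'/'.join(transistors):6s} : {effect}")
```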
The number of transistors included in the one pixel PXij, the number of capacitors included in the one pixel PXij, and the connection relationship thereof may be variously modified. FIG.5is a circuit diagram of a data driving circuit, according to an embodiment of the present disclosure. Referring toFIG.5, a data driving circuit200includes a reference voltage generator210and an output circuit220. The data driving circuit200may be used to implement the data driving circuit DDC ofFIG.3. The reference voltage generator210receives the first driving voltage ELVDD from the power manager300shown inFIG.3and outputs a first reference voltage AVC_VREG1and a second reference voltage AVC_VREF1. The first reference voltage AVC_VREG1and the second reference voltage AVC_VREF1are generated based on the first driving voltage ELVDD. The reference voltage generator210includes a noise filter NC1(e.g., a filtering circuit), a first voltage generator211, a second voltage generator212, and a third voltage generator213. The noise filter NC1receives the first driving voltage ELVDD and outputs a filtered driving voltage ELVDD-F. In an embodiment, the noise filter NC1outputs the filtered driving voltage ELVDD-F by removing high-frequency components included in the first driving voltage ELVDD. For example, components included in the first driving voltage ELVDD greater than a certain frequency or frequency range may be removed or attenuated by the noise filter NC1. In an embodiment, the noise filter NC1is implemented by a low pass filter. The noise filter NC1may include a resistor R11and a capacitor C11. The resistor R11is connected between an input terminal IN1and a second node N2. The capacitor C11may be connected between the second node N2and a ground terminal. The second node N2may be an output node to which the filtered driving voltage ELVDD-F is output. A cut-off frequency of the noise filter NC1may be determined depending on a resistance value of the resistor R11and the capacitance of the capacitor C11. Accordingly, the resistance value of the resistor R11and the capacitance of the capacitor C11may be set to be suitable for the characteristics of the display device DD. In an embodiment, the resistor R11is a variable resistor whose resistance may be adjusted by a voltage output by the reference voltage generator210to change the cut-off frequency. In an embodiment, the capacitor C11is a variable capacitor whose capacitance may be adjusted by a voltage output by the reference voltage generator210to change the cut-off frequency. The circuit configuration of the noise filter NC1is not limited to the embodiment ofFIG.5and may be variously changed. The first voltage generator211receives voltages V1, V2, V3, VLIN1, VSSA, and VSSA_REF and outputs a first voltage VREG1, a second voltage NELVDD, and a third voltage VREF1. The first voltage generator211may include operational amplifiers AP1, AP2, and AP3. The operational amplifiers AP1, AP2, and AP3may output the first voltage VREG1, the second voltage NELVDD, and the third voltage VREF1, respectively. In an embodiment, the first voltage VREG1, the second voltage NELVDD, and the third voltage VREF1have different voltage levels from one another. In an embodiment, the first voltage VREG1, the second voltage NELVDD, and the third voltage VREF1have a relationship of "VREG1>NELVDD>VREF1". In an embodiment, the second voltage NELVDD output from the operational amplifier AP2has the same voltage level as the first driving voltage ELVDD output from the power manager300illustrated inFIG.3.
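For the noise filter NC1 just described, the text states that the cut-off frequency is set by the resistance of R11 and the capacitance of C11. A minimal sketch follows, assuming the standard first-order relation for this series-R, shunt-C topology; the component values are illustrative assumptions.

```python
# First-order RC low-pass cut-off for the noise filter NC1. The
# 1/(2*pi*R*C) relation is the standard formula for a series resistor
# and shunt capacitor; the component values below are assumptions.
import math

def cutoff_hz(r_ohm: float, c_farad: float) -> float:
    return 1.0 / (2.0 * math.pi * r_ohm * c_farad)

# Example: 10 kOhm and 100 nF pass the DC level of ELVDD while
# attenuating ripple well above roughly 159 Hz.
print(f"{cutoff_hz(10e3, 100e-9):.1f} Hz")
```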
The first voltage VREG1, the second voltage NELVDD, and the third voltage VREF1may be output to a third output terminal OUT3, a first node N1, and a fourth output terminal OUT4, respectively. The circuit configuration of the first voltage generator211is not limited to the embodiment ofFIG.5and may be variously changed. The second voltage generator212receives the first voltage VREG1, the second voltage NELVDD, and the filtered driving voltage ELVDD-F and outputs the first reference voltage AVC_VREG1. The first reference voltage AVC_VREG1may be output to the first output terminal OUT1. The second voltage generator212includes resistors R1, R2, R3, and R4and an operational amplifier AP4. The resistor R1is connected between the third output terminal OUT3and a first input terminal (+) of the operational amplifier AP4. The resistor R2is connected between the first input terminal (+) of the operational amplifier AP4and the second node N2. The resistor R3is connected between the first node N1and a second input terminal (−) of the operational amplifier AP4. The resistor R4is connected between the second input terminal (−) of the operational amplifier AP4and the first output terminal OUT1. The first reference voltage AVC_VREG1output from the second voltage generator212may be calculated by Equation 1 below. AVC_VREG1=(ELVDD-F−NELVDD)+VREG1 [Equation 1] The circuit configuration of the second voltage generator212is not limited to the embodiment ofFIG.5and may be variously changed. The third voltage generator213receives the second voltage NELVDD, the third voltage VREF1, and the filtered driving voltage ELVDD-F and outputs the second reference voltage AVC_VREF1. The second reference voltage AVC_VREF1may be output to a second output terminal OUT2. The third voltage generator213includes resistors R5, R6, R7, and R8and an operational amplifier AP5. The resistor R5is connected between the fourth output terminal OUT4and a first input terminal (+) of the operational amplifier AP5. The resistor R6is connected between the first input terminal (+) of the operational amplifier AP5and the second node N2. The resistor R7is connected between the first node N1and a second input terminal (−) of the operational amplifier AP5. The resistor R8is connected between the second input terminal (−) of the operational amplifier AP5and the second output terminal OUT2. The second reference voltage AVC_VREF1output from the third voltage generator213may be calculated by Equation 2 below. AVC_VREF1=(ELVDD-F−NELVDD)+VREF1 [Equation 2] The circuit configuration of the third voltage generator213is not limited to the embodiment ofFIG.5and may be variously changed. The output circuit220outputs the data signal Di having a voltage level corresponding to the output image signal DS based on the first reference voltage AVC_VREG1and the second reference voltage AVC_VREF1. The output circuit220includes a resistor string221, a digital-to-analog converter222, and a buffer223. The resistor string221may include a plurality of resistors connected between the first output terminal OUT1and the second output terminal OUT2. For example, the resistors of the resistor string221may be connected in series with one another. The resistor string221may output voltages of connection nodes between a plurality of resistors as gamma reference voltages. For example, a node between each pair of the resistors may output a different one of the gamma reference voltages and the gamma reference voltages may have levels that are different from one another. 
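Equation 1 and Equation 2 above can be transcribed directly. The numeric values in the example are assumptions; the point is that both reference voltages track the filtered driving voltage through the common term (ELVDD-F − NELVDD), so their difference, which spans the resistor string, is preserved.

```python
# Direct transcription of Equation 1 and Equation 2; the voltage values
# used in the example call are illustrative assumptions.

def avc_vreg1(elvdd_f: float, nelvdd: float, vreg1: float) -> float:
    return (elvdd_f - nelvdd) + vreg1  # Equation 1

def avc_vref1(elvdd_f: float, nelvdd: float, vref1: float) -> float:
    return (elvdd_f - nelvdd) + vref1  # Equation 2

# If the voltage actually reaching the panel droops by 0.1 V relative
# to NELVDD, both references shift by the same 0.1 V.
print(round(avc_vreg1(4.5, 4.6, 5.0), 3), round(avc_vref1(4.5, 4.6, 3.0), 3))  # 4.9 2.9
```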
The digital-to-analog converter222receives the output image signal DS from the driving controller100shown inFIG.3. The digital-to-analog converter222selects, from among the plurality of gamma reference voltages of the resistor string221, the voltage corresponding to the output image signal DS for the i-th data line DLi and outputs it as the data signal Di. The buffer223outputs the data signal Di from the digital-to-analog converter222to the i-th data line DLi. FIG.5illustrates that only the output circuit220outputs the data signal Di to the i-th data line DLi. However, the output circuit220may drive all of the data lines DL1to DLm illustrated inFIG.3in the same manner as the i-th data line DLi. As mentioned above, a voltage level of the data signal Di output from the output circuit220may correspond to the output image signal DS. However, the voltage level of the data signal Di may be changed depending on voltage levels of the first reference voltage AVC_VREG1and the second reference voltage AVC_VREF1. As can be observed from Equation 1 and Equation 2, the second voltage generator212and the third voltage generator213may output the first reference voltage AVC_VREG1and the second reference voltage AVC_VREF1based on the filtered driving voltage ELVDD-F, respectively. In an embodiment, the second voltage NELVDD has the same voltage level as the first driving voltage ELVDD output from the power manager300illustrated inFIG.3. However, the first driving voltage ELVDD provided to the display panel DP and the data driving circuit DDC may be changed to a voltage level different from the first driving voltage ELVDD output from the power manager300by a voltage drop in the display panel DP, a contact resistance between the flexible circuit board FCB and the display panel DP, and the like. In this case, a difference occurs between the second voltage NELVDD and the first driving voltage ELVDD, which is actually provided to the display panel DP and the data driving circuit DDC. The second voltage generator212and the third voltage generator213receive the filtered driving voltage ELVDD-F and respectively output the first reference voltage AVC_VREG1and the second reference voltage AVC_VREF1based on the filtered driving voltage ELVDD-F. That is, the second voltage generator212and the third voltage generator213may generate the first reference voltage AVC_VREG1and the second reference voltage AVC_VREF1, by reflecting a voltage level of the filtered driving voltage ELVDD-F obtained by removing noise from the first driving voltage ELVDD substantially provided to the display panel DP. Thus, the display quality of an image displayed on the display panel DP may be prevented from deteriorating. FIGS.6A and6Bare diagrams illustrating a change in a voltage level of the first reference voltage AVC_VREG1according to a voltage level of a first driving voltage. Referring toFIGS.3,5and6A, the first driving voltage ELVDD generated by the power manager300may be provided to the data driving circuit DDC and the display panel DP. As shown inFIG.4, the first driving voltage ELVDD provided to the display panel DP may be provided to the pixel PXij through the first driving voltage line VL1. The first driving voltage line VL1extends in the first direction DR1and/or the second direction DR2, and high-frequency noise may be reduced by wiring resistance and capacitance components.
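Returning to the resistor string221 and the digital-to-analog converter222 described above, the code-to-voltage selection can be sketched as follows. A uniform string with 256 taps is an assumption for illustration; gamma strings in practice are typically non-uniform.

```python
# Sketch of the resistor string and code-to-voltage selection of the
# digital-to-analog conversion. Uniform taps are an assumption.

def gamma_taps(v_top: float, v_bottom: float, n_taps: int) -> list:
    """Voltages at the connection nodes of a uniform resistor string."""
    step = (v_top - v_bottom) / (n_taps - 1)
    return [v_bottom + i * step for i in range(n_taps)]

def dac_output(taps: list, code: int) -> float:
    """Select the gamma reference voltage addressed by the image code."""
    return taps[code]

# AVC_VREG1 and AVC_VREF1 bound the string (values assumed).
taps = gamma_taps(v_top=4.9, v_bottom=2.9, n_taps=256)
print(round(dac_output(taps, 128), 4))  # data signal Di for a mid-gray code
```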
However, the first driving voltage ELVDD provided directly from the power manager300to the input terminal IN1of the data driving circuit DDC may include a noise component (e.g., ripple). When the reference voltage generator210does not include the noise filter NC1, the first driving voltage ELVDD may be provided directly to the second voltage generator212. In this case, the noise component included in the first driving voltage ELVDD may be delivered to the first input terminal (+) of the operational amplifier AP4in the second voltage generator212. Accordingly, the first reference voltage AVC_VREG1output from the operational amplifier AP4may include a noise component similar to that of the first driving voltage ELVDD. When the reference voltage generator210does not include the noise filter NC1, the first driving voltage ELVDD may be provided directly to the third voltage generator213. In this case, although not shown inFIG.6A, the second reference voltage AVC_VREF1may also include a noise component similar to that of the first driving voltage ELVDD. Because the output circuit220outputs the data signal Di based on the first reference voltage AVC_VREG1and the second reference voltage AVC_VREF1, the voltage level of the data signal Di may be changed when the first reference voltage AVC_VREG1and the second reference voltage AVC_VREF1include noise components. When the voltage level of the data signal Di is changed even though the same output image signal DS is input to the output circuit220, the luminance of the image displayed on the display panel DP may be changed. The reference voltage generator210illustrated inFIG.5includes the noise filter NC1. The noise filter NC1may output the filtered driving voltage ELVDD-F obtained by removing high-frequency components included in the first driving voltage ELVDD. Because the filtered driving voltage ELVDD-F is provided to the first input terminal (+) of the operational amplifier AP4, the first reference voltage AVC_VREG1output from the operational amplifier AP4is not affected by the noise component of the first driving voltage ELVDD. Likewise, because the filtered driving voltage ELVDD-F is provided to the first input terminal (+) of the operational amplifier AP5, the second reference voltage AVC_VREF1output from the operational amplifier AP5is not affected by the noise component of the first driving voltage ELVDD. Accordingly, as shown inFIG.6B, even though the first driving voltage ELVDD includes noise components, the luminance of an image displayed on the display panel DP may be maintained at a stable level. FIG.7is a circuit diagram of a data driving circuit, according to an embodiment of the present disclosure. A data driving circuit200-1shown inFIG.7has a configuration similar to the data driving circuit200shown inFIG.5other than a noise filter NC2. Accordingly, the same reference numerals are used for the same circuit configurations, and additional descriptions are omitted to avoid redundancy. The data driving circuit200-1may be used to implement the data driving circuit DDC ofFIG.3. The data driving circuit200-1illustrated inFIG.7includes a reference voltage generator210-1and an output circuit220. The reference voltage generator210-1includes the noise filter NC2, the first voltage generator211, the second voltage generator212, and the third voltage generator213. The noise filter NC2receives the first driving voltage ELVDD and outputs the filtered driving voltage ELVDD-F.
In an embodiment, the noise filter NC2outputs the filtered driving voltage ELVDD-F by removing high-frequency components included in the first driving voltage ELVDD. For example, components included in the first driving voltage ELVDD greater than a certain frequency or frequency range may be removed or attenuated by the noise filter NC2. In an embodiment, the noise filter NC2is implemented by a low pass filter. The noise filter NC2may include inverters IV1and IV2(e.g., inverter circuits). The inverters IV1and IV2are connected in series between the second node N2and the input terminal IN1. That is, the inverter IV2receives the first driving voltage ELVDD from the input terminal IN1. The output of the inverter IV2is provided as an input to the inverter IV1. The inverter IV1receives an output of the inverter IV2and outputs the filtered driving voltage ELVDD-F. The inverters IV1and IV2may receive the second voltage NELVDD and the voltage VSSA. In an embodiment, each of the inverters IV1and IV2may be an operational amplifier (or an inverting amplifier). For example, the second voltage NELVDD and the voltage VSSA may be applied to power supply terminals of the inverters IV1and IV2. The inverters IV1and IV2may output the filtered driving voltage ELVDD-F obtained by removing high-frequency components included in the first driving voltage ELVDD. The circuit configuration of the noise filter NC2is not limited to the embodiment ofFIG.7and may be variously changed. FIG.8is a circuit diagram of a data driving circuit, according to an embodiment of the present disclosure. A data driving circuit200-2shown inFIG.8has a configuration similar to the data driving circuit200shown inFIG.5other than a noise filter NC3. Accordingly, the same reference numerals are used for the same circuit configurations, and additional descriptions are omitted to avoid redundancy. The data driving circuit200-2may be used to implement the data driving circuit DDC ofFIG.3. The data driving circuit200-2illustrated inFIG.8includes a reference voltage generator210-2and the output circuit220. The reference voltage generator210-2includes the noise filter NC3, the first voltage generator211, the second voltage generator212, and the third voltage generator213. The noise filter NC3receives the first driving voltage ELVDD and outputs the filtered driving voltage ELVDD-F. In an embodiment, the noise filter NC3outputs the filtered driving voltage ELVDD-F by removing high-frequency components included in the first driving voltage ELVDD. For example, components included in the first driving voltage ELVDD greater than a certain frequency or frequency range may be removed or attenuated by the noise filter NC3. The noise filter NC3may include a first switching circuit SWC1, resistors R21, R22, R23, and R24, a second switching circuit SWC2, and capacitors C21, C22, C23, and C24. In an embodiment, the first switching circuit SWC1selects at least one of the resistors R21, R22, R23, and R24in response to the first switching signal SW1so as to be connected between the input terminal IN1and the second node N2. In an embodiment, the resistors R21, R22, R23, and R24have different resistance values from one another. In an embodiment, the resistors R21, R22, R23, and R24have the same resistance as one another. The second switching circuit SWC2selects at least one of the capacitors C21, C22, C23, and C24in response to the second switching signal SW2so as to be connected between the second node N2and a ground terminal.
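The selection performed by the switching circuits SWC1 and SWC2 can be sketched as a search over the resistor and capacitor banks. The component values and the single-resistor/single-capacitor policy are assumptions; the first-order cut-off relation used here anticipates the inverse proportionality to R×C stated just below.

```python
# Sketch of choosing SW1/SW2 settings for NC3: pick the R/C pair whose
# first-order cut-off is closest to a target. Bank values and the
# one-resistor/one-capacitor policy are assumptions.
import math

R_BANK = [5e3, 10e3, 20e3, 40e3]          # R21..R24 (assumed ohms)
C_BANK = [47e-9, 100e-9, 220e-9, 470e-9]  # C21..C24 (assumed farads)

def cutoff_hz(r: float, c: float) -> float:
    return 1.0 / (2.0 * math.pi * r * c)  # inversely proportional to R*C

def pick_rc(target_hz: float):
    return min(((r, c) for r in R_BANK for c in C_BANK),
               key=lambda rc: abs(cutoff_hz(*rc) - target_hz))

r, c = pick_rc(100.0)
print(f"R={r:.0f} ohm, C={c * 1e9:.0f} nF -> {cutoff_hz(r, c):.1f} Hz")
```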
In an embodiment, the capacitors C21, C22, C23, and C24have different capacitances from one another. In an embodiment, the capacitors C21, C22, C23, and C24have the same capacitance as one another. A cut-off frequency of the noise filter NC3may be determined depending on a resistance value of the resistor(s) connected between the input terminal IN1and the second node N2and capacitance of the capacitor(s) connected between the second node N2and the ground terminal. In an embodiment, the cut-off frequency is inversely proportional to a product of the resistance value (referred to as "R") and the capacitance (referred to as "C"), that is, "R×C". Accordingly, the resistance values of the resistors R21, R22, R23, and R24and the capacitances of the capacitors C21, C22, C23, and C24are set to be suitable for the characteristics of the display device DD. In an embodiment, at least one of the resistors R21, R22, R23, and R24is connected between the input terminal IN1and the second node N2via the first switching circuit SWC1; and, at least one of the capacitors C21, C22, C23, and C24is connected between the second node N2and the ground terminal via the second switching circuit SWC2. WhileFIG.8shows a single signal SW1being applied to the first switching circuit SWC1, in an embodiment this single signal may be replaced with a distinct switching signal for each of the internal switches of the first switching circuit SWC1so that one or more of the internal switches may be opened while one or more remaining switches may be closed. WhileFIG.8shows a single signal SW2being applied to the second switching circuit SWC2, in an embodiment this single signal may be replaced with a distinct switching signal for each of the internal switches of the second switching circuit SWC2so that one or more of the internal switches may be opened while one or more remaining switches may be closed. Thus, the cutoff frequency of the noise filter NC3may be adjusted by differently closing and opening internal switches of the switching circuits SWC1and SWC2. FIG.9is a circuit diagram of a data driving circuit, according to an embodiment of the present disclosure. A data driving circuit200-3shown inFIG.9has a configuration similar to the data driving circuit200shown inFIG.5other than a noise filter NC4. Accordingly, the same reference numerals are used for the same circuit configurations, and additional descriptions are omitted to avoid redundancy. The data driving circuit200-3may be used to implement the data driving circuit DDC ofFIG.3. The data driving circuit200-3illustrated inFIG.9includes a reference voltage generator210-3and the output circuit220. The reference voltage generator210-3includes the noise filter NC4, the first voltage generator211, the second voltage generator212, and the third voltage generator213. The noise filter NC4receives the first driving voltage ELVDD and outputs the filtered driving voltage ELVDD-F. In an embodiment, the noise filter NC4outputs the filtered driving voltage ELVDD-F by removing high-frequency components included in the first driving voltage ELVDD. For example, components included in the first driving voltage ELVDD greater than a certain frequency or frequency range may be removed or attenuated by the noise filter NC4. The noise filter NC4may include inverter strings IV11, IV12, IV13, and IV14and a third switching circuit SWC3. In an embodiment, the inverter strings IV11, IV12, IV13, and IV14include different numbers of inverters.
For example, the inverter strings IV11, IV12, IV13, and IV14may include 2, 4, 6, and 8 inverters, respectively. Inverters included in each of the inverter strings IV11, IV12, IV13, and IV14may be sequentially connected in series between the second node N2and the third switching circuit SWC3. Each of the inverters in the inverter strings IV11, IV12, IV13, and IV14may receive the second voltage NELVDD and the voltage VSSA in the same manner as the inverters IV1and IV2shown inFIG.7. In an embodiment, the third switching circuit SWC3selects one of the inverter strings IV11, IV12, IV13, and IV14in response to the third switching signal SW3so as to be connected between the input terminal IN1and the second node N2. As the number of inverters included in each of the inverter strings IV11, IV12, IV13, and IV14increases, a resistance value and capacitance may increase and thus the cut-off frequency may be lowered. For example, a cut-off frequency of the inverter string IV12is lower than that of the inverter string IV11. The display device DD may output the third switching signal SW3such that one of the inverter strings IV11, IV12, IV13, and IV14is connected between the input terminal IN1and the second node N2depending on the required cut-off frequency. The circuit configuration of the noise filter NC4is not limited to the embodiment ofFIG.9and may be variously changed. At least one embodiment of the disclosure provides a display device configured to generate a reference voltage in conjunction with a driving voltage that is suitable for a display panel. In particular, after noise components included in the driving voltage provided to the display panel from a power manager are filtered out to generate a filtered driving voltage, the reference voltage is generated from the filtered driving voltage, and thus display quality may be prevented from deteriorating. In at least one embodiment of the disclosure, a data driving circuit of a display device filters noise from a driving voltage before using it to generate a data signal that is applied to a pixel of the display device. The data driving circuit generates a first reference voltage based on a sum of a first voltage and a difference between the filtered driving voltage and a second other voltage, generates a second reference voltage based on a sum of a third other voltage and the difference, and generates the data signal based on image data and the two reference voltages. While the present disclosure has been described with reference to embodiments thereof, it will be apparent to those of ordinary skill in the art that various changes and modifications may be made thereto without departing from the spirit and scope of the present disclosure as set forth in the following claims. | 48,148 |
11862075 | DETAILED DESCRIPTION OF EMBODIMENTS The technical solutions of the present disclosure are further described below with reference to specific embodiments, and it should be noted that the scope of protection of the present disclosure is not limited to the following description. The present disclosure provides a drive circuit, which, as shown inFIG.1, includes: a first module, generating display data based on image information; a second module, generating a display signal based on the display data and a plurality of clock signals; and a third module, outputting a constant current based on the display signal. In the above, any two adjacent clock signals of the plurality of clock signals differ by M complete clock cycles, 0≤M<1. Image data information transmitted from outside can be stored in an SRAM (Static Random-Access Memory); and relevant processing can be performed on the image information by the first module to generate the display data and store the same. The processing may include adding one clock cycle to the actual display data, that is, the first module is further configured to add one clock cycle to the display data and then output the same as final display data; of course, this operation is not necessary, and whether to execute it is selected according to the PWM generation circuit or the specific PWM generation method used. An initial display signal with a corresponding width, also referred to as an initial PWM signal, is generated according to the size of display data, which can be realized by means of counting. Optionally, a counting module can be constituted by a flip-flop, and the counting module counts the display data based on a GCLK clock signal, and generates the initial display signal with a width of integer clock cycles (integer number of clock cycles). The second module performs subsequent processing based on the initial display signal. It can be understood that generating the initial display signal by means of counting also may be performed outside the second module, and in this case, the second module receives the above initial display signal and performs subsequent processing. That is to say, the counting module can be included in the second module, included in the first module, or even be independent of the first module and the second module and located between the two, in which case it receives the display data of the first module, converts the same into the initial display signal, and then inputs the same into the second module. A previous clock signal and a next clock signal have a fixed phase difference therebetween, i.e., M*T, T being the clock cycle; it can be understood that the next clock signal is delayed by M clock cycles compared with the previous clock signal; a second clock signal is delayed by M clock cycles compared with a first clock signal; and a third clock signal is delayed by 2*M clock cycles compared with the first clock signal, and so on. The second module is in fact a pulse width signal generating device (PWM generating device), which may consist of a flip-flop and/or a logical circuit. It can be understood that the magnitude of M determines the accuracy of the display signal or the PWM signal. M is the minimum scale (relative to the clock cycle T) at which the display signal can be represented, that is, the display signal can be accurate to M*T. It can be understood that the width of the above initial display signal is integer clock cycles.
That is to say, it represents the integral part of actual display data, which is also the mainstream technology in the prior art, i.e., the drive chip processes the display data of integer clock cycles, and performs control according to the display signal of integer clock cycles. The disadvantage thereof lies in that the display accuracy is impaired. For example, the actual display data is 3.2*T, but only 3*T is actually displayed. However, in the present disclosure, not only an integral part of the display data is acquired, but also a fractional part of the actual display data is acquired by a delayed clock signal. For example, if the display data is 3.2*T, 3*T (initial display signal) can be obtained by counting, and the fractional part 0.2*T is also obtained by using a delayed clock signal (M=0.2), so that the display signal finally obtained is the actual 3.2*T. In the above, T is the clock cycle. It can be seen that when M=0.2, the accuracy of the display signal which can be generated in the present disclosure is 0.2*T. In this manner, relatively high display accuracy can be obtained at the minimum expense without increasing the sampling frequency or increasing the chip cost and power consumption. The third module is connected to the second module and outputs a constant current. The third module can output a constant current during the valid period of the display signal, for example, a high level period of the PWM signal. Specifically, the third module receives a reference current and the display signal output by the second module, and can output a constant current to a column line during the valid period of the display signal, for example, the high level period of the PWM signal, and drive LED lamp beads on the column line where the third module is located to light up. The number of third modules is equal to the number of channels, and the number of channels is generally an integer greater than or equal to 4, preferably 8 channels or 16 channels. In other words, the third module actually is a module that can output a constant current according to the PWM signal. There are various methods for generating the reference current, and the method for generating the reference current is described below with reference toFIG.10. In the present disclosure, a fourth module is used to provide a reference current for the third module. Specifically, the fourth module includes: a reference voltage generation module, a bias module, a current generation module, and a pre-charging module, wherein the reference voltage generation module is configured to provide a reference voltage to the bias module, the bias module is configured to provide a bias current to the current generation module and provide a bias voltage to the pre-charging module, and the current generation module is configured to provide a reference current to the third module. Bandgap can be selected as the reference voltage generation module. Bandgap (Bandgap voltage reference), i.e., a bandgap reference, can realize a voltage reference independent of temperature by using a sum of a voltage having a positive temperature coefficient and a voltage having a negative temperature coefficient, where the temperature coefficients of the two cancel each other.
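The 3.2*T example above can be captured in a short sketch: the counting path yields the integer number of clock cycles, and the delayed-clock path contributes the fractional part quantized to M*T. The function and variable names are illustrative assumptions.

```python
# Sketch of the integer/fractional split behind the 3.2*T example.
# Names are illustrative assumptions, not taken from the disclosure.

def split_display_data(data_cycles: float, m: float):
    """Return (integer cycles, fractional cycles quantized to M)."""
    whole = int(data_cycles)                     # counting path: 3 for 3.2*T
    frac = round((data_cycles - whole) / m) * m  # delayed-clock path: 0.2*T
    return whole, frac

whole, frac = split_display_data(3.2, m=0.2)
print(whole, round(frac, 3))  # 3 0.2 -> display signal width 3.2*T
```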
The pre-charging module outputs a pre-charging voltage during a non-display period; the pre-charging voltage charges the column line to a predetermined potential, so as to mitigate poor display effects such as a relatively dark first line, coupling between high and low grayscales, cross-board color difference, and a lower ghost image. The pre-charging module itself is existing technology and is not described in detail in the present disclosure. Of course, the fourth module may also be provided without a pre-charging module; however, in the prior art, owing to factors such as parasitic capacitance, LED displays often suffer from the poor display effects listed above, and therefore substantially all LED display drive circuits are provided with one. In order to improve the voltage accuracy of this module, a voltage trimming module is usually further provided between the bias module and the pre-charging module; it receives the bias voltage provided by the bias module and performs voltage trimming by means of a register, obtaining a more accurate trimming voltage that is input into the pre-charging module. Optionally, a current trimming module may be provided between the bias module and the current generation module, likewise performing trimming by means of a register: it receives the bias current and provides a high-accuracy trimming current to the current generation module, which in turn provides an accurate reference current to the channel current output module. A current with higher accuracy can thus be obtained through the trimming module, which can be built by combining current mirrors. Optionally, the current generation module can further be connected to an external resistor, i.e., a resistor outside the chip; this resistor is adjustable and can be used to adjust the current of the branch where it is located. In some embodiments, as shown in FIG. 2, the plurality of clock signals are N clock signals, which may be generated by a fifth module, such as CLK[0]~CLK[N−1]. N is an integer greater than or equal to 2, and preferably greater than or equal to 4. In this case, M=1/N; that is, adjacent clock signals differ by 1/N clock cycles, or equivalently their phase difference is 1/N of a clock cycle, each clock signal being delayed by 1/N clock cycles relative to the previous one. (For ease of understanding, referring to FIG. 9: starting from the time point of a given rising edge of the first clock signal, the first rising edges of the subsequent clock signals are delayed by 1/N, 2/N, . . . , (N−1)/N clock cycles in sequence relative to that rising edge.) Optionally, the frequencies of the N clock signals are the same as the frequency of the above GCLK.
The fifth module may be one of a delay-locked loop (DLL), a phase interpolator, and a phase-locked loop (PLL). In some embodiments, as shown in FIG. 3, the second module includes: a first sub-module, generating a first display signal based on a first clock signal and the display data; and a second sub-module, generating the display signal based on a selected clock signal and the first display signal. The first clock signal is any one of the N clock signals, and the selected clock signal is delayed by i/N clock cycles relative to the first clock signal, where i is an integer between 0 and (N−1). Assume the N clock signals are CLK[0]-CLK[N−1]. The first clock signal may be any one of them, for example CLK[0] or CLK[3]. If CLK[0] is selected as the first clock signal, CLK[1] is delayed by 1/N clock cycles relative to CLK[0], CLK[2] is delayed by 1/N clock cycles relative to CLK[1] and by 2/N clock cycles relative to CLK[0], and so on. If CLK[3] is selected as the first clock signal, CLK[4] is delayed by 1/N clock cycles relative to CLK[3], CLK[0] is delayed by 1/N clock cycles relative to CLK[N−1], CLK[1] relative to CLK[0], and CLK[2] relative to CLK[1]. In this case, the accuracy of the display data can reach T/N (i.e., 1/N with T as the unit). The present disclosure generates the fractional part of the display data, or data related to the fractional part, based on the phase difference (delay) of a selected clock signal CLK[i] relative to the first clock signal, where i is an integer between 0 and (N−1). Taking CLK[0] as the first clock signal as an example, the fractional part is i*T/N, (N−i)*T/N, or another value; the specific value also depends on the choice of logical module and the subsequent processing. In some embodiments, the first sub-module is configured to generate a display signal with a width of an integer number of clock cycles, representing the integral part of the display data. The first sub-module may receive the above initial display signal and generate the first display signal based on the initial display signal and the first clock signal; the initial display signal, as noted, represents the integral part of the display data. In this case, the display data received by the first sub-module in FIG. 3 is the initial display signal. The first sub-module may optionally be a flip-flop, for example a D flip-flop or an RS flip-flop. For a D flip-flop, the initial display signal may be connected to the D terminal, the first clock signal to the CLK terminal, and the Q terminal used as output. The first display signal thus also has a width corresponding to the integral part of the display data, i.e., an integer number of clock cycles, with its rising edge aligned with an edge (rising or falling) of the first clock signal and its width equal to that of the initial display signal. In some embodiments, the first sub-module may also include the process of generating, from the display data, an initial display signal with a width of an integer number of clock cycles. As described above, this function can be realized by counting, and the counting module can be built from flip-flops.
The counting module counts the display data based on the GCLK clock signal and generates the initial display signal with a width of an integer number of clock cycles; a D flip-flop or an RS flip-flop may be selected as the flip-flop constituting the counting module. The first sub-module then obtains the first display signal from this initial display signal and the first clock signal, as described above. The second sub-module receives the selected clock signal and the first display signal, and performs the corresponding processing and operations on the first display signal so as to output the final display signal, which may contain a fractional part. In some embodiments, the second sub-module includes an intermediate module, which receives the selected clock signal and the first display signal and generates an intermediate display signal. The intermediate display signal has the same width as the first display signal and is delayed relative to it by an amount determined by the delay of the selected clock signal relative to the first clock signal, such as i*T/N, where i is an integer between 0 and (N−1). The intermediate module may be a flip-flop, for example a D flip-flop or an RS flip-flop; for a D flip-flop, the first display signal may be connected to the D terminal, the selected clock signal to the CLK terminal, and the Q terminal used as output. In the present disclosure, the second sub-module further includes a logical module, which can perform a logical operation, such as a logical OR, on the intermediate display signal and the first display signal to obtain the final display signal. In this case, as described above, one clock cycle may optionally have been added to the display data to handle the case where the display data is purely fractional (integral part equal to 0); the width of one clock cycle must then be deducted from the display signal after the OR (de-widening). For example, a flip-flop may generate a display signal with a length of one clock cycle aligned with a rising edge of the first display signal or of the ORed display signal; this one-cycle signal is inverted and ANDed with the ORed display signal, or alternatively XORed with the ORed display signal. Other logic circuits may also be used for the implementation. Since the purely fractional case hardly arises in practice, or the display data may be pre-processed so that it does not arise, an OR operation may also be performed directly on the first display signal and the intermediate display signal to obtain the final display signal, without considering the added clock cycle. Optionally, a logical AND may instead be performed on the intermediate display signal and the first display signal; in that case, to obtain the final display signal, the width of one clock cycle may be added to the signal obtained after the AND (that is, the ANDed signal is widened by one clock cycle), as illustrated by the sketch below.
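The widths produced by the OR and AND combinations just described follow directly from the overlap of two ideal pulses; a minimal behavioral sketch in Python, assuming ideal edges and a delay no larger than the pulse width:

```python
# Behavioral sketch: combining two ideal pulses of equal width w, the second
# delayed by d (both in units of T), models the first display signal and the
# intermediate display signal.

def or_width(w: float, d: float) -> float:
    return w + d  # union of [0, w] and [d, d + w], valid for 0 <= d <= w

def and_width(w: float, d: float) -> float:
    return w - d  # intersection of [0, w] and [d, d + w], valid for 0 <= d <= w

# OR path: a 3*T first display signal and an intermediate copy delayed by
# 0.2*T yield 3.2*T directly, which is why the OR variant can skip the added
# clock cycle when the data has a non-zero integral part.
print(or_width(3.0, 0.2))   # 3.2
# AND path: the overlap shrinks instead, so a width must be added afterwards.
print(and_width(3.0, 0.2))  # 2.8
```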
A PWM signal with a width of one clock cycle may be generated by a flip-flop or another device, for example at the falling edge of the ANDed signal or of the first display signal, and an OR operation is then performed on this PWM signal and the ANDed signal. Taking CLK[0] as the first clock signal as an example, if the display data is 3.3*T and CLK[i] is taken as the selected clock signal, the intermediate display signal is delayed by i*T/N relative to the first display signal. The initial display signal is 3*T, and the signal obtained after the AND is therefore 3*T−i*T/N. Selecting i/N as 7/10 (that is, CLK[7] with N=10) makes the ANDed signal 2.3*T, and the final display signal of 3.3*T is obtained by adding the width of one clock cycle to it. Alternatively, instead of adding one clock cycle at the end, the width of one clock cycle may be added when the initial display signal or the first display signal is generated, by the same method: a signal with the width of one clock cycle is generated at the falling edge of the first display signal (or of the initial display signal) and ORed with it. Yet another way to obtain the final display signal is to add one clock cycle directly to the display data input into the second module before the subsequent processing: for example, initial display data of 3.2*T becomes 4.2*T after one clock cycle is added. Assuming N=10, with CLK[0] and CLK[8] respectively selected as the first clock and the selected clock, the width of the display signal after the AND operation is exactly 3.2*T. Note that CLK[8] is selected here rather than CLK[2]; thus, when the logical module is an AND, attention must be paid to the selection of the clock signal. The operation of adding T can also be performed inside the second module. Adding one clock cycle has two advantages: it accommodates the case where the display data is purely fractional, i.e., the integral part is 0, such as 0.3*T (although this case hardly occurs); and it allows the final display signal to be obtained directly. In some other embodiments, in addition to the above modules, the second sub-module further includes an inverting module: the output terminal of the intermediate module is connected to the input terminal of the inverting module, the inverting module inverts the intermediate display signal, and the inverted signal and the first display signal are input into the logical module for a logical operation. The logical operation may be a logical AND, a logical OR, an XOR, an XNOR, or other logic gates and combinations thereof.
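Before turning to the inverting variant, the tap selection for the plain AND path just described can be summarized numerically; again a behavioral Python sketch with hypothetical names, assuming a non-zero fractional part:

```python
# Behavioral sketch of the AND variant with one clock cycle pre-added,
# reproducing the 3.2*T example above. Integer-valued data would be handled
# by the plain counter path and is not covered here.

def and_method(data_cycles: float, n_taps: int):
    """Return (counter_width, tap_index, final_width), widths in units of T."""
    whole = int(data_cycles)
    frac_steps = round((data_cycles - whole) * n_taps)  # fraction in 1/N steps
    counter_width = whole + 1                  # one clock cycle pre-added
    tap = (n_taps - frac_steps) % n_taps       # CLK[tap] is delayed by tap/N cycles
    final_width = counter_width - tap / n_taps # width remaining after the AND
    return counter_width, tap, final_width

print(and_method(3.2, 10))  # (4, 8, 3.2): CLK[8] is selected, not CLK[2]
```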
When a logical AND (with the inverted intermediate signal) or an XNOR is selected, the output signal is the fractional part, for example 0.2*T; this technical solution of the present disclosure can therefore directly generate fractional data, which is advantageous in circumstances where fractional display data is needed. If complete display data is required instead, a certain width still needs to be added to the signal after the AND, and this width can be selected as the width of the integral part represented by the first display signal: for example, a display signal with the width of the integral part may be generated by a flip-flop at the falling edge of the ANDed signal, and an OR operation performed on this display signal and the above fractional-part display signal. Note that it is unnecessary to add one clock cycle to the initial display data here, because display data that is purely fractional hardly occurs in practice. It is nevertheless possible to add 1*T; in that case, the flip-flop generates not a display signal with the width of the integral part but one with the width of the integral part minus 1*T. It can be understood that the display data generally includes a non-zero integral part. The logical modules in the second sub-module are not limited to those described above; other logical modules, such as AND, OR, XOR, XNOR, and NOT gates or combinations thereof, may also be used, as long as they generate the required display signal from the intermediate display signal and/or the first display signal, which is not limited in the present disclosure. In some embodiments, as shown in FIG. 4, the second module (including a Mux) outputs one of the N clock signals as the selected clock signal according to a selection signal, wherein the selection signal is generated based on the display data. The selection signal serves to generate the fractional part of the display data, and therefore must be generated according to the fractional part of the actual display data so as to determine which of the N clock signals is selected. For the specific selection, reference can be made to the foregoing description; the fractional part actually generated is related not only to the selected clock signal and the first clock signal but also to the logical module finally chosen. Hence, in actual processing there may also be widening or de-widening of the initial display signal, the first display signal or the resulting signal, the choice of the selection signal, the addition of one clock cycle to the initial display data, and so on; depending on the application, these processing steps do not necessarily all exist. In some embodiments, as shown in FIG. 5, the drive circuit further includes a path matching module configured to eliminate an unknown phase difference between the selected clock signal and the first clock signal. With the preceding method, one of the plurality of clock signals, after being selected by the Mux, is input into the second sub-module as the selected clock signal, while the first clock signal, such as CLK[0], is input directly into the first sub-module.
The ability of the drive circuit of the present disclosure to obtain a high-accuracy display signal relies mainly on the phase difference between the first clock signal and the selected clock signal, from which the fractional part, for example i*T/N or (N−i)*T/N, is obtained. However, taking the case where CLK[0] is the first clock signal and CLK[i] is the selected clock signal as an example, the two clock signals generated by the fifth module traverse different circuits (logical paths or circuit paths) before reaching the first sub-module and the second sub-module respectively: CLK[0] is input directly into the first sub-module, whereas CLK[i] passes through the Mux, which contains logic circuits or combinations thereof and possibly other electronic components. The two clock signals may therefore acquire different delays along their transmission or logical paths, and as a result accurate display data cannot be obtained after the logical operation. The path matching module is provided for this reason and comprises two parts: a selection device and a compensation module. The selection device, functionally equivalent to the Mux, replaces it and outputs one of the plurality of clock signals as the selected clock signal based on the selection signal (or the display signal). The compensation module receives and outputs the first clock signal; its practical effect is to apply a delay compensation to the first clock signal matching the delay applied to the selected clock signal by the circuit or logical path it traverses within the selection device (this delay can be understood as an unknown phase difference). When the two delays are equal, the first clock signal and the selected clock signal maintain the desired phase difference, which ensures the accuracy of the display signal. Here, the selection signal is generated based on the display data. Optionally, the logical path of the first clock signal through the compensation module is exactly the same as the logical path of the selected clock signal through the selection device. Taking CLK[0] as the first clock signal and CLK[i] as the selected clock signal as an example, the same logical path means that the circuit path or electronic components that CLK[0] passes through in the compensation module are the same as those that CLK[i] passes through in the selection device, including component types and connection sequence; in effect, the compensation module is a copy of the circuit path of CLK[i] through the selection device. It should be noted that the selection device is fixed and unchangeable; the compensation module is therefore provided and designed according to the selection device so as to match the logical path through which a given clock signal passes within it. The path matching module provided in an embodiment is shown in FIG. 6 to FIG. 7.
Taking an 8-phase clock signal as an example, it can be seen from the drawings that the logical path through which the first clock signal passes successively comprises a NAND gate, a NAND gate, a NOR gate, a NAND gate, and a buffer, and the path through which any selected clock signal passes likewise comprises a NAND gate, a NAND gate, a NOR gate, a NAND gate, and a buffer. The buffer may optionally be implemented as an inverter and is not essential. The selection signal cs may be generated by a decoder, for example a 3-to-8 decoder; as described above, cs is related to the display data, which may also be understood as a control word generated according to the display data and used as the decoder input. The 8-phase clock signal is only an example: when the number of clock phases is different, the circuits inside the compensation module and the selection device must be adapted accordingly, always with the aim of making the logical path of the clock signal through the compensation module exactly the same as that through the selection device; that is to say, the compensation module copies the logical path of a given clock signal through the selection device. In addition, it should be noted that the present disclosure can further include a second module of the following structure, which differs from the preceding second module in that it has no second sub-module but instead includes N intermediate modules, each receiving one of the N clock signals and the first display signal, the clock signals received by the various intermediate modules differing from one another. Each intermediate module (using the same circuit structure as the preceding intermediate module, such as a flip-flop) generates an intermediate display signal based on its received clock signal and the first display signal. This second module further includes a selection module, which receives the outputs of the various intermediate modules and, based on the display data, outputs one of them as the final intermediate display signal. The logical processing of this final intermediate display signal with the first display signal is the same as the logical operations and related processing (such as widening, de-widening, or adding one clock cycle to the display data) introduced above for the intermediate display signal and the first display signal. A second module of this structure selects not among the clocks but among the generated display signals. The present disclosure further provides a second module of yet another structure, which likewise differs from the first type of second module described above. As shown in FIG. 8, compared with the first type of second module, the second module of this structure does not select among the plurality of clock signals but among the display signals generated by a number of display signal generation modules; since the selection signal is related to the display data, the output of the corresponding display signal generation module is selected for output based on the display data.
Meanwhile, it should be noted that the second module of the structure described below includes a third sub-module, fourth sub-modules, a fifth sub-module, and so on; these are merely names and do not imply that the first sub-module and the second sub-module of the preceding structure are included in the following structure. This second module includes: a third sub-module, generating a first display signal based on the first clock signal and a reset signal; a plurality of fourth sub-modules, each generating a second display signal based on one of the second clock signals and the reset signal, the clock signal received by each fourth sub-module differing from the others; and a fifth sub-module, receiving the display signals generated by the third sub-module and the plurality of fourth sub-modules and outputting one of them according to the selection signal. The first clock signal is any one of the N clock signals; the second clock signals are the clock signals among the N clock signals other than the first clock signal; and the reset signal and the selection signal are generated according to the display data. Optionally, the reset signal is the first display signal of the preceding embodiments, that is, the display signal generated from the first clock signal and the display data; the same structure or circuit as the first sub-module of the preceding embodiments may therefore be used to generate it, and the reset signal likewise represents the integral part of the display data. The third sub-module and the plurality of fourth sub-modules of the present embodiment are collectively referred to hereinafter as display signal generation modules. Each display signal generation module may be a flip-flop, such as a D flip-flop (DFF), an RS flip-flop, or another flip-flop, having a reset terminal RESET that receives the reset signal. The display signal output from the fifth sub-module (the first display signal or one of the plurality of second display signals) can be output directly as the display signal. The display data may also be processed by adding one T, which handles the case where the display data is purely fractional and makes the obtained signal correspond directly to the width of the initial display data. Alternatively, the purely fractional case may be disregarded and the addition of one T omitted, a width being added instead after the fifth sub-module outputs its signal (that is, the signal output by the fifth sub-module is widened by one clock cycle); the widening method has been introduced above and is not repeated here. Whichever method is used, in order to obtain a suitable display signal, attention must be paid to the selection of the second clock signal, as with the selection of the selected clock signal introduced above: for example, to obtain 3.2*T, the second clock signal actually needed may be CLK[8] rather than CLK[2].
Optionally, when a flip-flop having a reset terminal is selected, for example a D flip-flop having a reset terminal, the third sub-module has its D terminal connected to VSS, its CK terminal receiving the first clock signal, for example CLK[0], its Q terminal used as output, and its reset terminal RN receiving the reset signal; each fourth sub-module has its D terminal connected to VDD, its CK terminal receiving one of the second clock signals, for example CLK[1], its Q terminal used as output, and its reset terminal RN receiving the reset signal. Logical processing may also be performed on the signal output by the fifth sub-module: for example, the signal output by the fifth sub-module is inverted, and a logical operation is then performed on the inverted signal and the first display signal. That is, the second module further includes a sixth sub-module, which inverts the signal output by the fifth sub-module and performs a logical operation on the inverted signal and the first display signal. The inversion in the present disclosure may be realized by an inverter or a NOT gate, and optionally a logical AND is performed on the inverted signal and the first display signal. The sixth sub-module may also be implemented as an XOR, performing an XOR operation on the signal output by the fifth sub-module and the first display signal. In either of the above manners, the data after the logical AND or the XOR may be used as the display signal. Alternatively, a width of an integer number of clock cycles may be added directly to the display signal after the logical AND or the XOR (that is, that display signal is widened by an integer number of clock cycles), and this integer width may be selected as the width of the first display signal. Note that the width of an integer number of clock cycles refers to the width corresponding to the integral part of the display data, such as 4*T; the method of adding the width has been described above and is not repeated here, the only difference being the value of the width added. In addition, one clock cycle may also be added to the display data, in which case the width to be added becomes the integral part minus 1*T: assuming the actual display data is 3.3*T, the display data after the addition of T is 4.3*T, and the width added is 3*T. Of course, the logical module performing the logical processing, the subsequent processing of the logical module, and the processing of the display data are not limited to the above; different logical processing methods or circuits may be designed as required, such that a width of an integer number of clock cycles is added to a suitable display signal so as to generate a display signal accurate to a fractional part, thereby improving the display accuracy. Optionally, when a flip-flop with a reset terminal is used, for example a DFF with reset, the third sub-module and the fourth sub-modules may also all have their D terminals connected to VDD, their CK terminals receiving the corresponding clock signals, their Q terminals used as output, and their reset terminals RN receiving the reset signal. It can be understood that in the present embodiment the third sub-module generates the first display signal based on the first clock signal and the reset signal.
The reset signal is associated with the display data; in a particular case, it can be generated on the basis of the first sub-module of the embodiment of the second module of the preceding structure. There are a plurality of fourth sub-modules, for example N−1 of them, and each of the second clock signals is input into the CK terminal of a respective fourth sub-module, the clock signals received by the various CK terminals differing from one another: for example, the first of the fourth sub-modules receives CLK[1], the second receives CLK[2], and the (N−1)th receives CLK[N−1]. Optionally, the drive circuit further includes a compensation module configured to eliminate an unknown delay between the first display signal and the display signal selected by the selection signal. In the present embodiment the function of the compensation module is the same as that of the preceding compensation module, with the fifth sub-module playing the role of the preceding selection device: the logical path of the first display signal through the compensation module is identical to the logical path of the selected display signal through the selection device (the fifth sub-module). This solution ensures that only the desired delay exists between the two display signals. Generally, the rising edges or the falling edges of the two signals should be aligned; however, when the two signals pass through different circuits, different delays are introduced between them, the edges can no longer be aligned, and the signal obtained after the logical processing may deviate from the theoretical signal, mainly in that its width is no longer the theoretical width. By means of compensation, the delays introduced into the two display signals are made the same, their edges (rising or falling) remain aligned, and the width of the display signal after the logical processing is the theoretical or desired value. Since the display signal width determines the fractional part of the display signal, the display signal finally generated is more accurate. Optionally, the number of second modules is H, where H is the number of channels and is an integer greater than or equal to 4; that is to say, for a drive circuit having H output channels, each channel includes a second module. Optionally, there is one fifth module, which provides the plurality of clock signals to all channels; correspondingly, each channel may be configured with its own compensation module and selection device. Optionally, the first module of the present disclosure is further configured to add one clock cycle to the display data and output the result.
That is, the display data processed by the second module is actually the data generated after 1*T is added to the initial display data. One purpose is that the very rare case in which the display data is purely fractional, such as 0.3*T, can be handled; another is that, after the addition of 1*T, a suitable display signal corresponding to the real display data can be obtained for second modules of certain structures using certain logic, for example the second module of the preceding first structure using an AND gate. In addition, the present disclosure further provides a drive chip, which includes the preceding drive circuit. The present disclosure further provides a display device, which includes display equipment and the preceding drive chip, wherein the drive chip generates a drive signal so as to drive the display equipment to display. The display device in the present embodiment may be understood as a device that can independently complete the display of a signal or an image, such as an advertising screen, a display screen, or a television. The above are merely preferred embodiments of the present disclosure. It should be understood that the present disclosure is not restricted to the forms disclosed herein and should not be regarded as excluding other embodiments; it may be used in various other combinations, modifications, and environments, and can be altered through the above teachings or through technologies or knowledge in the related art within the scope of the concept described herein. All alterations and changes made by persons skilled in the art that do not depart from the spirit and scope of the present disclosure shall fall within the scope of protection of the claims attached to the present disclosure. | 41,957
11862076 | DESCRIPTION OF THE EMBODIMENTS Unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art related to the disclosure. It will be further understood that terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the related art or the context of the disclosure, and should not be interpreted in an idealized or overly formal sense unless expressly so defined in the disclosure. It should be understood that, although terms such as "first", "second", and "third" may be used to describe various elements, members, regions, layers, and/or parts, these elements, members, regions, layers, and/or parts are not limited by these terms. These terms are used only to distinguish one element, member, region, layer, or part from another. Accordingly, a first "element", "member", "region", "layer", or "part" in the following description may be termed a second element, member, region, layer, or part without departing from the teachings of the disclosure. The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting. As used herein, the singular forms "a", "an", and "the" are intended to include the plural forms, including "at least one", unless the context clearly indicates otherwise. It will be further understood that the terms "comprise" and/or "include", when used in this specification, specify the presence of stated features, regions, integers, steps, operations, elements, and/or members, but do not preclude the presence or addition of one or more other features, regions, integers, steps, operations, elements, members, and/or combinations thereof. FIG. 1 is a schematic system diagram of a display device including a light-emitting diode display module according to the first embodiment of the disclosure. With reference to FIG. 1, in this embodiment, a display device 1 includes a power supply module 10 and a light-emitting diode display module 100. The light-emitting diode display module 100 is coupled to the power supply module 10 and receives one set of power supplies (e.g., a high power supply voltage VDD and a low power supply voltage GND) from the power supply module 10. The light-emitting diode display module 100 then performs conversion into an auxiliary power supply voltage Vaux according to the received set of power supplies, so as to drive light-emitting diodes having a relatively low operating voltage based on the high power supply voltage VDD and the auxiliary power supply voltage Vaux. In this embodiment, the light-emitting diode display module 100 includes a receiver block 110, a voltage conversion block 120, a scan block 131, a first sink block 141, a second sink block 143, a first data buffer 151, a second data buffer 153, a third data buffer 155, a first light-emitting diode LD1, a second light-emitting diode LD2, and a third light-emitting diode LD3. The first light-emitting diode LD1 has a first operating voltage and has a first anode and a first cathode. The second light-emitting diode LD2 has a second operating voltage, greater than the first operating voltage, and has a second anode and a second cathode. The third light-emitting diode LD3 has the second operating voltage and has a third anode and a third cathode.
For example, the first light-emitting diode LD1 is a red light-emitting diode, and the second light-emitting diode LD2 and the third light-emitting diode LD3 are a green light-emitting diode and a blue light-emitting diode. The scan block 131 receives the high power supply voltage VDD and the low power supply voltage GND, and is coupled to the first anode of the first light-emitting diode LD1, the second anode of the second light-emitting diode LD2, and the third anode of the third light-emitting diode LD3 to provide the high power supply voltage VDD to a corresponding one of these anodes based on a first display data XD1. The voltage conversion block 120 receives the high power supply voltage VDD and the low power supply voltage GND to provide the auxiliary power supply voltage Vaux, which ranges between the high power supply voltage VDD and the low power supply voltage GND. The first sink block 141 receives the high power supply voltage VDD and the auxiliary power supply voltage Vaux, and is coupled to the first cathode of the first light-emitting diode LD1. Based on a second display data XD2, the first sink block 141 provides the auxiliary power supply voltage Vaux to the first cathode of the first light-emitting diode LD1 and limits the current flowing through the first light-emitting diode LD1 to drive it. The second sink block 143 receives the high power supply voltage VDD and the low power supply voltage GND, and is coupled to the second cathode of the second light-emitting diode LD2 and the third cathode of the third light-emitting diode LD3. Based on a third display data XD3, the second sink block 143 provides the low power supply voltage GND to the second cathode of the second light-emitting diode LD2 or the third cathode of the third light-emitting diode LD3, and limits the current flowing through the second light-emitting diode LD2 and the third light-emitting diode LD3 to drive them. According to the above, the first sink block 141, which drives the first light-emitting diode LD1, provides the auxiliary power supply voltage Vaux to the first cathode of the first light-emitting diode LD1. Since the auxiliary power supply voltage Vaux is higher than the low power supply voltage GND, the voltage provided across the first light-emitting diode LD1 is reduced to meet its voltage requirements; accordingly, the power consumption and heat of the first light-emitting diode LD1 may be reduced without affecting its driving. In addition, since the second sink block 143 still provides the low power supply voltage GND to the second cathode of the second light-emitting diode LD2 or the third cathode of the third light-emitting diode LD3, the second light-emitting diode LD2 or the third light-emitting diode LD3 may still be driven normally without being affected. Moreover, the light-emitting diode display module 100 receives one set of power supplies (e.g., the high power supply voltage VDD and the low power supply voltage GND) from the power supply module 10; that is, the power supply module 10 is not required to provide voltages of multiple levels.
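The headroom arithmetic behind this arrangement is simple; a minimal Python sketch, using illustrative values consistent with the numerical example given below rather than measured data:

```python
# Sketch of the relation described below: Vaux is chosen so that the first
# LED's operating voltage plus Vaux spans the full supply difference, leaving
# less excess voltage for the first sink block to burn off as heat.

def aux_voltage(vdd: float, gnd: float, v_op_first_led: float) -> float:
    return (vdd - gnd) - v_op_first_led

print(aux_voltage(3.8, 0.0, 2.8))  # 1.0 V, matching the example below
```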
The light-emitting diode display module 100 may thus achieve flexibility and space utilization in use, and is not limited by the number of power supplies output by the power supply module. Next, the receiver block 110 receives the high power supply voltage VDD, the low power supply voltage GND, and an image signal Simg, and is coupled to the first data buffer 151, the second data buffer 153, and the third data buffer 155 to provide the first display data XD1, the second display data XD2, and the third display data XD3 based on the image signal Simg. The first data buffer 151 receives the high power supply voltage VDD, the low power supply voltage GND, and the first display data XD1, and is coupled to the scan block 131 to provide the first display data XD1 to the scan block 131. The second data buffer 153 receives the high power supply voltage VDD, the auxiliary power supply voltage Vaux, and the second display data XD2, and is coupled to the first sink block 141 to provide the second display data XD2 to the first sink block 141. The third data buffer 155 receives the high power supply voltage VDD, the low power supply voltage GND, and the third display data XD3, and is coupled to the second sink block 143 to provide the third display data XD3 to the second sink block 143. In this embodiment, the receiver block 110 may be a receiver card, and the scan block 131, the first sink block 141, and the second sink block 143 may each be an integrated circuit. In addition, the voltage conversion block 120 may include a DC-to-DC converter to reduce the high power supply voltage VDD to the auxiliary power supply voltage Vaux. Moreover, a sum of the first operating voltage of the first light-emitting diode LD1 and the auxiliary power supply voltage Vaux is equal to the difference between the high power supply voltage VDD and the low power supply voltage GND. For example, assuming that the high power supply voltage VDD is 3.8 volts (V) and the low power supply voltage GND is 0 V, if the first operating voltage of the first light-emitting diode LD1 is 2.8 V, the auxiliary power supply voltage Vaux may be 1 V. FIG. 2 is a schematic system diagram of a display device including a light-emitting diode display module according to the second embodiment of the disclosure. With reference to FIG. 1 and FIG. 2, a display device 2 is substantially the same as the display device 1, differing in a driver integrated circuit 130 and a data buffer block 150 of a light-emitting diode display module 100a, where the same or similar elements are labeled with the same or similar reference numerals. In this embodiment, the driver integrated circuit 130 may integrate at least the scan block 131, the first sink block 141, and the second sink block 143 shown in FIG. 1, and the data buffer block 150 may integrate at least the first data buffer 151, the second data buffer 153, and the third data buffer 155 shown in FIG. 1. FIG. 3 is a schematic system diagram of a display device including a light-emitting diode display module according to the third embodiment of the disclosure. With reference to FIG. 1 and FIG. 3, a display device 3 is substantially the same as the display device 1, differing in a motherboard 101, a connecting port 102, and a light board 103 of a light-emitting diode display module 100b, where the same or similar elements are labeled with the same or similar reference numerals.
In this embodiment, the receiver block 110 and the voltage conversion block 120 are disposed on the motherboard 101 (i.e., the control board) of the light-emitting diode display module 100b, and the first light-emitting diode LD1, the second light-emitting diode LD2, the third light-emitting diode LD3, the scan block 131, the first sink block 141, the second sink block 143, the first data buffer 151, the second data buffer 153, and the third data buffer 155 are disposed on the light board 103 of the light-emitting diode display module 100b. Moreover, the motherboard 101 and the light board 103 are bonded to each other through the connecting port 102. The number of motherboards 101 and the number of light boards 103 are determined by the circuit design. FIG. 4 is a schematic system diagram of a display device including a light-emitting diode display module according to the fourth embodiment of the disclosure. With reference to FIG. 1 and FIG. 4, a display device 4 is substantially the same as the display device 1, differing in a motherboard 101a, the connecting port 102, and a light board 103a of a light-emitting diode display module 100c, where the same or similar elements are labeled with the same or similar reference numerals. In this embodiment, the receiver block 110 is disposed on the motherboard 101a of the light-emitting diode display module 100c, and the voltage conversion block 120, the first light-emitting diode LD1, the second light-emitting diode LD2, the third light-emitting diode LD3, the scan block 131, the first sink block 141, the second sink block 143, the first data buffer 151, the second data buffer 153, and the third data buffer 155 are disposed on the light board 103a of the light-emitting diode display module 100c. Moreover, the motherboard 101a and the light board 103a are bonded to each other through the connecting port 102. The number of motherboards 101a and the number of light boards 103a are determined by the circuit design. In summary of the foregoing, in the light-emitting diode display module of the embodiments of the disclosure, based on a high power supply voltage and a low power supply voltage, the voltage conversion block provides an auxiliary power supply voltage between the high power supply voltage and the low power supply voltage, and the high power supply voltage and the auxiliary power supply voltage are used to drive light-emitting diodes having a relatively low operating voltage. Accordingly, the light-emitting diode display module may receive one set of power supplies (e.g., the high power supply voltage and the low power supply voltage) from the power supply module; that is, the power supply module is not required to provide voltages of multiple levels. As a result, the light-emitting diode display module may achieve flexibility and space utilization in use, and is not limited by the number of power supplies output by the power supply module. It will be apparent to those skilled in the art that various modifications and variations can be made to the disclosed embodiments without departing from the scope or spirit of the disclosure. In view of the foregoing, it is intended that the disclosure covers modifications and variations provided that they fall within the scope of the following claims and their equivalents. | 13,612
11862077 | DETAILED DESCRIPTION Technical solutions in some embodiments of the present disclosure will be described clearly and completely below with reference to the accompanying drawings. However, the described embodiments are merely some but not all embodiments of the present disclosure. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present disclosure shall be included in the protection scope of the present disclosure. Unless the context requires otherwise, throughout the description and the claims, the term "comprise" and other forms thereof, such as the third-person singular form "comprises" and the present participle form "comprising", are construed in an open and inclusive sense, i.e., "including, but not limited to". In the description of the specification, terms such as "one embodiment", "some embodiments", "exemplary embodiments", "example", or "some examples" are intended to indicate that specific features, structures, materials, or characteristics related to the embodiment(s) or example(s) are included in at least one embodiment or example of the present disclosure. Schematic representations of the above terms do not necessarily refer to the same embodiment(s) or example(s). In addition, the specific features, structures, materials, or characteristics may be included in any one or more embodiments or examples in any suitable manner. In the description of some embodiments, terms such as "coupled" and "connected" and derivatives thereof may be used. For example, the term "connected" may be used in the description of some embodiments to indicate that two or more components are in direct physical or electrical contact with each other. As another example, the term "coupled" may be used in the description of some embodiments to indicate that two or more components are in direct physical or electrical contact. However, the term "coupled" or "communicatively coupled" may also mean that two or more components are not in direct contact with each other but still cooperate or interact with each other. The embodiments disclosed herein are not necessarily limited to the context herein. The phrase "at least one of A, B and C" has the same meaning as the phrase "at least one of A, B or C", and both include the following combinations of A, B and C: only A, only B, only C, a combination of A and B, a combination of A and C, a combination of B and C, and a combination of A, B and C. The phrase "A and/or B" includes the following three combinations: only A, only B, and a combination of A and B. As used herein, the term "if" is optionally construed as "when", "in a case where", "in response to determining that", or "in response to detecting", depending on the context. Similarly, the phrase "if it is determined that" or "if [a stated condition or event] is detected" is optionally construed as "in a case where it is determined that", "in response to determining that", "in a case where [the stated condition or event] is detected", or "in response to detecting [the stated condition or event]", depending on the context. In addition, the use of the phrase "based on" is meant to be open and inclusive, since a process, step, calculation, or other action that is "based on" one or more of the stated conditions or values may, in practice, be based on additional conditions or values beyond those stated.
The term "about" or "approximately" as used herein includes a stated value and an average value within an acceptable range of deviation of a particular value determined by a person of ordinary skill in the art, considering the measurement in question and the errors associated with the measurement of the particular quantity (i.e., the limitations of the measurement system). In the related art, an image distortion phenomenon may occur on a display apparatus. A reason for the image distortion phenomenon will be described below by taking an example in which the display apparatus is an OLED display apparatus. The OLED display apparatus includes a display panel, which generally includes transistors (such as thin film transistors or metal-oxide-semiconductor field-effect transistors) and OLED light-emitting devices. Hereinafter, the description is made by taking an example in which the transistors are thin film transistors. The thin film transistors and the OLED light-emitting devices age during operation, and this aging consumes a part of the total voltage. In a case where the total voltage of the OLED display apparatus is kept constant, since the thin film transistors and the OLED light-emitting devices consume part of the total voltage, the remaining part of the total voltage used for displaying an image (i.e., the gamma voltage) is reduced. In addition, as shown in FIG. 1, the gamma voltage curve of the OLED display apparatus in the related art is a linear curve; therefore, when the gamma voltage is decreased, a phenomenon of gray scale loss easily occurs. Gray scale loss means that the number of gray scales finally displayed by the OLED display apparatus is less than the number of complete gray scales. For example, as shown in FIG. 2, for the above reason a 0 gray scale and a 1 gray scale share one output voltage (the output voltage being, for example, the gamma voltage); as a result, an image displaying the 0 gray scale cannot be distinguished from an image displaying the 1 gray scale, and the image distortion phenomenon occurs. As shown in FIG. 1, in the coordinate system of the gamma voltage curve, the horizontal coordinate represents gray scales, a gray scale being the gray scale corresponding to a data voltage actually input to realize image display on the OLED display apparatus; the vertical coordinate represents the gamma voltage corresponding to each gray scale. The data voltage is converted from the gamma voltage. It will be noted that the gray scales are obtained by dividing the luminance change between the brightest luminance and the darkest luminance that the display panel of the OLED display apparatus can display into levels, so as to control the luminance of the display panel. Each frame of the display image displayed by the display panel is composed of the colors displayed by multiple pixels. Generally, each pixel is capable of presenting different colors, and each color is composed of the three primary colors of red, green and blue; each pixel includes sub-pixels such as a red sub-pixel R, a green sub-pixel G, and a blue sub-pixel B. Each sub-pixel is capable of presenting different luminance levels, and the gray scales represent the levels of luminance from the darkest to the brightest: the more luminance levels there are between darkest and brightest, the more delicate the presented image is.
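Why a linear gamma curve loses gray scales when its voltage span shrinks can be illustrated with a quantized mapping; a Python sketch with purely illustrative numbers (the DAC step and voltages are assumptions, not the panel's real values):

```python
# Sketch: a linear gray-to-voltage mapping quantized to a fixed DAC step.
# When aging shrinks the usable gamma range, neighboring low gray scales
# collapse onto one output voltage, as with the 0 and 1 gray scales above.

def linear_gamma(gray: int, v_max: float, levels: int = 1024, step: float = 0.005):
    volts = gray * v_max / (levels - 1)  # linear curve over the gray range
    return round(volts / step) * step    # quantize to the DAC resolution

print(linear_gamma(0, 5.0), linear_gamma(1, 5.0))  # 0.0 0.005 -> still distinct
print(linear_gamma(0, 2.0), linear_gamma(1, 2.0))  # 0.0 0.0   -> gray scale lost
```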
Color depth, which may be referred to as color bit depth, is a unit that expresses the number of colors of a digital image in bits. For example, an 8-bit display panel is capable of representing 2 to the 8th power (i.e., 256) luminance levels, which may also be referred to as 256 gray scales. For another example, a 10-bit display panel is capable of representing 2 to the 10th power (i.e., 1024) luminance levels, which may also be referred to as 1024 gray scales. Based on this, some embodiments of the present disclosure provide a display apparatus 1000. As shown in FIG. 20, the display apparatus 1000 includes a display panel 200 and a timing controller 100 coupled to the display panel 200. The timing controller 100 is configured to receive data of gray scales of at least one frame of image to be displayed by the display apparatus 1000 and to output a plurality of data voltages and a plurality of gamma voltages according to the data of gray scales. In some embodiments, as shown in FIG. 20, the display apparatus 1000 further includes a source driver 300 and a gate driver 400. One terminal of the source driver 300 is coupled to the timing controller 100, and another terminal of the source driver 300 is coupled to the display panel 200; one terminal of the gate driver 400 is coupled to the timing controller 100, and another terminal of the gate driver 400 is coupled to the display panel 200. In this case, the timing controller 100 is further configured to receive a timing control signal Timing, output a source control signal (abbreviated as SCS) to the source driver 300, and output a gate control signal (abbreviated as GCS) to the gate driver 400. In some embodiments, the display panel 200 includes a plurality of pixels, and each pixel includes pixel drive circuits. For example, the display panel 200 is an OLED display panel, and the pixel drive circuit is a 2T1C pixel drive circuit. As shown in FIG. 19, the 2T1C pixel drive circuit includes a data line (abbreviated as DL), a gate line (abbreviated as GL), a storage capacitor Cst, a driving thin film transistor T1, a switching thin film transistor T2, and an OLED light-emitting device. An anode of the OLED light-emitting device is connected through the driving thin film transistor T1 to a terminal ELVDD for outputting a driving voltage, and a cathode of the OLED light-emitting device is connected to another terminal ELVSS for outputting a low-level power supply voltage. Based on this, the source driver 300 is configured to receive the data voltages, the gamma voltages, and the source control signal output by the timing controller 100, generate source driving voltages Vdata, and transmit the source driving voltages Vdata to the display panel 200 through the data lines. The gate driver 400 is configured to receive the gate control signal output by the timing controller 100, generate gate driving voltage(s) Vgata, and transmit the gate driving voltage(s) Vgata to the display panel 200 through at least one gate line GL. The display panel 200 is capable of displaying an image under the cooperation of the source driving voltages Vdata, the gate driving voltage(s) Vgata, the driving voltage, and the low-level power supply voltage. Some embodiments of the present disclosure also provide a method for improving image display quality. As shown in FIG. 3, the method includes S100 to S400. In S100, a total gray scale range of a gamma voltage curve of a display apparatus is divided to obtain a plurality of gray scale ranges.
In the embodiments of the present disclosure, in an example where the display panel included in the display apparatus is the 10-bit display panel, the total gray scale range of the gamma voltage curve of the display apparatus is 0 to 1023. For example, the horizontal coordinate of the gamma voltage curve as shown in FIG. 11 includes 1024 gray scale coordinates, and grayscale values of the gray scale coordinates may be 0 gray scale (G0), 1 gray scale (G1), 2 gray scale (G2), …, 1023 gray scale (G1023). In some embodiments, the number of the gray scale ranges obtained by dividing the total gray scale range may be 2 to 5, and the number of the gray scale ranges may be selectively set according to actual needs. In a process of dividing the 1024 gray scale coordinates in the total gray scale range into the gray scale ranges, for example, the 1024 gray scale coordinates in the total gray scale range may be divided into a low gray scale range, a medium gray scale range and a high gray scale range, so that three gray scale ranges are obtained. As another example, the 1024 gray scale coordinates in the total gray scale range may be divided into a low gray scale range and a high gray scale range, so that two gray scale ranges are obtained. As yet another example, the 1024 gray scale coordinates in the total gray scale range may be divided into a low gray scale range, a medium gray scale range, a secondary high gray scale range and a high gray scale range, so that four gray scale ranges are obtained. As yet another example, the 1024 gray scale coordinates in the total gray scale range may be divided according to other division principles, so that five gray scale ranges are obtained. Here, the number of gray scale coordinates in each gray scale range may be selectively set according to actual needs. In S200, data of gray scales of at least one frame of image to be displayed by the display apparatus is obtained, and a ratio of the data of gray scales in each gray scale range to the data of gray scales of the at least one frame of image to be displayed is calculated. In some examples, the at least one frame of image to be displayed by the display apparatus includes the data of gray scales, and the data of gray scales includes at least one of data of red gray scales for displaying a red color, data of green gray scales for displaying a green color, and data of blue gray scales for displaying a blue color. Here, the data of gray scales is a digital signal, and the data of gray scales includes a plurality of gray scale coordinates for displaying a certain color. For example, the obtained data of gray scales of the at least one frame of image to be displayed by the display apparatus includes the data of red gray scales, the data of green gray scales, and the data of blue gray scales. The data of red gray scales, the data of green gray scales, and the data of blue gray scales each include multiple gray scale coordinates. For data of gray scales for displaying a same color, grayscale values of different gray scale coordinates are different. After the data of gray scales is obtained, for the data of gray scales for displaying each color (i.e., the data of red gray scales, the data of green gray scales, or the data of blue gray scales), the ratio of the data of gray scales in each gray scale range is calculated to determine the data of gray scales that is mainly displayed in the at least one frame of image to be displayed. For example, the 1024 gray scale coordinates in the total gray scale range are divided into three gray scale ranges.
The three gray scale ranges are the low gray scale range (which is, for example, in a range of 0 gray scale to 255 gray scale), the medium gray scale range (which is, for example, in a range of 256 gray scale to 767 gray scale) and the high gray scale range (which is, for example, in a range of 768 gray scale to 1023 gray scale). In an example where the data of red gray scales of the at least one frame of image to be displayed by the display apparatus is obtained, a ratio of gray scale coordinates of the data of red gray scales in the low gray scale range, a ratio of gray scale coordinates of the data of red gray scales in the medium gray scale range and a ratio of gray scale coordinates of the data of red gray scales in the high gray scale range may be respectively calculated. In a case where the ratio of the gray scale coordinates of the data of red gray scales in the low gray scale range is maximum, it means that the at least one frame of image to be displayed is mainly displayed in the low gray scale range. In a case where the ratio of the gray scale coordinates of the data of red gray scales in the medium gray scale range is maximum, it means that the at least one frame of image to be displayed is mainly displayed in the medium gray scale range. In a case where the ratio of the gray scale coordinates of the data of red gray scales in the high gray scale range is maximum, it means that the at least one frame of image to be displayed is mainly displayed in the high gray scale range. In S300, a division value of a gamma voltage range corresponding to each gray scale range of the gamma voltage curve is adjusted according to the calculated ratios, so that a division value of the gamma voltage range corresponding to the gray scale range with a maximum ratio is less than a division value of the gamma voltage range corresponding to any remaining gray scale range. Consider an example where the data of red gray scales of the at least one frame of image to be displayed by the display apparatus is obtained, and the total gray scale range of the gray scale coordinates of the data of red gray scales is divided into the low gray scale range, the medium gray scale range and the high gray scale range. In a case where the ratio of the gray scale coordinates of the data of red gray scales in the low gray scale range is maximum, the division value of the gamma voltage range corresponding to each gray scale range of the gamma voltage curve is adjusted, so that the division value of the gamma voltage range corresponding to the low gray scale range is less than the division values of the gamma voltage ranges corresponding to both the medium gray scale range and the high gray scale range (as shown in FIGS. 12 and 13). In this way, the low gray scale range of the at least one frame of image to be displayed may have a finer voltage subdivision accuracy, which avoids the gray scale loss of the low gray scale range and the image distortion, and improves the capability for presenting a display image in the low gray scale range. As a result, the image display quality of the display apparatus is improved. In S400, gamma voltages corresponding to the data of gray scales of the at least one frame of image to be displayed are output according to the adjusted gamma voltage curve. In some embodiments, the adjusted gamma voltage curve is as shown in FIGS. 13 and 16, and the selection logic of S200 to S300 is sketched below.
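The following sketch (function names assumed; input is a flat list of per-pixel gray values for one color channel) shows the essence of S200-S300: measure which gray scale range dominates a frame, so that S300 can give that range the smallest division value. The actual division values are derived from endpoint luminance, as shown further below.

```python
# S200-S300 essence: find the dominant gray scale range of a frame.

GRAY_RANGES = {"low": range(0, 256), "medium": range(256, 768), "high": range(768, 1024)}

def dominant_range(gray_values):
    counts = {name: 0 for name in GRAY_RANGES}
    for g in gray_values:
        for name, r in GRAY_RANGES.items():
            if g in r:
                counts[name] += 1
                break
    ratios = {name: c / len(gray_values) for name, c in counts.items()}
    return max(ratios, key=ratios.get), ratios

dominant, ratios = dominant_range([3, 10, 200, 180, 900])
print(dominant, ratios)  # 'low' dominates -> its gamma range gets the finest subdivision
```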
In FIG. 13, the division value of the gamma voltage range corresponding to the low gray scale range is less than the division values of the gamma voltage ranges corresponding to both the medium gray scale range and the high gray scale range, while in FIG. 16, the division value of the gamma voltage range corresponding to the high gray scale range is less than the division values of the gamma voltage ranges corresponding to both the medium gray scale range and the low gray scale range. In this case, the gamma voltages corresponding to the data of gray scales of the at least one frame of image to be displayed may be output according to the curve as shown in FIG. 13 or FIG. 16. In some other examples, in a process of adjusting the division value of the gamma voltage range corresponding to each gray scale range of the gamma voltage curve, as shown in FIG. 11, a compensation voltage ΔV for the thin film transistors and the OLED light-emitting devices in pixel driving circuits of the display apparatus is further obtained. For example, the compensation voltage ΔV for the thin film transistors and the OLED light-emitting devices in the pixel driving circuits of the display apparatus may be obtained as follows. After the threshold voltages Vth of the thin film transistors are compensated each time, a minimum value Vth_min(N) of the threshold voltages Vth of all the thin film transistors in the display apparatus may be counted; and after the efficiency η of the OLED light-emitting devices is compensated each time, a minimum value η_min(N) of the efficiency η of all the OLED light-emitting devices in the display apparatus may be counted. In this way, a compensation voltage change ΔV_(N), which is a quotient of the minimum value Vth_min(N) of the threshold voltages Vth of the thin film transistors and the minimum value η_min(N) of the efficiency η of the OLED light-emitting devices, is obtained, i.e., ΔV_(N) = Vth_min(N) ÷ η_min(N). In addition, a quotient ΔV_(1) of a minimum value Vth_min(1) of the threshold voltages Vth that is counted after the initial compensation for the threshold voltages Vth of the thin film transistors and a minimum value η_min(1) of the efficiency η that is counted after the initial compensation for the efficiency η of the OLED light-emitting devices is obtained, i.e., ΔV_(1) = Vth_min(1) ÷ η_min(1). In this way, the compensation voltage ΔV for the thin film transistors and the OLED light-emitting devices in the display apparatus may be obtained; that is, ΔV is equal to ΔV_(N) minus ΔV_(1) (i.e., ΔV = ΔV_(N) − ΔV_(1)). Based on this, the adjusted gamma voltage curve may also be as shown in FIG. 14. That is, on the basis of adjusting the division value of the gamma voltage range corresponding to each gray scale range of the gamma voltage curve, the compensation voltage for the thin film transistors and the OLED light-emitting devices in the display apparatus is also taken into account, and through both of these, the gamma voltages corresponding to the data of gray scales of the at least one frame of image to be displayed are determined. In this case, the gamma voltages corresponding to the data of gray scales of the at least one frame of image to be displayed may also be output according to the curve as shown in FIG. 14. As shown in FIG. 4, the method for improving the image display quality in some embodiments of the present disclosure further includes S500 to S600. In S500, each of the data of gray scales of the at least one frame of image to be displayed is converted into luminance data.
In S600, a data voltage corresponding to each of the data of gray scales of the at least one frame of image to be displayed is output according to the gamma voltage and the luminance data corresponding to each of the data of gray scales of the at least one frame of image to be displayed. In some examples, the conversion relationship between the gray scale and the luminance conforms to a gamma curve. For example, the gamma curve is a gamma curve of 2.2 (as shown in FIG. 10). For example, the conversion relationship between the gray scale and the luminance conforms to the following formulas: L_R = (G_R/1024)^2.2, L_G = (G_G/1024)^2.2, and L_B = (G_B/1024)^2.2. L_R represents the luminance data corresponding to the data of red gray scale (i.e., the gray scale coordinate), and G_R represents the data of red gray scale (i.e., the gray scale coordinate). L_G represents the luminance data corresponding to the data of green gray scale (i.e., the gray scale coordinate), and G_G represents the data of green gray scale (i.e., the gray scale coordinate). L_B represents the luminance data corresponding to the data of blue gray scale (i.e., the gray scale coordinate), and G_B represents the data of blue gray scale (i.e., the gray scale coordinate). For example, the conversion relationship between the luminance data and the gamma voltage corresponding to each of the data of gray scales of the at least one frame of image to be displayed conforms to the following formulas: V_R = (L_R)^0.5, V_G = (L_G)^0.5, and V_B = (L_B)^0.5. V_R represents the gamma voltage corresponding to the luminance of the data of red gray scale (i.e., the gray scale coordinate); V_G represents the gamma voltage corresponding to the luminance of the data of green gray scale (i.e., the gray scale coordinate); and V_B represents the gamma voltage corresponding to the luminance of the data of blue gray scale (i.e., the gray scale coordinate). Here, taking the data of red gray scale (i.e., the gray scale coordinate) as an example, the conversion relationship between the data voltage and the gamma voltage corresponding to the data of red gray scale conforms to the following formulas. In a case where V_R is equal to or more than 0 V and equal to or less than 2.5 V (i.e., 0 V ≤ V_R ≤ 2.5 V), the data voltage Data(R) equals V_R ÷ 0.0098 (i.e., Data(R) = V_R/0.0098). In a case where V_R is equal to or more than 2.5 V and equal to or less than 7.5 V, the data voltage Data(R) equals V_R ÷ 0.0175 (i.e., Data(R) = V_R/0.0175). In a case where V_R is equal to or more than 7.5 V and equal to or less than 10 V, the data voltage Data(R) equals V_R ÷ 0.0175 (i.e., Data(R) = V_R/0.0175). It will be noted that the serial numbers of the steps (e.g., S100 to S600) are only for the purpose of describing the content of each step more clearly, and do not limit the order of implementation of each step. For example, S100 and S500 may be performed simultaneously, instead of performing S100 to S400 sequentially before performing S500. As another example, after S400 and S500 are performed, S600 is performed according to the results of S400 and S500. This conversion chain from gray scale to luminance, gamma voltage and data voltage is sketched below.
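The following is a minimal sketch of the S500-S600 conversion chain for one red gray scale coordinate, under the assumptions used in this description: a gamma curve of 2.2, a maximum output luminance of 100 nit, V = sqrt(luminance), and the piecewise division values 0.0098 V and 0.0175 V quoted above. The function names are illustrative, not from the disclosure.

```python
# S500-S600: gray scale -> luminance -> gamma voltage -> data voltage.

MAX_LUMINANCE_NIT = 100.0  # assumed panel maximum

def gray_to_luminance(gray, total=1024, gamma=2.2):
    return MAX_LUMINANCE_NIT * (gray / total) ** gamma   # S500

def luminance_to_gamma_voltage(luminance_nit):
    return luminance_nit ** 0.5                          # V_R = (L_R)^0.5

def gamma_voltage_to_data(v):
    division = 0.0098 if v <= 2.5 else 0.0175            # S600 piecewise division value
    return round(v / division)

v = luminance_to_gamma_voltage(gray_to_luminance(255))
print(round(v, 2), gamma_voltage_to_data(v))  # ~2.17 V, as in the worked example below
```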
In some embodiments, as shown in FIG. 5, calculating the ratio of the data of gray scales in each gray scale range in S200 includes S210 to S240. In S210, a weight value corresponding to each gray scale range is set. It will be noted that, since the number of gray scale coordinates in each gray scale range may be different, or the grayscale values of the gray scale coordinates in each gray scale range may be different, it is necessary to set the weight value corresponding to each gray scale range according to the gray scale ranges actually divided. In S220, a reference quantity of the data of gray scales in each gray scale range is counted. The reference quantity may have different meanings in different examples, and may be selectively set according to actual needs. In some examples, the reference quantity may be the number of the data of gray scales in each gray scale range. For example, the 1024 gray scale coordinates in the total gray scale range are divided into three gray scale ranges. The three gray scale ranges are the low gray scale range (which includes, for example, 0 gray scale to 255 gray scale), the medium gray scale range (which includes, for example, 256 gray scale to 767 gray scale), and the high gray scale range (which includes, for example, 768 gray scale to 1023 gray scale). In the low gray scale range, the reference quantity is the number of data of gray scales (i.e., gray scale coordinates) in the low gray scale range, i.e., 256. In the medium gray scale range, the reference quantity is the number of data of gray scales (i.e., gray scale coordinates) in the medium gray scale range, i.e., 512. In the high gray scale range, the reference quantity is the number of data of gray scales (i.e., gray scale coordinates) in the high gray scale range, i.e., 256. In some other examples, the reference quantity may be a sum of the grayscale values of the data of gray scales in each gray scale range. For example, in the case where the 1024 gray scale coordinates in the total gray scale range are still divided into the three gray scale ranges, in the low gray scale range, the reference quantity is a sum SUM_L of the grayscale values of the data of gray scales (i.e., the grayscale values of the gray scale coordinates) in the low gray scale range (i.e., SUM_L = 0+1+2+…+255); in the medium gray scale range, the reference quantity is a sum SUM_M of the grayscale values of the data of gray scales (i.e., the grayscale values of the gray scale coordinates) in the medium gray scale range (i.e., SUM_M = 256+257+258+…+767); and in the high gray scale range, the reference quantity is a sum SUM_H of the grayscale values of the data of gray scales (i.e., the grayscale values of the gray scale coordinates) in the high gray scale range (i.e., SUM_H = 768+769+770+…+1023). In S230, a weighted value of the data of gray scales in each gray scale range is calculated, and a sum of the weighted values corresponding to the gray scale ranges is calculated, according to the reference quantity of the data of gray scales in each gray scale range and the weight value corresponding to the gray scale range. Here, in the case where the reference quantity has different meanings, the manner of calculating the sum of the weighted values corresponding to the gray scale ranges may be different.
In some examples, in a case where the reference quantity is the number of the data of gray scales (i.e., gray scale coordinates) in each gray scale range, and the 1024 gray scale coordinates in the total gray scale range are divided into the three gray scale ranges, the reference quantity of the low gray scale range (which includes, for example, 0 gray scale to 255 gray scale) is 256, the reference quantity of the medium gray scale range (which includes, for example, 256 gray scale to 767 gray scale) is 512, and the reference quantity of the high gray scale range (which includes, for example, 768 gray scale to 1023 gray scale) is 256. In this case, a weight value of the low gray scale range may be set as a, a weight value of the medium gray scale range may be set as b, and a weight value of the high gray scale range may be set as c. Based on this, the product of the number of the gray scale coordinates in each gray scale range and the weight value corresponding to the gray scale range may be equal or approximately equal across the gray scale ranges. For example, the weight value a of the low gray scale range may be set to be equal to 1 (i.e., a = 1), the weight value b of the medium gray scale range may be set to be equal to 0.5 (i.e., b = 0.5), and the weight value c of the high gray scale range may be set to be equal to 1 (i.e., c = 1). In this case, the weighted value of the data of gray scales in each gray scale range is calculated. Thus, the weighted value of the data of gray scales in the low gray scale range is a×256, the weighted value of the data of gray scales in the medium gray scale range is b×512, and the weighted value of the data of gray scales in the high gray scale range is c×256. The sum of the weighted values corresponding to the three gray scale ranges is calculated, and the sum is (a×256 + b×512 + c×256). In some other examples, in a case where the reference quantity is the sum of the grayscale values of the data of gray scales (i.e., the grayscale values of the gray scale coordinates) in each gray scale range, and the 1024 gray scale coordinates in the total gray scale range are divided into the three gray scale ranges, the reference quantity of the low gray scale range (which includes, for example, 0 gray scale to 255 gray scale) is SUM_L, the reference quantity of the medium gray scale range (which includes, for example, 256 gray scale to 767 gray scale) is SUM_M, and the reference quantity of the high gray scale range (which includes, for example, 768 gray scale to 1023 gray scale) is SUM_H. In this case, the weight value of the low gray scale range may be set as a′, the weight value of the medium gray scale range may be set as b′, and the weight value of the high gray scale range may be set as c′. In general, SUM_L is less than SUM_M, which is less than SUM_H (i.e., SUM_L < SUM_M < SUM_H). Based on this, the weight value corresponding to each gray scale range is inversely related to the sum of the grayscale values of the gray scale coordinates in the gray scale range. For example, it may be set that b′ is less than a′ and greater than c′ (i.e., c′ < b′ < a′). In this case, the weighted value of the data of gray scales in each gray scale range is calculated. Thus, the weighted value of the data of gray scales in the low gray scale range is a′×SUM_L, the weighted value of the data of gray scales in the medium gray scale range is b′×SUM_M, and the weighted value of the data of gray scales in the high gray scale range is c′×SUM_H. The sum of the weighted values corresponding to the three gray scale ranges is calculated, and the sum is (a′×SUM_L + b′×SUM_M + c′×SUM_H).
In S240, the ratio of the data of gray scales in each gray scale range is calculated according to the weighted value of the data of gray scales in each gray scale range and the sum of the weighted values. For example, in the case where the reference quantity is the number of the data of gray scales in each gray scale range, and the 1024 gray scale coordinates in the total gray scale range are divided into the three gray scale ranges, the ratio of the data of gray scales in the low gray scale range may be R_L, where R_L = (a×256)/(a×256 + b×512 + c×256); the ratio of the data of gray scales in the medium gray scale range may be R_M, where R_M = (b×512)/(a×256 + b×512 + c×256); and the ratio of the data of gray scales in the high gray scale range may be R_H, where R_H = (c×256)/(a×256 + b×512 + c×256). For example, in the case where the reference quantity is the sum of the grayscale values of the data of gray scales in each gray scale range, and the 1024 gray scale coordinates in the total gray scale range are divided into the three gray scale ranges, the ratio of the data of gray scales in the low gray scale range may be R_L′, where R_L′ = (a′×SUM_L)/(a′×SUM_L + b′×SUM_M + c′×SUM_H); the ratio of the data of gray scales in the medium gray scale range may be R_M′, where R_M′ = (b′×SUM_M)/(a′×SUM_L + b′×SUM_M + c′×SUM_H); and the ratio of the data of gray scales in the high gray scale range may be R_H′, where R_H′ = (c′×SUM_H)/(a′×SUM_L + b′×SUM_M + c′×SUM_H). In some embodiments, as shown in FIG. 6, adjusting the division value of the gamma voltage range corresponding to each gray scale range of the gamma voltage curve according to the calculated ratios so that the division value of the gamma voltage range corresponding to the gray scale range with the maximum ratio is less than the division value of the gamma voltage range corresponding to any remaining gray scale range in S300 includes S310a to S330a. In S310a, the gray scale range with the maximum ratio is determined from the calculated ratios. After the ratio of the data of gray scales in each gray scale range is calculated in S240, the ratios are compared, so that the gray scale range with the maximum ratio may be determined. In an example where the 1024 gray scale coordinates in the total gray scale range are divided into the low gray scale range (which includes, for example, 0 gray scale to 255 gray scale), the medium gray scale range (which includes, for example, 256 gray scale to 767 gray scale), and the high gray scale range (which includes, for example, 768 gray scale to 1023 gray scale), and the low gray scale range is determined to have the maximum ratio, it is assumed that V1 equals 0 V, V9 equals 16 V, and the output luminance of the display apparatus ranges from 0 nit to 100 nit. The weighted ratio calculation of S210 to S240 is sketched below.
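The following is a minimal sketch of S210-S240 using the count-based reference quantity and the example weights a = 1, b = 0.5, c = 1 given above; the function names are illustrative, and the disclosure defines only the arithmetic.

```python
# S210-S240: weighted per-range ratios using the count-based reference quantity.

WEIGHTS = {"low": 1.0, "medium": 0.5, "high": 1.0}           # S210
REFERENCE_COUNTS = {"low": 256, "medium": 512, "high": 256}  # S220

def weighted_ratios(weights, reference):
    weighted = {k: weights[k] * reference[k] for k in weights}   # S230
    total = sum(weighted.values())
    return {k: weighted[k] / total for k in weighted}            # S240

print(weighted_ratios(WEIGHTS, REFERENCE_COUNTS))
# {'low': 0.333..., 'medium': 0.333..., 'high': 0.333...}
# With a = c = 1 and b = 0.5 the weighted contributions of the three ranges are
# equalized, so any imbalance measured on an actual frame reflects its content.
```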
In S320a, reference gamma voltages of two end points corresponding to gray scale coordinates of two end points in the gray scale range with the maximum ratio are calculated. For example, for the low gray scale range, in an ideal condition, the output luminance corresponding to the gray scale coordinates of the two end points of the gray scale range from 0 gray scale to 255 gray scale is calculated first, and then the gamma voltages corresponding to the gray scale coordinates of the two end points of the gray scale range from 0 gray scale to 255 gray scale are calculated. In some embodiments, as shown in FIG. 7, S320a includes S321a to S323a. In S321a, a luminance range that the display apparatus is capable of outputting is obtained. In S322a, reference luminance of two end points corresponding to the gray scale coordinates of two end points in the gray scale range with the maximum ratio are calculated according to the luminance range, the number of the gray scale coordinates in the gray scale range with the maximum ratio and the total number of gray scale coordinates in the total gray scale range. In an example where the data of gray scales is the data of red gray scales and the gray scale range with the maximum ratio is the low gray scale range (which includes, for example, 0 gray scale to 255 gray scale), when the reference gamma voltages of the two end points corresponding to the gray scale coordinates of the two end points of the data of red gray scales in the low gray scale range are calculated, the luminance of the two end points corresponding to the gray scale coordinates of the two end points (i.e., 0 gray scale and 255 gray scale) may be obtained first, and thus the luminance range that the display apparatus is capable of outputting is obtained. For example, the total number of gray scale coordinates in the total gray scale range is 1024, and the number of gray scale coordinates in the low gray scale range is 256. According to the gamma curve of 2.2 and the above conversion formula of the gray scale and the luminance (e.g., L_R = (G_R/1024)^2.2), the luminance of the end point corresponding to 0 gray scale is 0 nit, and the luminance of the other end point corresponding to 255 gray scale is 4.74 nit (i.e., 100×(256/1024)^2.2 = 4.74, where two decimal places are reserved for the result). Therefore, in the low gray scale range, the luminance range that the display apparatus is capable of outputting is between 0 nit and 4.74 nit. In S323a, voltages required for the reference luminance of the two end points are calculated according to the reference luminance of the two end points, and the two calculated voltages serve as the reference gamma voltages of the two end points corresponding to the gray scale coordinates of the two end points in the gray scale range with the maximum ratio. For example, according to the above conversion formula of the luminance and the voltage (e.g., V_R = (L_R)^0.5), the reference gamma voltage of the end point corresponding to the 0 gray scale is 0 V, and the reference gamma voltage of the other end point corresponding to the 255 gray scale is 2.17 V (i.e., (4.74)^0.5 = 2.17, where two decimal places are reserved for the result). In S330a, the division value of the gamma voltage range corresponding to the gray scale range with the maximum ratio is calculated according to a difference between the reference gamma voltages of the two end points corresponding to the gray scale range with the maximum ratio and a difference between the gray scale coordinates of the two end points in the gray scale range with the maximum ratio.
In an example where the low gray scale range has the maximum ratio, the difference between the gray scale coordinates of the two end points in the low gray scale range (i.e., 0 gray scale and 255 gray scale) is 255, and the difference between the reference gamma voltages of the two end points (i.e., 0 V and 2.17 V) is 2.17 V; it may thus be obtained that the division value of the gamma voltage range (i.e., a range of 0 V to 2.17 V) corresponding to the low gray scale range is 0.0085 V (i.e., 2.17 ÷ 256 ≈ 0.0085, where two significant figures are reserved for the result). For comparison, in the related art, as shown in FIG. 1, the gamma voltage curve is a linear curve. In a case where the horizontal coordinate includes 1024 gray scales, the vertical coordinate includes gamma voltages V1, V2, …, V8 and V9, and V1 equals 0 V, V2 equals 2 V, …, V9 equals 16 V, the minimum gray scale output voltage (i.e., the division value of the vertical coordinate) is 0.0156 V (i.e., 16 ÷ 1023 ≈ 0.0156). Thus, it can be seen that the division values of the gamma voltages (i.e., the vertical coordinate) of the gamma voltage curve of the display apparatus in the related art are all 0.0156 V. In a case where the at least one frame of image to be displayed by the display apparatus in the related art is an image in which a certain gray scale range is mainly displayed, the display apparatus has a relatively low capability of subdividing voltages in the certain gray scale range that needs to be mainly displayed, so that image distortion easily occurs, resulting in a poor display effect. However, in the embodiments of the present disclosure, it is possible to determine the gray scale range mainly displayed in the at least one frame of image to be displayed, and to subdivide the gamma voltages corresponding to the gray scale range mainly displayed after that gray scale range is determined, i.e., to reduce the division value of the gamma voltages corresponding to the gray scale range mainly displayed. For example, the division value of the gamma voltages corresponding to the gray scale range mainly displayed in some embodiments of the present disclosure described above is 0.0085 V, which is less than the division value (i.e., 0.0156 V) of the gamma voltages in the related art. In this way, the voltage subdivision capability of the gray scale range mainly displayed is effectively improved, the image presentation capability of the gray scale range mainly displayed is improved, and thus the image distortion is avoided. As a result, the display quality of the image is improved. For example, it can be seen from FIG. 12 that the image is an image in a low gray scale state, and the ratio of the low gray scale range is the maximum. In this case, the gamma voltage curve (as shown in FIG. 13) may be adjusted so that the division value of the gamma voltage range corresponding to the low gray scale range is relatively small, which may avoid the image distortion, thereby improving the image presentation capability of the low gray scale range. For example, it can be seen from FIG. 15 that the image is an image in a high gray scale state, and the ratio of the high gray scale range is the maximum. In this case, the gamma voltage curve (as shown in FIG. 16) may be adjusted so that the division value of the gamma voltage range corresponding to the high gray scale range is relatively small, which may avoid the image distortion, thereby improving the image presentation capability of the high gray scale range.
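A minimal numeric check of S321a-S330a, using the assumptions above (100 nit maximum luminance, gamma 2.2, V = sqrt(L)), reproduces the division values worked out in this description:

```python
# S321a-S330a: per-range division value of the adjusted gamma voltage curve.

def division_value(low_gray, high_gray, total=1024, max_nit=100.0):
    # Endpoint luminance (S321a-S322a); the worked examples use (high+1)/total.
    l_low = max_nit * (low_gray / total) ** 2.2
    l_high = max_nit * ((high_gray + 1) / total) ** 2.2
    # Endpoint reference gamma voltages (S323a): V = sqrt(L).
    v_low, v_high = l_low ** 0.5, l_high ** 0.5
    # Division value (S330a): voltage span over the number of coordinates.
    return (v_high - v_low) / (high_gray - low_gray + 1)

ranges = {"low": (0, 255), "medium": (256, 767), "high": (768, 1023)}
print({k: round(division_value(*b), 4) for k, b in ranges.items()})
# {'low': 0.0085, 'medium': 0.01, 'high': 0.0106}  (0.0106 ~ 0.011 at two
# significant figures); all finer than the related art's 0.0156 = 16/1023,
# and finest in the range with the maximum ratio.
```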
In some other embodiments, for the adjusted gamma voltage curve, a division value of a gamma voltage range corresponding to a gray scale range with a secondary maximum ratio is less than a division value of a gamma voltage range corresponding to each gray scale range except the gray scale range with the maximum ratio and the gray scale range with the secondary maximum ratio, and is greater than the division value of the gamma voltage range corresponding to the gray scale range with the maximum ratio. Based on this, as shown in FIG. 8, adjusting the division value of the gamma voltage range corresponding to each gray scale range of the gamma voltage curve according to the calculated ratios so that the division value of the gamma voltage range corresponding to the gray scale range with the maximum ratio is less than the division value of the gamma voltage range corresponding to any remaining gray scale range in S300 further includes S310b to S330b. In S310b, a gray scale range with the secondary maximum ratio is determined from the calculated ratios. After the ratio of the data of gray scales in each gray scale range is calculated in S240, the ratios are compared, so that the gray scale range with the secondary maximum ratio may be determined. In an example where the gray scale coordinates in the total gray scale range are divided into three gray scale ranges (i.e., the low gray scale range, the medium gray scale range and the high gray scale range), and the low gray scale range has the maximum ratio, the gray scale range with the secondary maximum ratio is determined on this basis. The gray scale range with the secondary maximum ratio may include one gray scale range (such as the medium gray scale range or the high gray scale range) or multiple gray scale ranges (such as the medium gray scale range and the high gray scale range), which is not limited in the embodiments of the present disclosure. In S320b, reference gamma voltages of two end points corresponding to gray scale coordinates of two end points in the gray scale range with the secondary maximum ratio are calculated. For example, the reference gamma voltages of the two end points corresponding to the gray scale coordinates of the two end points in the gray scale range with the secondary maximum ratio may be calculated according to the steps of the method described in S321a to S323a. In an example where the data of gray scales is the data of red gray scales and the gray scale range with the secondary maximum ratio is the medium gray scale range (which includes, for example, 256 gray scale to 767 gray scale), when the reference gamma voltages of the two end points corresponding to the gray scale coordinates of the two end points of the data of red gray scales in the medium gray scale range are calculated, the luminance of the two end points corresponding to the gray scale coordinates of the two end points (i.e., 256 gray scale and 767 gray scale) may be obtained first, and thus the luminance range that the display apparatus is capable of outputting is obtained. For example, the total number of the gray scale coordinates in the total gray scale range is 1024, and the number of gray scale coordinates in the medium gray scale range is 512.
According to the gamma curve of 2.2 and the above conversion formula of the gray scale and the luminance (e.g., L_R = (G_R/1024)^2.2), it may be obtained that the luminance of the end point corresponding to the 256 gray scale is 4.78 nit (i.e., 100×(257/1024)^2.2 = 4.78), and the luminance of the other end point corresponding to the 767 gray scale is 53.10 nit (i.e., 100×(768/1024)^2.2 = 53.10, where two decimal places are reserved for the result). Therefore, in the medium gray scale range, the luminance range that the display apparatus is capable of outputting is 4.78 nit to 53.10 nit. According to the above conversion formula of the luminance and the voltage (e.g., V_R = (L_R)^0.5), the reference gamma voltage of the end point corresponding to the 256 gray scale is 2.19 V (i.e., (4.78)^0.5 = 2.19), and the reference gamma voltage of the other end point corresponding to the 767 gray scale is 7.29 V (i.e., (53.10)^0.5 = 7.29, where two decimal places are reserved for the result). In S330b, the division value of the gamma voltage range corresponding to the gray scale range with the secondary maximum ratio is calculated according to a difference between the reference gamma voltages of the two end points corresponding to the gray scale range with the secondary maximum ratio and a difference between the gray scale coordinates of the two end points in the gray scale range with the secondary maximum ratio. For example, the difference between the gray scale coordinates of the two end points in the medium gray scale range (i.e., 256 gray scale and 767 gray scale) is 512, and the difference between the reference gamma voltages of the two end points (i.e., 2.19 V and 7.29 V) is 5.1 V; therefore, it may be obtained that the division value of the gamma voltage range (i.e., a range of 2.19 V to 7.29 V) corresponding to the medium gray scale range is 0.010 V (i.e., 5.1 ÷ 512 ≈ 0.010, where two significant figures are reserved for the result). For example, in the above embodiments, the gray scale range except the gray scale range with the maximum ratio and the gray scale range with the secondary maximum ratio is the high gray scale range. The division value of the gamma voltage range corresponding to the high gray scale range (i.e., a range of 768 gray scale to 1023 gray scale) is calculated according to the steps of the method described in S321a to S323a. It may be obtained that the luminance of the end point corresponding to the 768 gray scale is 53.26 nit (i.e., 100×(769/1024)^2.2 = 53.26), and the luminance of the other end point corresponding to the 1023 gray scale is 100 nit (i.e., 100×(1024/1024)^2.2 = 100, where two decimal places are reserved for the result). Therefore, in the high gray scale range, the luminance range that the display apparatus is capable of outputting is 53.26 nit to 100 nit. According to the above conversion formula of the luminance and the voltage (e.g., V_R = (L_R)^0.5), the reference gamma voltage of the end point corresponding to the 768 gray scale is 7.30 V (i.e., (53.26)^0.5 = 7.30, where two decimal places are reserved for the result), and the reference gamma voltage of the other end point corresponding to the 1023 gray scale is 10 V (i.e., (100)^0.5 = 10).
The difference between the gray scale coordinates of the two end points in the high gray scale range (i.e., the 768 gray scale and the 1023 gray scale) is 256, and the difference between the reference gamma voltages of the two end points (i.e., 7.30 V and 10 V) is 2.7 V; therefore, it may be obtained that the division value of the gamma voltage range (i.e., a range of 7.30 V to 10 V) corresponding to the high gray scale range is 0.011 V (i.e., 2.7 ÷ 256 ≈ 0.011, where two significant figures are reserved for the result). Thus, it can be seen that, for the adjusted gamma voltage curve, the division value of the gamma voltage range corresponding to the medium gray scale range with the secondary maximum ratio is less than the division value of the gamma voltage range corresponding to the high gray scale range, and is greater than the division value of the gamma voltage range corresponding to the low gray scale range. In this way, the division value of the gamma voltage range corresponding to each gray scale range may be better adjusted according to whether the gray scale range is mainly displayed in the display image, so that the voltage subdivision capability of the gray scale range mainly displayed is improved, and the display quality of the display image is improved. In some other embodiments, as shown in FIG. 9, adjusting the division value of the gamma voltage range corresponding to each gray scale range of the gamma voltage curve according to the calculated ratios so that the division value of the gamma voltage range corresponding to the gray scale range with the maximum ratio is less than the division value of the gamma voltage range corresponding to any remaining gray scale range in S300 further includes S310c to S320c. In S310c, reference gamma voltages of two end points corresponding to gray scale coordinates of two end points in a continuous gray scale range are calculated. In some examples, in a case where only the division value of the gamma voltage range corresponding to the gray scale range with the maximum ratio is calculated, the continuous gray scale range is a continuous gray scale range composed of the gray scale ranges except the gray scale range with the maximum ratio in the plurality of gray scale ranges. In an example where the total gray scale range is divided into the low gray scale range, the medium gray scale range and the high gray scale range, in a case where the gray scale range with the maximum ratio is the low gray scale range, the continuous gray scale range may be composed of the medium gray scale range and the high gray scale range. In some other examples, in a case where the division value of the gamma voltage range corresponding to the gray scale range with the maximum ratio and the division value of the gamma voltage range corresponding to the gray scale range with the secondary maximum ratio are calculated, the continuous gray scale range is a continuous gray scale range composed of the gray scale ranges except the gray scale range with the maximum ratio and the gray scale range with the secondary maximum ratio in the plurality of gray scale ranges.
In an example where the total gray scale range is divided into four gray scale ranges (i.e., the low gray scale range, the medium gray scale range, the secondary high gray scale range and the high gray scale range), in a case where the gray scale range with the maximum ratio is the low gray scale range, and the gray scale range with the secondary maximum ratio is the medium gray scale range, the continuous gray scale range may be composed of the secondary high gray scale range and the high gray scale range. In S320c, a division value of a gamma voltage range corresponding to the continuous gray scale range is calculated according to a difference between the reference gamma voltages of the two end points corresponding to the continuous gray scale range and a difference between the gray scale coordinates of the two end points in the continuous gray scale range. For example, the division value of the gamma voltage range corresponding to the continuous gray scale range may be calculated according to the steps of the method described in S321a to S323a, which will not be repeated here. In some embodiments, the display apparatus 1000 provided in the embodiments of the present disclosure may be an active light-emitting display apparatus, such as the OLED display apparatus; alternatively, the display apparatus 1000 may be a passive light-emitting display apparatus, such as a liquid crystal display (LCD) apparatus, which is not limited in the embodiments of the present disclosure. For example, in the case where the display apparatus 1000 is the active light-emitting display apparatus (e.g., the OLED display apparatus), a total gamma voltage range composed of the gamma voltage ranges corresponding to the plurality of gray scale ranges may be a range that is obtained by subtracting the compensation voltage range from an ideal data voltage range that the display apparatus is capable of providing. The compensation voltage range is a compensation voltage range required for compensating the transistors and/or the active light-emitting devices (e.g., the OLED light-emitting devices) of the display apparatus. Some embodiments of the present disclosure also provide a timing controller 100. As shown in FIG. 17, the timing controller 100 includes a data analysis circuit 1, a ratio calculation circuit 2 and a gamma voltage calculation circuit 3. In some examples, the data analysis circuit 1 is configured to divide a total gray scale range of a gamma voltage curve of the display apparatus 1000 to obtain a plurality of gray scale ranges, and obtain data of gray scales of at least one frame of image to be displayed by the display apparatus 1000. For the process of dividing the total gray scale range of the gamma voltage curve to obtain the plurality of gray scale ranges, and the process of obtaining the data of gray scales of the at least one frame of image to be displayed by the display apparatus 1000, reference may be made to the relevant exemplary descriptions in the above embodiments, which will not be repeated here. In some examples, the ratio calculation circuit 2 is coupled to the data analysis circuit 1. The ratio calculation circuit 2 is configured to calculate a ratio of the data of gray scales in each gray scale range. For the process of calculating the ratio of the data of gray scales in each gray scale range, reference may be made to the relevant exemplary descriptions in the above embodiments, which will not be repeated here. The cooperation of these circuits is sketched below.
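The following is a purely illustrative functional sketch (all names assumed) of the division of labor inside the timing controller 100: the data analysis circuit 1 fixes the ranges and collects the frame's gray data, the ratio calculation circuit 2 produces per-range ratios, and the gamma voltage calculation circuit 3 assigns the finest division value to the dominant range. In hardware these would be logic blocks, not Python functions, and the constant division values here are stand-ins for the endpoint-luminance calculation described above.

```python
# Functional model of the timing controller's three circuits.

RANGES = {"low": range(0, 256), "medium": range(256, 768), "high": range(768, 1024)}

def ratio_circuit(grays):
    """Ratio calculation circuit 2: per-range occupancy of one frame."""
    counts = {k: sum(g in r for g in grays) for k, r in RANGES.items()}
    total = max(len(grays), 1)
    return {k: c / total for k, c in counts.items()}

def gamma_voltage_circuit(ratios, base_division=0.0156, fine_division=0.0085):
    """Gamma voltage calculation circuit 3: shrink the dominant range's step."""
    dominant = max(ratios, key=ratios.get)
    return {k: (fine_division if k == dominant else base_division) for k in ratios}

print(gamma_voltage_circuit(ratio_circuit([5, 17, 40, 300, 900])))
# {'low': 0.0085, 'medium': 0.0156, 'high': 0.0156}
```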
In some examples, the gamma voltage calculation circuit 3 is coupled to the ratio calculation circuit 2. The gamma voltage calculation circuit 3 is configured to: adjust a division value of a gamma voltage range corresponding to each gray scale range of the gamma voltage curve according to the calculated ratios, so that a division value of a gamma voltage range corresponding to a gray scale range with a maximum ratio is less than a division value of a gamma voltage range corresponding to any remaining gray scale range; and output gamma voltages corresponding to the data of gray scales of the at least one frame of image to be displayed according to the adjusted gamma voltage curve. For the working process of the gamma voltage calculation circuit 3, reference may be made to the relevant exemplary descriptions in the above embodiments, which will not be repeated here. Beneficial effects that may be achieved by the timing controller 100 provided in the embodiments of the present disclosure are the same as the beneficial effects that may be achieved by the method for improving image display quality provided in the above embodiments, which will not be repeated here. In some embodiments, as shown in FIG. 17, the timing controller 100 further includes a luminance conversion circuit 4 and a data voltage output circuit 5. The luminance conversion circuit 4 is configured to convert each of the data of gray scales of the at least one frame of image to be displayed into luminance data. The data voltage output circuit 5 is coupled to the luminance conversion circuit 4 and the gamma voltage calculation circuit 3. The data voltage output circuit 5 is configured to output a data voltage corresponding to each of the data of gray scales of the at least one frame of image to be displayed according to the gamma voltage and the luminance data corresponding to each of the data of gray scales of the at least one frame of image to be displayed. In some embodiments, as shown in FIG. 18, the timing controller 100 further includes a timing conversion circuit 6 configured to convert a timing control signal Timing into a source control signal SCS and a gate control signal GCS. The display luminance of the display apparatus in some embodiments of the present disclosure is compared with that of the related art, which is schematically described below. Table 1 is a comparison table of the display luminance of the display apparatus in the related art and the display luminance of the display apparatus in some embodiments of the present disclosure.

TABLE 1

Gray scale | Luminance 1 (nit) | The related art: Voltage (V) | The related art: Luminance 2 (nit) | Present disclosure: Voltage (V) | Present disclosure: Luminance 2 (nit)
1 | 0.011 | 0.01 | 0.011 | 0.01 | 0.01
2 | 0.041 | 0.01 | 0.012 | 0.02 | 0.04
3 | 0.092 | 0.04 | 0.043 | 0.03 | 0.09
4 | 0.164 | 0.16 | 0.164 | 0.04 | 0.16

In Table 1, the gray scale is the data of gray scale; luminance 1 is the luminance data converted from the data of gray scale, which may conform to the gamma curve of 2.2; the voltage is a source driving voltage output by the source driver 300 according to a data voltage output by the timing controller 100, the data voltage being output by the timing controller 100 to the source driver 300, and the source driving voltage being output by the source driver 300 to the display panel 200 through the DL; and luminance 2 is the actual luminance data output by the OLED light-emitting device (i.e., the luminance data output by the display apparatus) after a current generated by the pixel driving circuit of the display apparatus is output to the OLED light-emitting device. As Table 1 shows, in the related art the 1 gray scale and the 2 gray scale share the output voltage 0.01 V and thus yield nearly indistinguishable luminance (0.011 nit and 0.012 nit), whereas in some embodiments of the present disclosure each gray scale receives a distinct voltage and a distinct luminance. It will be noted that, as shown in FIG. 1, in the related art, the entire gamma voltage curve is the linear curve.
In an example where the total gray scale range of the gamma voltage curve is 0 to 1023 (for example, V1 equals 0 V, V2 equals 2 V, …, V9 equals 16 V), the minimum output voltage is 0.0156 V (i.e., 16 ÷ 1023 ≈ 0.0156). As shown in FIG. 13, the entire gamma voltage curve in the embodiments of the present disclosure is not a linear curve. In an example where the total gray scale range of the gamma voltage curve is 0 to 1023, and the 1024 gray scale coordinates in the total gray scale range are divided into the low gray scale range (which includes, for example, 0 gray scale to 255 gray scale), the medium gray scale range (which includes, for example, 256 gray scale to 767 gray scale) and the high gray scale range (which includes, for example, 768 gray scale to 1023 gray scale), the output luminance of the display apparatus may range from 0 nit to 400 nit, and the luminance of the end point corresponding to the 255 gray scale is 18.95 nit (i.e., 400×(256/1024)^2.2 = 18.95). In this case, the reference gamma voltage of the end point corresponding to the 255 gray scale is 2.5 V (i.e., (6.25)^0.5 = 2.5), and the division value of the corresponding gamma voltage range (i.e., a range of 0 V to 2.5 V) is 0.00976 V (i.e., 2.5 ÷ 256 ≈ 0.00976). It can be seen from Table 1 that the luminance data output by the display apparatus in the embodiments of the present disclosure is substantially the same as the luminance data converted from the data of gray scale. In this way, in the embodiments of the present disclosure, by subdividing the division values of the gamma voltage ranges corresponding to different gray scale ranges of the at least one frame of image to be displayed by the display apparatus, the gamma voltages and the data voltages corresponding to the output data of gray scales are controlled, and thus it may be possible to avoid the gray scale loss of different gray scale ranges and the image distortion, and to improve the capability for presenting the display image in different gray scale ranges. As a result, the image display quality of the display apparatus is improved. Some embodiments of the present disclosure provide a computer-readable storage medium. The computer-readable storage medium has stored thereon computer program instructions that, when run on a processor, cause the processor to perform one or more steps of the method for improving image display quality as described in any one of the above embodiments. The computer-readable storage medium may be, for example, a non-transitory computer-readable storage medium. Some embodiments of the present disclosure provide a computer program product. The computer program product includes computer program instructions that, when executed on a computer, cause the computer to perform one or more steps of the method for improving image display quality as described in any one of the above embodiments. Some embodiments of the present disclosure provide a computer program. When executed on a computer, the computer program causes the computer to perform one or more steps of the method for improving image display quality as described in any one of the above embodiments. The computer-readable storage medium, the computer program product and the computer program have the same beneficial effects as the method for improving image display quality described in the embodiments of the present disclosure, which will not be repeated here. The foregoing descriptions are merely specific implementations of the present disclosure, but the protection scope of the present disclosure is not limited thereto.
Any changes or replacements that a person skilled in the art could conceive of within the technical scope of the present disclosure shall be included in the protection scope of the present disclosure. Therefore, the protection scope of the present disclosure shall be subject to the protection scope of the claims. | 61,445 |
11862078 | DETAILED DESCRIPTION Hereinafter, the present disclosure is described more specifically with reference to the drawings. In the drawings, portions not related to the description are not illustrated in order to describe the present disclosure clearly and briefly. The same reference numeral is used to denote the same or a very similar part throughout the specification. The suffixes of elements used in the following description, such as “module” and “unit”, are assigned in consideration only of the ease of writing this specification; in themselves, they do not carry distinct meanings or roles. Accordingly, “module” and “unit” may be used interchangeably. It is to be understood that in this application, a term such as “include” or “have” is intended to indicate the existence of a characteristic, number, step, operation, element, or component, or a combination of them, in the specification, and does not exclude the existence or possible addition of one or more other characteristics, numbers, steps, operations, elements, or components, or a combination of them. Furthermore, in this specification, terms such as “first” and “second” may be used to describe various elements, but these elements are not limited by such terms. Such terms are used only to distinguish one element from another element. FIGS. 1a to 1c are diagrams illustrating an image display apparatus including a rollable display according to an embodiment of the present disclosure. Referring to FIGS. 1a to 1c, the image display apparatus 100 may be one that processes and outputs an image. The image display apparatus 100 is not particularly limited, and may be any apparatus capable of outputting a screen corresponding to an image signal, such as a TV, a notebook computer, or a monitor. The image display apparatus 100 may receive a broadcast signal, perform signal processing on the broadcast signal, and output the signal-processed broadcast image. When the image display apparatus 100 receives a broadcast signal, the image display apparatus 100 may correspond to a broadcast reception apparatus. The image display apparatus 100 may receive the broadcast signal wirelessly through an antenna, or may receive the broadcast signal in a wired manner through a cable. For example, the image display apparatus 100 may receive a terrestrial broadcast signal, a satellite broadcast signal, a cable broadcast signal, an Internet Protocol Television (IPTV) broadcast signal, etc. The image display apparatus 100 may include a display 20 and a housing 30. The housing 30 may include an internal space. At least a part of the display 20 may be disposed within the housing 30. An opening part 101 may be formed in one surface of the housing 30. At least a part of the display 20 may be exposed to the outside of the housing 30 through the opening part 101. In this case, the degree to which at least a part of the display 20 is exposed to the outside of the housing 30 may be adjusted as necessary. The display 20 may display an image. For example, the display 20 may display an image through at least a partial area that belongs to the entire area of the display 20 and that is exposed through the opening part 101. The display 20 may be a rollable display including a flexible display panel. For example, the display 20 may include an organic light-emitting panel consisting of OLEDs. A roller 143 (FIG. 11a) on which the rollable display is wound and a motor (not illustrated) that rotates the roller may be disposed within the housing 30.
In this case, the display 20 may be rolled up or rolled down as the roller is rotated, and the size of the area that belongs to the entire area of the display 20 and that is exposed to the outside of the housing 30 may be adjusted through the rolling up or down of the display 20. The image display apparatus 100 may adjust the size of the area that belongs to the entire area of the display 20 and that is exposed to the outside of the housing 30 depending on a mode. As in FIG. 1a, in the image display apparatus 100, the display 20 may be wound on the roller 143 disposed within the housing 30 so that the display 20 is not exposed to the outside of the housing 30 through the opening part 101. For example, when the power of the image display apparatus 100 is off or in a zero view mode, the display 20 may not be exposed to the outside of the housing 30 through the opening part 101, and power may not be supplied to the display 20. In this case, the zero view mode may mean a mode in which the display 20 is not exposed to the outside of the housing 30 and only some elements (e.g., the audio output unit 285 in FIG. 12) of the image display apparatus 100 operate. Meanwhile, as in FIG. 1b, the display 20 is rolled up as the roller 143 is rotated, and thus an area that belongs to the entire area of the display 20 and that corresponds to a first height h1 may be exposed to the outside of the housing 30. In this case, the area corresponding to the first height h1 may correspond to a partial area of the display 20. For example, in a line view mode, the image display apparatus 100 may display an image through the area that belongs to the entire area of the display 20, that is exposed to the outside of the housing 30, and that corresponds to the first height h1. In this case, the line view mode may mean a mode in which only some of the entire area of the display 20 is exposed to the outside of the housing 30. Meanwhile, as in FIG. 1c, the display 20 is rolled up as the roller 143 is rotated, and thus an area that belongs to the entire area of the display 20 and that corresponds to a second height h2 may be exposed to the outside of the housing 30. In this case, the area corresponding to the second height h2 may correspond to the entire area of the display 20. For example, in a full view mode, the image display apparatus 100 may display an image through the entire area of the display 20 that is exposed to the outside of the housing 30. In this case, the full view mode may mean a mode in which the entire area of the display 20 is exposed to the outside of the housing 30; the mode-dependent exposure is sketched below.
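A minimal sketch of the mode-dependent exposure follows. The mode names come from this description, but the heights and the mapping are assumptions for illustration; the actual apparatus drives the motor that winds the display on the roller 143, whereas this only models the target exposed height.

```python
# Target exposed height of the rollable display per view mode (illustrative).

from enum import Enum

class ViewMode(Enum):
    ZERO = "zero"   # display fully wound inside the housing, panel unpowered
    LINE = "line"   # only a strip of height h1 exposed through the opening part
    FULL = "full"   # entire display area of height h2 exposed

def target_exposed_height(mode, h1=0.12, h2=0.80):
    """Return the exposed display height in meters (h1 and h2 are assumed)."""
    return {ViewMode.ZERO: 0.0, ViewMode.LINE: h1, ViewMode.FULL: h2}[mode]

print(target_exposed_height(ViewMode.LINE))  # 0.12
```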
FIGS. 2 to 11 are diagrams illustrating examples of elements included in the image display apparatus according to various embodiments of the present disclosure.

Referring to FIG. 2, the display 20 may include a display panel 10 and a plate 15. The display panel 10 may be flexible. For example, the display panel 10 may be an organic light-emitting display panel including OLEDs. The display panel 10 may have a front surface on which an image is displayed and a rear surface opposite to the front surface. The front surface of the display panel 10 may be covered with a light transmissive material. For example, the light transmissive material may be synthetic resin or a film. The plate 15 may be fastened or attached to the rear surface of the display panel 10. The plate 15 may include a metal material. The plate 15 may be named a module cover 15, a cover 15, a display panel cover 15, a panel cover 15, an apron 15, etc.

Referring to FIG. 3, the plate 15 may include a plurality of segments 15c. A magnet 64 may be disposed within a recess 118 of the segment 15c. The recess 118 may be disposed in a surface of the segment 15c that faces the display panel 10, that is, in the front surface of each segment 15c. The magnet 64 may not protrude to the outside of the segment 15c because the magnet 64 is accommodated in the recess 118. Accordingly, the display panel 10 may remain flat without being crumpled even though the display panel comes into contact with the segment 15c.

Referring to FIG. 4, a plurality of magnets 64 may be disposed on a link 73. For example, at least one magnet 64 may be disposed on a first arm 73a, and at least one magnet 64 may be disposed on a second arm 73b. The plurality of magnets 64 may be spaced apart from each other.

Referring to FIG. 5, the magnet 64 may be disposed in a recessed part 321 formed in the link 73. The recessed part 321 may have a shape that is recessed into the link 73. The magnet 64 may be coupled with the link 73 through at least one screw 187. A width LHW by which the recessed part 321 is recessed into the link 73 may be the same as or greater than a thickness MGW of the magnet 64. If the thickness MGW of the magnet 64 is greater than the width LHW of the recessed part 321, the display panel 10 and the module cover 15 may not be closely attached to the link 73. In this case, the display panel 10 may be crumpled or may not be flat.

A panel protection part 97 may be disposed on the rear surface of the display panel 10. The panel protection part 97 can prevent damage to the display panel 10 attributable to friction with the module cover 15. The panel protection part 97 may include a metal material. The panel protection part 97 may be very thin; for example, the panel protection part 97 may have a thickness of about 0.1 mm. Attraction between the panel protection part 97 and the magnet 64 may occur because the panel protection part 97 includes the metal material. Accordingly, the module cover 15 disposed between the panel protection part 97 and the link 73 can be closely attached to the magnet 64 even though the module cover 15 does not include a metal material.

Referring to FIG. 6, the module cover 15 can be closely attached to the link 73 by an upper bar 75 on the upper side thereof and a guide bar 234 (refer to FIG. 11a) on the lower side thereof. A part of the link 73 between the upper bar 75 and the guide bar 234 may not be closely attached to the module cover 15. In other words, a central part of the link 73, near an arm joint 152, may not be closely attached to the module cover 15. In this case, a distance APRD1 or APLD2 between the module cover 15 and the link 73 may not be uniform, and the display panel 10 may be bent or crooked.

Referring to FIG. 7, if the magnet 64 is disposed in the recessed part 321 of the link 73, the module cover 15 may also be closely attached to the magnet 64 because the magnet 64 attracts the panel protection part 97. That is, the central part of the link 73 may be closely attached to the module cover 15.

Referring to FIG. 8, a bead 136 may be formed on the top of the segment 15b. The bead 136 may have a shape that is recessed into the segment 15b, that is, recessed in a −y axis direction. For example, the bead 136 may be formed by pressing the segment 15b. A plurality of beads 136 may be formed on the segment 15b and spaced apart from each other. The beads 136 can improve the stiffness of the segment 15b and can prevent the shape of the segment 15b from being deformed by an external impact.

Referring to FIG. 9, a source PCB 120 may be disposed over the module cover 15.
Upon rolling up or rolling down, a location of the source PCB 120 may be changed along with a movement of the module cover 15. An FFC cable 231 may be disposed at a central part of the module cover 15 with respect to a first direction. Alternatively, the FFC cable 231 may be disposed at both ends of the module cover 15 with respect to the first direction.

Referring to FIG. 10, a top case 167 may cover the source PCB 120 and the upper bar 75 in addition to the display panel 10 and the module cover 15. The upper bar 75 may have one surface coupled with the rear surface of the module cover 15 and the other surface coupled with the source PCB 120. The upper bar 75 may be fixed to the module cover 15, and may support the source PCB 120.

The bottom of the FFC cable 231 may be connected to a timing controller board 105 (refer to FIG. 11a) within the panel roller 143 (refer to FIG. 11a). The FFC cable 231 may be wound on or unwound from the panel roller 143 along with the display unit 20.

A part of the FFC cable 231 may be disposed between the display panel 10 and the module cover 15. The part of the FFC cable 231 that is disposed between the display panel 10 and the module cover 15 may be named a first part 231a. The first part 231a may be disposed in a recessed part 425 formed in a plurality of segments 15d. In other words, the first part 231a may be accommodated in the recessed part 425 that is formed by the plurality of segments 15d.

A part of the FFC cable 231 may penetrate a segment 15f. The part of the FFC cable 231 that penetrates the segment 15f may be named a second part 231b. The segment 15f may include a first hole 521a formed in the front surface and a second hole 521b formed in the rear surface. The first hole 521a and the second hole 521b may be connected to form one hole 521. The hole 521 may penetrate the segment 15f in a third direction. The second part 231b may pass through the hole 521. The hole 521 may be named a connection hole 521.

The top of the FFC cable 231 may be electrically connected to the source PCB 120. A part of the FFC cable 231 may be disposed on the rear surface of the module cover 15. The part of the FFC cable 231 that is disposed on the rear surface of the module cover 15 may be named a third part 231c. The third part 231c may be electrically connected to the source PCB 120. The third part 231c may be covered by the top case 167 and, accordingly, may not be exposed to the outside.

Referring to FIG. 11a, the FFC cable 231 may be connected to the timing controller board 105 mounted on the panel roller 143. A through hole 615 may be formed in the panel roller 143. The FFC cable 231 may be connected to the timing controller board 105 through the through hole 615. The through hole 615 may be disposed on one side of the panel roller 143, and may penetrate an outer circumference part of the panel roller 143. The FFC cable 231 may be connected to one side of the timing controller board 105 through the through hole 615. Although the FFC cable 231 is disposed on the outer circumference of the panel roller 143, the connection of the FFC cable 231 with the timing controller board 105 can be maintained due to the through hole 615. Accordingly, the FFC cable 231 may not be twisted even though the FFC cable is rotated along with the panel roller 143.

A part of the FFC cable 231 may be wound on the panel roller 143. The part of the FFC cable 231 that is wound on the panel roller 143 may be named a fourth part 231d. The fourth part 231d may come into contact with an outer circumference surface of the panel roller 143.
A part of the FFC cable 231 may pass through the through hole 615. The part of the FFC cable 231 that passes through the through hole 615 may be named a fifth part 231e. The bottom of the FFC cable 231 may be electrically connected to the timing controller board 105. A part of the FFC cable 231 may be disposed within the panel roller 143. The part of the FFC cable 231 that is disposed within the panel roller 143 may be named a sixth part 231f. The sixth part 231f may be electrically connected to the timing controller board 105.

Referring to FIGS. 11b and 11c, the module cover 15 may be coupled with the rear of the display panel 10. The module cover 15 may be wound on or unwound from the panel roller 143 together with the display panel 10. Referring to FIG. 11b, when the display panel 10 and the module cover 15 are wound on the panel roller 143 by themselves, the display panel 10 may be damaged because the front surface of the display panel 10 comes into contact with the rear surface of the module cover 15. Referring to FIG. 11c, when the display panel 10 and the module cover 15 are wound on the panel roller 143 along with a protection sheet 44, the protection sheet 44 may be disposed between the display panel 10 and the module cover 15. That is, damage to the display panel 10 can be prevented because the front surface of the display panel 10 does not come into contact with the rear surface of the module cover 15 due to the protection sheet 44 that comes into contact with the front surface of the display panel 10. For example, the protection sheet 44 may include non-woven fabric. For example, the protection sheet 44 may include a polyethylene foam material. For example, the thickness of the protection sheet 44 may be 0.3 to 0.5 mm.

FIG. 12 is an example of an internal block diagram of the image display apparatus according to an embodiment of the present disclosure.

Referring to FIG. 12, the image display apparatus 100 may include a broadcast reception unit 205, an external device interface unit 230, a network interface unit 235, a storage unit 240, a user input interface unit 250, a sensor unit 260, a controller 270, a driving unit 275, a display unit 280, the audio output unit 285, a power supply unit 290, and/or a heating part 295. The broadcast reception unit 205 may include a tuner unit 210 and a demodulation unit 220.

The tuner unit 210 may select a broadcast signal that corresponds to a channel selected by a user, or broadcast signals corresponding to all previously stored channels, among broadcast signals received through an antenna (not illustrated) or a cable (not illustrated). The tuner unit 210 may convert the selected broadcast signal into an intermediate frequency signal or a baseband image or voice signal. For example, the tuner unit 210 may convert the selected broadcast signal into a digital IF signal (DIF) when the selected broadcast signal is a digital broadcast signal, and may convert the selected broadcast signal into an analog baseband image or voice signal (CVBS/SIF) when the selected broadcast signal is an analog broadcast signal. That is, the tuner unit 210 may process the digital broadcast signal or the analog broadcast signal. The analog baseband image or voice signal (CVBS/SIF) output by the tuner unit 210 may be directly input to the controller 270.

Meanwhile, the tuner unit 210 may sequentially select broadcast signals corresponding to all broadcast channels that are stored through a channel memory function among received broadcast signals, and may convert the selected broadcast signals into intermediate frequency signals or baseband image or voice signals.
Meanwhile, the tuner unit 210 may include a plurality of tuners in order to receive broadcast signals of a plurality of channels. Alternatively, a single tuner that simultaneously receives broadcast signals of a plurality of channels is also possible.

The demodulation unit 220 may receive a digital IF signal (DIF) converted by the tuner unit 210, and may perform a demodulation operation. After performing demodulation and channel decoding, the demodulation unit 220 may output a stream signal TS. In this case, the stream signal may be a signal in which an image signal, a voice signal, or a data signal is multiplexed. The stream signal output by the demodulation unit 220 may be input to the controller 270. After performing demultiplexing, image/voice signal processing, etc., the controller 270 may output an image through the display unit 280, and may output a voice through the audio output unit 285.

The external device interface unit 230 may transmit or receive data to and from an external device connected thereto. To this end, the external device interface unit 230 may include an A/V input and output unit (not illustrated). The external device interface unit 230 may be connected to an external device, such as a digital versatile disk (DVD) player, a Blu-ray player, a game machine, a camera, a camcorder, a computer (notebook), or a set-top box, in a wired/wireless manner, and may perform an input/output operation along with the external device. The A/V input and output unit may receive an image and a voice signal from the external device.

Furthermore, the external device interface unit 230 may establish a communication network with various remote controllers 300, and may receive, from the remote controller 300, a control signal related to an operation of the image display apparatus 100, or may transmit, to the remote controller 300, data related to an operation of the image display apparatus 100.

The external device interface unit 230 may include a communication module (not illustrated) for short distance wireless communication with another electronic device. Through such a communication module, the external device interface unit 230 may transmit and receive data to and from an adjacent electronic device. In particular, in a mirroring mode, the external device interface unit 230 may receive device information, information on an executed application, an application image, etc. from a mobile terminal.

The network interface unit 235 may provide an interface for connecting the image display apparatus 100 to a wired/wireless network including the Internet. For example, the network interface unit 235 may receive, over a network, content or data provided by the Internet, a content provider, or a network operator. Meanwhile, the network interface unit 235 may include a communication module (not illustrated) for a connection with a wired/wireless network.

For example, the external device interface unit 230 and/or the network interface unit 235 may include a communication module for short distance communication, such as wireless fidelity (Wi-Fi), Bluetooth, Bluetooth low energy (BLE), Zigbee, or near field communication (NFC), and/or a communication module for cellular communication, such as long-term evolution (LTE), LTE-Advanced (LTE-A), code division multiple access (CDMA), wideband CDMA (WCDMA), a universal mobile telecommunications system (UMTS), or wireless broadband (WiBro).
The storage unit 240 may store a program for the processing and control of each signal within the controller 270, and may store a signal-processed image, voice, or data signal. For example, the storage unit 240 may store application programs designed for the purpose of performing various tasks that may be processed by the controller 270, and may selectively provide some of the stored application programs upon request by the controller 270. A program, etc. stored in the storage unit 240 is not specially limited as long as it may be executed by the controller 270.

The storage unit 240 may perform a function for temporarily storing an image, voice, or data signal that is received from the external device through the external device interface unit 230. The storage unit 240 may store information about designated broadcast channels through a channel memory function, such as a channel map. The storage unit 240 may store various data that is received through the external device interface unit 230, the network interface unit 235, and/or the user input interface unit 250.

FIG. 12 illustrates an embodiment in which the storage unit 240 is provided separately from the controller 270, but the scope of the present disclosure is not limited thereto. The storage unit 240 may be included in the controller 270.

The user input interface unit 250 may transfer, to the controller 270, a signal input by a user, or may transfer a signal from the controller 270 to a user. For example, the user input interface unit 250 may transmit/receive user input signals, such as power on/off, channel selection, and screen setting, from the remote controller 300, or may transfer, to the controller 270, user input signals received from a power key, a channel key, a volume key, a local key (not illustrated) such as a set value, etc., which are provided in the image display apparatus 100.

The sensor unit 260 may include at least one sensor. The sensor unit 260 may include a proximity sensor, a temperature/humidity sensor, an illuminance sensor, etc., for example. The sensor unit 260 may measure a physical quantity or detect an operating state of the image display apparatus 100, may convert the measured or detected information into an electrical signal, and may transfer the converted electrical signal to the controller 270.

The sensor unit 260 may further include at least one panel temperature sensor (not illustrated) that detects a temperature of the display panel 10. The panel temperature sensor may be attached to the display panel 10, or may be added in the form of a circuit that detects a temperature between the display panel 10 and a panel driving unit 281 (refer to FIG. 13).

The controller 270 may include at least one processor, and may control an overall operation of the image display apparatus 100 by using the at least one processor. In this case, the processor may be a common processor, such as a central processing unit (CPU). Of course, the processor may be a dedicated device, such as an ASIC, or another hardware-based processor.

The controller 270 may demultiplex a stream that is input through the tuner unit 210, the demodulation unit 220, the external device interface unit 230, or the network interface unit 235, or may generate and output a signal for an image or voice output by processing the demultiplexed signals.

The driving unit 275 may include the roller 143 on which the display unit 280 is wound and at least one motor (not illustrated). The display panel 10 may be wound on or unwound from the roller 143 depending on an operation of the motor.
The display unit 280 (e.g., the display 20 in FIG. 3) may generate a driving signal by converting an image signal, a data signal, an OSD signal, or a control signal that is processed by the controller 270, or an image signal, a data signal, a control signal, etc. that are received from the external device interface unit 230. This is described with reference to FIG. 13.

Referring to FIG. 13, the display unit 280 may include the display panel 10 and a panel driving unit 281.

The display panel 10 may include a plurality of pixels. The plurality of pixels may be connected to a plurality of gate lines GL and data lines DL that are disposed to intersect in a matrix form. A plurality of thin-film transistors (TFTs) may be disposed at the intersections of the plurality of gate lines GL and data lines DL. The plurality of pixels included in the display panel 10 may include RGB subpixels. Alternatively, the plurality of pixels included in the display panel 10 may include RGBW subpixels.

The display unit 280 may generate a driving signal for the plurality of pixels by converting an image signal, a data signal, an OSD signal, a control signal, etc. that are processed by the controller 270.

The panel driving unit 281 may drive the display panel 10 based on a control signal and a data signal transferred by the controller 270. The panel driving unit 281 may include a timing controller 282, a gate driver 284, and/or a data driver 286.

The timing controller 282 may receive a control signal, an image signal, etc. from the controller 270. The timing controller 282 may control the gate driver 284 and/or the data driver 286 in response to the control signal. The timing controller 282 may rearrange image signals based on specifications of the data driver 286, and may transmit the image signals to the data driver 286. The gate driver 284 and the data driver 286 may supply a scanning signal and an image signal to the display panel 10 through the gate lines GL and the data lines DL under the control of the timing controller 282. Meanwhile, the data driver 286 may include a plurality of source driver integrated circuits (ICs) (not illustrated) corresponding to the plurality of data lines DL.

The display unit 280 may be a flexible display including an organic light-emitting panel composed of OLEDs. The display panel 10 may be formed on a substrate made of a material having flexibility, such as polyimide. If the display panel 10 is an organic light-emitting display panel including OLEDs, the plurality of pixels may be composed of the OLEDs.

Furthermore, the display unit 280 may be capable of a three-dimensional (3-D) display. Display units 280 capable of the 3-D display may be divided into a glassless type and a glass type. Meanwhile, the display unit 280 may be composed of a touch screen and may be used as an input device in addition to an output device.

The power supply unit 290 may supply corresponding power to the overall image display apparatus 100. In particular, the power supply unit 290 may supply power to the controller 270, which may be implemented in the form of a system on chip (SoC), the display unit 280 for displaying an image, the audio output unit 285 for an audio output, etc. Specifically, the power supply unit 290 may include a converter (not illustrated) for converting AC power into DC power and a DC/DC converter (not illustrated) for converting the level of the DC power. The power supply unit 290 may supply a common electrode voltage Vcom to the display panel 10, and may supply a gamma voltage to the data driver 286.
An image signal image-processed by the controller 270 may be input to the display unit 280, and may be displayed as an image corresponding to the image signal. Furthermore, an image signal image-processed by the controller 270 may be input to an external output device through the external device interface unit 230. Although not illustrated in FIG. 12, the controller 270 may include a demultiplexing unit (not illustrated), an image processing unit (not illustrated), etc.

The audio output unit 285 may include an audio device, such as a speaker or a buzzer, may receive a signal voice-processed by the controller 270, and may output the signal as a voice. A voice signal processed by the controller 270 may be output to the audio output unit 285 as a sound. Furthermore, a voice signal processed by the controller 270 may be input to an external output device through the external device interface unit 230.

In addition, the controller 270 may control an overall operation within the image display apparatus 100. For example, the controller 270 may control the tuner unit 210 to tune to a channel selected by a user or a broadcast corresponding to a previously stored channel. Furthermore, the controller 270 may control the image display apparatus 100 based on a user command input through the user input interface unit 250 or an internal program.

Meanwhile, the controller 270 may control the display unit 280 to display an image. In this case, the image displayed on the display unit 280 may be a still image or a moving image, and may be a 2-D image or a 3-D image. Meanwhile, the controller 270 may display a designated 2-D object within an image that is displayed on the display unit 280. For example, the object may be at least one of an accessed web screen (paper, magazine, etc.), an electronic program guide (EPG), various menus, widgets, icons, still images, moving images, and text.

A heating part 295 may include a hot wire (not illustrated) disposed within the housing 30. In this case, the hot wire of the heating part 295 may include a heating element that generates heat by power supplied thereto and an insulating element that surrounds the heating element. For example, the power supply unit 290 may supply power to the hot wire of the heating part 295 under the control of the controller 270. The hot wire of the heating part 295 may be heated by the supplied power. The heating part 295 is described with reference to FIGS. 14 to 15c.

Referring to FIG. 14, the hot wire of the heating part 295 may be disposed within the module cover 15. The hot wire of the heating part 295 may include a plurality of sub-hot wires 29 disposed within a plurality of segments of the module cover 15, respectively. Each of the plurality of sub-hot wires 29 may extend in an x axis direction within each of the plurality of segments of the module cover 15. One end and/or the other end of at least one of the plurality of sub-hot wires 29 may be connected to another sub-hot wire 29.

Referring to FIGS. 15a to 15c, the module cover 15 may include a plurality of first segments 15a included in a first segment group and a plurality of second segments 15c included in a second segment group. The plurality of first segments 15a may be segments that are exposed to the outside of the housing 30 earlier than the plurality of second segments 15c when the display panel 10 is rolled up. Conversely, the plurality of second segments 15c may be segments that are moved to the inside of the housing 30 and wound on the panel roller 143 earlier than the plurality of first segments 15a when the display panel 10 is rolled down.
Meanwhile, when the display panel 10 is rolled down into the housing 30 and the display panel 10 and the module cover 15 are thus wound on the panel roller 143, at least a part of the display panel 10 may be surrounded by the module cover 15. In this case, if the plurality of sub-hot wires 29 is heated, a temperature of one area of the display panel 10 surrounded by the module cover 15 may rise more rapidly than a temperature of the other area of the display panel 10. Accordingly, even when the temperature of the one area of the display panel 10 reaches a designated temperature, the temperature of the other area of the display panel 10 may not yet have reached the designated temperature. Furthermore, by the time the temperature of the other area of the display panel 10 reaches the designated temperature, elements of the one area of the display panel 10 may be damaged because the temperature of the one area of the display panel 10 is higher than the designated temperature.

By taking such a point into consideration, at least some of the plurality of segments of the module cover 15 may include at least one third hole 151 formed therein that penetrates the side opposite to the side that faces the display panel 10. As in FIG. 15c, if the plurality of second segments 15c includes the at least one third hole 151, then, even though the plurality of sub-hot wires 29 is heated, the temperature of the one area of the display panel 10 that is surrounded by the plurality of second segments 15c can rise evenly along with the temperature of the other area of the display panel 10 because heat is discharged through the third hole 151. Accordingly, the one area of the display panel 10 surrounded by the plurality of second segments 15c can be prevented from being overheated.

Meanwhile, the image display apparatus 100 may further include a photographing unit (not illustrated). The photographing unit may photograph a user. The photographing unit may be implemented as one camera, but the present disclosure is not limited thereto, and the photographing unit may be implemented as a plurality of cameras. Meanwhile, the photographing unit may be embedded in the image display apparatus 100 over the display unit 280 or may be separately disposed. Image information photographed by the photographing unit may be input to the controller 270.

The controller 270 may determine a location of a user based on an image photographed by the photographing unit. For example, the controller 270 may confirm a distance (z axis coordinates) between the user and the image display apparatus 100. In addition, the controller 270 may confirm x axis coordinates and y axis coordinates within the display unit 280, which correspond to the location of the user. The controller 270 may detect a gesture of a user based on an image photographed by the photographing unit, a sensed signal from the sensor unit, or a combination thereof.

Meanwhile, the image display apparatus 100 may further include an input unit (not illustrated). The input unit may be provided on one side of the body of the image display apparatus 100. For example, the input unit may include a touch pad, a physical button, etc. The input unit may receive various user commands related to an operation of the image display apparatus 100, and may transfer, to the controller 270, a control signal corresponding to an input command.

Meanwhile, the image display apparatus 100 may be a fixed or mobile digital broadcast receiver capable of receiving digital broadcast.
Meanwhile, the block diagram of the image display apparatus 100 illustrated in FIG. 12 is merely a block diagram for an embodiment of the present disclosure. The elements of the block diagram may be integrated, added, or omitted depending on the specifications of the image display apparatus 100 that is actually implemented. That is, if necessary, two or more elements may be combined into one element, or one element may be divided into two or more elements. Furthermore, the function performed by each block is for describing an embodiment of the present disclosure, and a detailed operation or apparatus thereof does not limit the scope of the present disclosure.

The remote controller 300 may include various communication modules, such as Wi-Fi, Bluetooth, BLE, Zigbee, and NFC. The remote controller 300 may transmit a user command to the user input interface unit 250 through the communication module. Furthermore, the remote controller 300 may receive an image, a voice, a data signal, etc. output by the user input interface unit 250 through the communication module, and may display the received image or data signal or output the received voice.

FIGS. 16 and 17 are examples of flowcharts of operating methods of the image display apparatus according to an embodiment of the present disclosure. FIGS. 18a to 23b are diagrams to which reference is made in describing operating methods of the image display apparatus according to various embodiments of the present disclosure.

Referring to FIG. 16, in operation S1610, the image display apparatus 100 may turn on the power of the image display apparatus 100. For example, the image display apparatus 100 may turn on the power in response to a user input signal that turns on the power through the user input interface unit 250. In this case, at the timing at which the power of the image display apparatus 100 is turned on, the display panel 10 may be wound on the panel roller 143 and may not be exposed to the outside of the housing 30.

In operation S1620, the image display apparatus 100 may check a temperature of the display panel 10. For example, the image display apparatus 100 may check the temperature of the display panel 10 based on a sensing value detected through the panel temperature sensor.

In operation S1630, the image display apparatus 100 may determine whether the temperature of the display panel 10 is a preset reference temperature or higher. In this case, the reference temperature may mean a temperature at which a material included in the display panel 10, such as polyimide, is not hardened and the elements included in the display panel 10 can maintain an operating state equal to or better than a reference. For example, the reference temperature may correspond to a room temperature (e.g., 25° C.).

When the temperature of the display panel 10 is the reference temperature or higher, in operation S1640, the image display apparatus 100 may control an operation of the panel roller 143. For example, when the temperature of the display panel 10 is 25° C. or higher, the image display apparatus 100 may control an operation of the panel roller 143 so that the display panel 10 is rolled up.

Meanwhile, when the temperature of the display panel 10 is less than the reference temperature, in operation S1650, the image display apparatus 100 may apply a signal to at least some of the plurality of pixels included in the display panel 10 based on the temperature of the display panel 10.
For example, when the temperature of the display panel 10 is less than 25° C., the image display apparatus 100 may apply a signal to at least some of the plurality of pixels included in the display panel 10 and control the pixels to which the signal is applied to output light. In this case, as heating occurs due to the pixels from which light is output, the temperature of the display panel 10 may rise.

Meanwhile, when the temperature of the display panel 10 is less than the reference temperature, the image display apparatus 100 may additionally control the hot wire of the heating part 295 to be heated. For example, when the temperature of the display panel 10 is less than 25° C., the image display apparatus 100 may supply power to the hot wire of the heating part 295 so that the sub-hot wire 29 disposed within each of the plurality of segments of the module cover 15 is heated.

The image display apparatus 100 may branch back to operation S1620, may check the temperature of the display panel 10, and may continuously apply the signal to at least some of the plurality of pixels included in the display panel 10 based on the temperature of the display panel 10 until the temperature of the display panel 10 reaches the reference temperature.

Meanwhile, applying the signal to at least some of the plurality of pixels included in the display panel 10 based on the temperature of the display panel 10 is more specifically described with reference to FIG. 17.

Referring to FIG. 17, in operation S1710, the image display apparatus 100 may determine whether the temperature of the display panel 10 is less than a first reference temperature. In this case, the first reference temperature may be a temperature lower than the reference temperature that is the criterion for the determination in operation S1630 of FIG. 16. For example, the first reference temperature may correspond to a low temperature (e.g., 10° C.) lower than a room temperature (e.g., 25° C.).

When the temperature of the display panel 10 is less than the first reference temperature (e.g., 10° C.), in operation S1720, the image display apparatus 100 may apply a signal to at least some of the plurality of pixels included in the display panel 10 based on a first pixel pattern.

Meanwhile, when the temperature of the display panel 10 is the first reference temperature (e.g., 10° C.) or higher and is less than a second reference temperature (e.g., 25° C.), in operation S1730, the image display apparatus 100 may apply a signal to at least some of the plurality of pixels included in the display panel 10 based on a second pixel pattern. In this case, the second reference temperature may correspond to a room temperature (e.g., 25° C.) that corresponds to the reference temperature, that is, the criterion for the determination in operation S1630 of FIG. 16.

In this case, when the signal is applied to the display panel 10 based on the first pixel pattern, the temperature of the display panel 10 may rise more rapidly than in the case where the signal is applied to the display panel 10 based on the second pixel pattern. A sketch of this temperature-based selection follows below.
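By way of illustration only, the following sketch mirrors operations S1710 to S1730; the function name is hypothetical and the default thresholds simply reuse the example temperatures (10° C. and 25° C.) given above.

```python
def select_pixel_pattern(panel_temp_c: float,
                         first_ref_c: float = 10.0,
                         reference_c: float = 25.0) -> str:
    """Choose a warm-up drive pattern from the panel temperature."""
    if panel_temp_c < first_ref_c:
        # Coldest case (S1720): drive larger pixel groups to heat faster.
        return "first pixel pattern"
    if panel_temp_c < reference_c:
        # Moderately cold (S1730): drive smaller pixel groups.
        return "second pixel pattern"
    # At or above the reference temperature: the panel roller may be
    # operated (S1640); no warm-up pattern is needed.
    return "none"
```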
Referring to FIGS. 18a and 18b, if a signal is applied to the display panel 10 based on the first pixel pattern, the signal may first be applied, for a preset time, to a first pixel group 181 that corresponds to half of the plurality of pixels included in the display panel 10. In this case, the preset time for which the signal is applied may be a time (e.g., two minutes) for which the deterioration of the elements constituting the display panel 10 is not caused. Furthermore, if the signal has been applied to the first pixel group 181 for the preset time, the signal may be applied, for the preset time, to a second pixel group 182 that corresponds to the remaining half of the plurality of pixels. In this case, the pixels included in the first pixel group 181 and the pixels included in the second pixel group 182 may be constructed so as not to overlap.

Meanwhile, referring to FIGS. 19a to 19c, when a signal is applied to the display panel 10 based on the second pixel pattern, the signal may first be applied, for the preset time, to a third pixel group 191 that corresponds to 25% of the plurality of pixels included in the display panel 10. Furthermore, if the signal has been applied to the third pixel group 191 for the preset time, the signal may be sequentially applied, each for the preset time, to a fourth pixel group 192, a fifth pixel group 193, and a sixth pixel group 194 that each correspond to another 25% of the plurality of pixels. In this case, the pixels included in the third pixel group 191 to the sixth pixel group 194 may be constructed so as not to overlap.

That is, in the case where the signal is applied to the display panel 10 based on the first pixel pattern, the signal is applied to more pixels during the given time than in the case where the signal is applied based on the second pixel pattern, and thus the temperature of the display panel 10 can rise more rapidly. The group rotation common to both patterns is sketched below.
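The following is a minimal sketch of that rotation, assuming hypothetical callbacks apply_signal() and panel_warm() (neither is part of the disclosure); each non-overlapping group is driven for the preset dwell time until the panel reaches the reference temperature.

```python
import itertools
import time

def rotate_pixel_groups(pixel_groups, dwell_s, apply_signal, panel_warm):
    """Drive non-overlapping pixel groups one after another, each for a
    preset dwell time, until the panel reports it is warm enough.

    pixel_groups: e.g. two halves (first pattern) or four quarters
                  (second pattern) of the panel's pixels.
    dwell_s:      dwell per group, e.g. ~120 s per the example above.
    """
    for group in itertools.cycle(pixel_groups):
        if panel_warm():        # compare against the reference temperature
            break
        apply_signal(group)     # light only this group's pixels
        time.sleep(dwell_s)
```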
Meanwhile, referring to FIGS. 20a and 20b, if the display panel 10 is rolled down into the housing 30 and the display panel 10 and the module cover 15 are thus wound and stacked on the panel roller 143 multiple times, a second area 22 of the display panel 10 that is wound on the panel roller 143 first may be surrounded by a first area 21. In this case, when the same signal is applied to a plurality of pixels included in the first area 21 of the display panel 10 and a plurality of pixels included in the second area 22, a temperature of the second area 22 may rise more rapidly than a temperature of the first area 21 due to heat occurring from the first area 21. In such a case, before the temperature of the first area 21 reaches a designated temperature, the temperature of the second area 22 may already have reached the designated temperature. By the time the first area 21 reaches the designated temperature, the temperature of the second area 22 has already become higher than the designated temperature, and elements in the second area 22 of the display panel 10 may be overheated. By taking such a point into consideration, the image display apparatus 100 may apply signals to the plurality of pixels included in the first area 21 of the display panel 10 and the plurality of pixels included in the second area 22 based on different patterns.

Referring to FIGS. 21a and 21b, if a signal is applied to the display panel 10 based on the first pixel pattern, the signal may first be applied, for a preset time, to a first pixel group 201 that corresponds to half of the pixels of the first area 21 of the display panel 10 and a fourth pixel group 204 that corresponds to 25% of the pixels of the second area 22. Furthermore, if the signal has been applied to the first pixel group 201 and the fourth pixel group 204 for the preset time, the signal may be applied, for the preset time, to a second pixel group 202 that corresponds to the remaining half of the pixels of the first area 21 of the display panel 10 and a third pixel group 203 that corresponds to another 25% of the pixels of the second area 22 of the display panel 10. In this case, the pixels included in the first pixel group 201 to the sixth pixel group 206 may be constructed so as not to overlap.

Meanwhile, referring to FIGS. 22a and 22b, if a signal is applied to the display panel 10 based on the second pixel pattern, the signal may first be applied, for a preset time, to a first pixel group 211 that corresponds to 25% of the pixels of the first area 21 of the display panel 10 and a fifth pixel group 215 that corresponds to 20% of the pixels of the second area 22. Furthermore, if the signal has been applied to the first pixel group 211 and the fifth pixel group 215 for the preset time, the signal may be applied, for the preset time, to a second pixel group 212 that corresponds to another 25% of the pixels of the first area 21 of the display panel 10 and a seventh pixel group 217 that corresponds to another 20% of the pixels of the second area 22 of the display panel 10. In this case, the pixels included in the first pixel group 211 to a ninth pixel group 219 may be constructed so as not to overlap.

That is, as illustrated in FIGS. 21a to 22b, whether the signal is applied to the display panel 10 based on the first pixel pattern or based on the second pixel pattern, the signal is applied to more pixels of the first area 21 than of the second area 22 during the given time. Accordingly, the temperature of the display panel 10 can rise rapidly, and the overheating of the second area 22 of the display panel 10 can also be prevented; the per-pixel duty implied by these fractions is illustrated below.
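Purely as an illustration of why the surrounded inner area heats more slowly, the fractions above imply a lower per-pixel on-time for the second area 22. The helper below is hypothetical; it simply averages the step fractions, assuming equal dwell per step and disjoint groups.

```python
def per_pixel_duty(step_fractions):
    """Average on-time fraction of a pixel over one rotation, assuming each
    step drives a disjoint group covering the given fraction of its area."""
    return sum(step_fractions) / len(step_fractions)

# First pixel pattern (FIGS. 21a-21b): per step, half of area 21 is lit
# but only a quarter of area 22, so area 22 receives half the drive.
outer_duty = per_pixel_duty([0.50, 0.50])   # 0.50 for the first area 21
inner_duty = per_pixel_duty([0.25, 0.25])   # 0.25 for the second area 22
```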
Meanwhile, the image display apparatus 100 may heat the hot wire of the heating part 295 based on the temperature of the display panel 10. For example, when the temperature of the display panel 10 is less than the first reference temperature (e.g., 10° C.), the image display apparatus 100 may supply power to the hot wire of the heating part 295 so that all the sub-hot wires 29 disposed within the plurality of segments of the module cover 15 are heated.

For example, when the temperature of the display panel 10 is the first reference temperature (e.g., 10° C.) or higher and is less than the second reference temperature (e.g., 25° C.), the image display apparatus 100 may apply only a signal to at least some of the plurality of pixels included in the display panel 10 based on the second pixel pattern, without heating the hot wire of the heating part 295. Alternatively, for example, when the temperature of the display panel 10 is the first reference temperature (e.g., 10° C.) or higher and is less than the second reference temperature (e.g., 25° C.), the image display apparatus 100 may supply power to the hot wire of the heating part 295 so that only some of the sub-hot wires 29 disposed within the plurality of segments of the module cover 15 are heated.

Meanwhile, when the temperature of the display panel 10 is the preset reference temperature or higher, the image display apparatus 100 may apply a signal to at least some of the plurality of pixels included in the display panel 10 so that the temperature of the display panel 10 is maintained at the reference temperature or higher until the image display apparatus 100 controls an operation of the panel roller 143. In this case, the signal applied to raise the temperature of the display panel 10 in operation S1650, etc. may be named a main signal, and the signal applied to maintain the temperature of the display panel 10 may be named a sub-signal.

In this case, a wavelength of light that is output while the main signal is applied to pixels included in the display panel 10 may be shorter than a wavelength of light that is output while the sub-signal is applied to the pixels. For example, white light may be output from the pixels to which the main signal is applied, and red light may be output from the pixels to which the sub-signal is applied.

As illustrated in FIGS. 23a and 23b, when the temperature of the display panel 10 is the preset reference temperature (e.g., 25° C.) or higher, the image display apparatus 100 may first apply the sub-signal, for a preset time, to a first pixel group 221 that corresponds to 20% of the plurality of pixels included in the display panel 10 until the image display apparatus 100 controls an operation of the panel roller 143. Furthermore, if the sub-signal has been applied to the first pixel group 221 for the preset time, the sub-signal may be sequentially applied, each for the preset time, to a second pixel group 222 to a fifth pixel group 225 that each correspond to another 20% of the plurality of pixels.

Meanwhile, if a function (hereinafter, a schedule function) for setting in advance a timing at which an image is to be output through the display 20 has been set by a user, the image display apparatus 100 can reduce the amount of power that would otherwise be unnecessarily consumed to raise and/or maintain the temperature of the display panel 10. To do so, the image display apparatus 100 may check the temperature of the display panel 10 a given time before the timing at which the image is to be output through the display 20, and may raise and/or maintain the temperature of the display panel 10 based on that timing, as sketched below.
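As a rough sketch of the schedule function's power saving, and assuming a hypothetical warm-up interval (the disclosure does not specify one), pre-heating would begin only shortly before the scheduled output time.

```python
from datetime import datetime, timedelta

def preheat_start_time(image_output_at: datetime,
                       warmup: timedelta = timedelta(minutes=10)) -> datetime:
    """Return the moment to start checking and raising the panel temperature
    so that it reaches the reference temperature just in time for the
    scheduled image output (the warm-up interval is an assumption)."""
    return image_output_at - warmup

# Example: image scheduled for 07:00 -> start warming the panel at 06:50.
start = preheat_start_time(datetime(2025, 1, 1, 7, 0))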
As described above, according to various embodiments of the present disclosure, damage to the display panel 10, which may occur when an operation of the panel roller 143 is controlled in the state in which the display panel 10 has been hardened, can be prevented because the display panel 10 hardened in a low-temperature environment can be sufficiently softened before the operation of the panel roller 143 is controlled.

Furthermore, when the temperature of the display panel 10 is low, a response speed of the display panel 10 may be reduced due to a change in the operating characteristics of the thin-film transistors (TFTs), etc. However, according to various embodiments of the present disclosure, a user satisfaction level can be improved because the best state of the display panel 10 can be maintained despite a change in the surrounding environment.

It is to be understood that the accompanying drawings are merely intended to make the embodiments disclosed in this specification easily understood, and the technical spirit disclosed in this specification is not restricted by the accompanying drawings and includes all changes, equivalents, and substitutions which fall within the spirit and technical scope of the present disclosure.

Meanwhile, the operating method of the image display apparatus of the present disclosure may be implemented as processor-readable code in a processor-readable recording medium included in the image display apparatus. The processor-readable recording medium includes all types of recording devices in which processor-readable data is stored. Examples of the processor-readable recording medium include a ROM, a RAM, a CD-ROM, magnetic tapes, floppy disks, and optical data storages, and the medium also includes an implementation in the form of a carrier wave, such as transmission through the Internet. Furthermore, the processor-readable recording medium may be distributed to computer systems connected over a network, and the processor-readable code may be stored and executed in a distributed manner.

Furthermore, although some embodiments of this specification have been illustrated and described above, this specification is not limited to the aforementioned specific embodiments, and a person having ordinary knowledge in the art to which this specification pertains may modify the present disclosure in various ways without departing from the subject matter of the claims. Such modified embodiments should not be interpreted separately from the technical spirit or prospect of this specification.
DETAILED DESCRIPTION

Advantages and features of the present disclosure and implementation methods thereof will be clarified through the following embodiments described with reference to the accompanying drawings. However, the present disclosure is not limited to the embodiments described below and may be embodied with a variety of different modifications. The embodiments are merely provided to allow those skilled in the art to completely understand the scope of the present disclosure, and the present disclosure is defined only by the scope of the claims.

The figures, dimensions, ratios, angles, numbers, and the like disclosed in the drawings for describing the embodiments of the present disclosure are merely illustrative and are not limited to the matters shown in the present disclosure. Like reference numerals throughout the specification can refer to like elements. In the description of the present disclosure, a detailed description of known techniques related to the present disclosure may be omitted when it is determined that it may obscure the subject matter of the present disclosure.

Terms such as "including," "having," and "composed of" used herein are intended to allow other elements to be added unless the terms are used with the term "only." Any references to the singular may include the plural unless expressly stated otherwise. Components may be interpreted to include an ordinary error range even if not expressly stated.

For description of a positional relationship, for example, when the positional relationship between two parts is described as "on," "above," "below," "next to," etc., one or more parts may be interposed therebetween unless the term "immediately" or "directly" is used in the expression.

In the description of embodiments, although the terms "first," "second," and the like may be used herein to describe various components, the components are not limited by these terms. These terms are used only to distinguish one component from another. Accordingly, a first component discussed below could be termed a second component without departing from the teachings of the present disclosure.

The features of various embodiments may be partially or entirely bonded to or combined with each other. The embodiments may interoperate and be performed in technically various ways, and may be carried out independently of or in association with each other.

Hereinafter, various embodiments of the present disclosure will be described in detail with reference to the accompanying drawings.

FIG. 1 is a conceptual diagram of a display device according to one embodiment of the present disclosure. FIG. 2 is a cross-sectional view schematically illustrating a display panel according to an embodiment of the present disclosure. FIG. 3A is a view illustrating a pixel arrangement in a first display area according to one embodiment of the present disclosure.

Referring to FIG. 1, the display device includes a display panel 100, and a front surface of the display panel 100 may be configured as a display area. Thus, a full-screen display may be implemented. The display device may be the display panel itself, or may be a concept including the display panel and a driving unit.

The display area may include a first display area DA and a second display area CA. The first display area DA and the second display area CA may both output an image but may differ in resolution.
As an example, a resolution of a plurality of second pixels disposed in the second display area CA may be lower than a resolution of a plurality of first pixels disposed in the first display area DA. As the resolution of the plurality of second pixels disposed in the second display area CA is lowered, a correspondingly larger amount of light may reach the sensors 40 and 50 disposed in the second display area CA. However, the present disclosure is not necessarily limited thereto, and the resolution of the first display area DA and the resolution of the second display area CA may be the same as long as the second display area CA has sufficient light transmittance or an appropriate compensation algorithm is implemented.

The second display area CA may be an area in which the sensors 40 and 50 are disposed. The second display area CA is an area that overlaps various sensors and thus may be smaller in area than the first display area DA, which outputs most of the image. The second display area CA is illustrated as being disposed on an upper end of the display device, but the present disclosure is not necessarily limited thereto. The position and area of the second display area CA may be variously modified.

The sensors 40 and 50 may include at least one of an image sensor, a proximity sensor, an illumination sensor, a gesture sensor, a motion sensor, a fingerprint recognition sensor, and a biometric sensor. As an example, a first sensor may be an illumination sensor or an infrared sensor, and a second sensor may be an image sensor configured to capture an image or a video, but the present disclosure is not necessarily limited thereto.

Referring to FIGS. 2 and 3A, the first display area DA and the second display area CA may include a pixel array in which pixels, to which pixel data is written, are disposed. The number of pixels per unit area (hereinafter referred to as "pixels per inch (PPI)") of the second display area CA may be lower than that of the first display area DA in order to ensure the light transmittance of the second display area CA.

The pixel array of the first display area DA may include a pixel area in which a plurality of pixel groups having a high PPI are disposed. The pixel array of the second display area CA may include a pixel area in which a plurality of pixel groups having a relatively low PPI are disposed while being spaced apart from each other by light-transmitting areas. In the second display area CA, external light may pass through the display panel 100 through the light-transmitting areas having high light transmittance and may be received by a sensor placed below the display panel 100.

Since both the first display area DA and the second display area CA include pixels, an input image may be reproduced on the first display area DA and the second display area CA. Thus, a full-screen display may be implemented.

Each of the pixels of the first display area DA and the second display area CA may include sub-pixels having different colors to implement a color of an image. The sub-pixels may include red, green, and blue sub-pixels. Although not shown in the drawings, the pixel group may further include a white sub-pixel. Each of the sub-pixels may include a pixel circuit unit and a light-emitting element (e.g., an organic light-emitting diode (OLED)).

The second display area CA may include the pixels and an image capturing unit 40 disposed below a screen of the display panel 100. The image capturing unit 40 may include an image sensor.
The pixels of the second display area CA may display an input image by writing pixel data of the input image in a display mode. The image capturing unit 40 may capture an external image in an image capturing mode to output picture or video image data. The image capturing unit 40 may be a camera module that captures an external image to output picture or video image data, but it is not necessarily limited thereto and may have various structures capable of acquiring an image. A filter module 60 may be disposed above the image capturing unit 40. The filter module 60 may selectively pass light incident on the second display area.

Because pixels are removed from the second display area CA in order to ensure light transmittance, an image quality compensation algorithm for compensating the luminance and color coordinates of the pixels in the second display area CA may be applied.

The display panel 100 may have a width in an X-axis direction, a length in a Y-axis direction, and a thickness in a Z-axis direction. The display panel 100 may include a circuit layer 12 disposed on a substrate 10, and a light-emitting element layer 14 disposed on the circuit layer 12. A polarizing plate 18 may be disposed on the light-emitting element layer 14, and a cover glass 20 may be disposed on the polarizing plate 18.

The circuit layer 12 may include a pixel circuit connected to lines such as data lines, gate lines, and power lines, a gate driving unit connected to the gate lines, and the like. The circuit layer 12 may include circuit elements such as transistors implemented as thin-film transistors (TFTs) and capacitors. The lines and circuit elements of the circuit layer 12 may be implemented with a plurality of insulating layers, two or more metal layers separated from each other with the insulating layers therebetween, and an active layer including a semiconductor material.

The light-emitting element layer 14 may include the light-emitting element driven by the pixel circuit. The light-emitting element may be implemented as an OLED. The OLED may include an organic compound layer formed between an anode and a cathode. The organic compound layer may include a hole injection layer HIL, a hole transport layer HTL, an emission layer EML, an electron transport layer ETL, and an electron injection layer EIL, but the present disclosure is not limited thereto. When a voltage is applied to the anode and the cathode of the OLED, holes passing through the hole transport layer HTL and electrons passing through the electron transport layer ETL move to the emission layer EML to create excitons, and thus visible light may be emitted from the emission layer EML.

The light-emitting element layer 14 may further include a color filter array that selectively transmits light of red, green, and blue wavelengths. The light-emitting element layer 14 may be covered by a protective film, and the protective film may be covered by an encapsulation layer. The protective film and the encapsulation layer may have a structure in which organic films and inorganic films are alternately stacked. The inorganic films may block the penetration of moisture or oxygen. The organic films may planarize the surface of the inorganic films. When the organic films and the inorganic films are stacked in multiple layers, the penetration of moisture/oxygen affecting the light-emitting element layer 14 may be effectively blocked because the movement path of the moisture or oxygen is longer than in a single layer.
The polarizing plate 18 may be disposed on the encapsulation layer. The polarizing plate 18 can improve the outdoor visibility of the display device. The polarizing plate 18 may reduce the reflection of light from a surface of the display panel 100 and block the light reflected from the metal of the circuit layer 12, thereby improving the brightness of the pixels. The polarizing plate 18 may be implemented as a polarizing plate to which a linear polarizing plate and a phase retardation film are bonded, or as a circular polarizing plate.

Referring to FIG. 3A, the first display area DA may include a plurality of first pixel groups PG1 arranged in a matrix form. In the plurality of first pixel groups PG1, two sub-pixels may form one pixel using a sub-pixel rendering algorithm. For example, a first unit pixel PIX1 may include R and G1 sub-pixels SP1 and SP2, and a second unit pixel PIX2 may include B and G2 sub-pixels SP3 and SP4. Insufficient color representation in each of the unit pixels PIX1 and PIX2 may be compensated with an average value of corresponding color data between neighboring pixels. However, the present disclosure is not necessarily limited thereto, and each of the plurality of first pixel groups PG1 may instead be composed of real-type pixels including R, G, and B sub-pixels.

FIG. 3B is a view illustrating pixels and light-transmitting areas of the second display area according to one embodiment of the present disclosure.

Referring to FIG. 3B, the second display area CA may include a plurality of second pixel groups PG2 and a plurality of light-transmitting areas TA. The plurality of light-transmitting areas TA may be disposed between the plurality of second pixel groups PG2. Specifically, the light-transmitting areas TA and the second pixel groups PG2 may be alternately disposed in a first direction and a second direction. External light may be received by the image capturing unit 40 through the light-transmitting areas TA. The resolution of the second display area CA may decrease relative to the resolution of the first display area DA by the extent to which the area of the light-transmitting areas TA increases.

The light-transmitting area TA may include transparent media having high light transmittance, without metal, so that light may be incident with minimal light loss. The light-transmitting area TA may be made of transparent insulating materials without including metal lines or pixels. The larger the light-transmitting area TA becomes, the higher the light transmittance of the second display area CA may be.

Each of the plurality of second pixel groups PG2 may include one or two pixels. For example, in each of the second pixel groups PG2, a first unit pixel PIX1 may include R and G1 sub-pixels SP1 and SP2, and a second unit pixel PIX2 may include B and G2 sub-pixels SP3 and SP4. The shape and arrangement of the pixels of the second pixel group PG2 may be the same as or different from those of the first pixel group PG1.

The shape of the light-transmitting area TA is illustrated as being a quadrangular shape, but the present disclosure is not limited thereto. For example, the light-transmitting area TA may be designed in various shapes such as a circular shape, an elliptical shape, or a polygonal shape. All metal electrode materials may be removed from the light-transmitting area TA. Accordingly, the lines of the pixels may be disposed outside the light-transmitting area TA, and light may be effectively incident through the light-transmitting area.
The present disclosure is not necessarily limited to the complete removal of metal from the light-transmitting area TA, however, and the metal electrode material may be present in a partial area of the light-transmitting area TA. FIG. 4 is a view schematically illustrating a structure of the display panel of the second display area. Referring to FIG. 4, the display panel may include the circuit layer 12 disposed on the substrate 10, and the light-emitting element layer 14 disposed on the circuit layer 12. The polarizing plate 18 may be disposed on the light-emitting element layer 14, and the cover glass 20 may be disposed on the polarizing plate 18. In the polarizing plate 18, a first light-transmitting pattern 18d may be formed in an area corresponding to the light-transmitting area TA. For green light having a wavelength of 555 nm, the light transmittance of a substrate made of PI is about 70% to 80%, and the light transmittance of the cathode is about 80% to 90%. On the other hand, the light transmittance of the polarizing plate 18 is relatively low, at about 40%. Thus, in order to effectively increase the light transmittance in the light-transmitting area, it is necessary to increase the light transmittance of the polarizing plate 18. The polarizing plate 18 according to the embodiment has the first light-transmitting pattern 18d formed above the light-transmitting area TA to improve light transmittance. The area in which the first light-transmitting pattern 18d is formed may have the highest light transmittance in the polarizing plate 18. The first light-transmitting pattern 18d may be formed by removing a portion of the polarizing plate 18, or by decomposing a compound constituting the polarizing plate 18. That is, the first light-transmitting pattern 18d may have any of various structures capable of increasing the light transmittance of the conventional polarizing plate 18. In the light-transmitting area TA, the polarizing plate 18 may have the first light-transmitting pattern 18d, and a cathode CAT may have a second light-transmitting pattern. The second light-transmitting pattern may be an opening H1 formed in the light-transmitting area TA. Since the light transmittance of the cathode is 80% to 90%, the light transmittance of the light-transmitting area TA may be further increased by the opening H1. The method of forming the opening H1 in the cathode CAT is not particularly limited. As an example, after the cathode is formed, the opening H1 may be formed in the cathode using an etching process, or the cathode may be removed using a laser from a lower portion of the substrate 10. A planarization layer PCL may be formed on the cathode CAT, and a touch sensor TOE may be disposed on the planarization layer PCL. Here, in the light-transmitting area TA, a sensing electrode and lines of the touch sensor may be made of a transparent material such as indium tin oxide (ITO) or a metal mesh, thereby increasing light transmittance. In another example, the sensing electrode and lines of the touch sensor may be disposed outside the light-transmitting area TA, and may not be disposed within the light-transmitting area TA. The image capturing unit 40 may be disposed below the first light-transmitting pattern 18d and/or the opening H1, thereby increasing the amount of incident light. The filter module 60 may be disposed above the image capturing unit 40. The filter module 60 may selectively pass light incident on the image capturing unit 40. FIG. 5 is a view illustrating a state in which only a portion of light data is selectively incident on the image capturing unit.
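Before turning to FIG. 5, a short worked example using the layer transmittances given above shows why the polarizing plate dominates the light loss and what the pattern 18d and the opening H1 recover. Treating the stack transmittance as the simple product of the layer transmittances is an approximation that ignores interface reflections and the remaining layers:

    # Approximate the stack transmittance at 555 nm as the product of the
    # individual layer transmittances, using mid-points of the ranges above.
    t_substrate = 0.75   # PI substrate: about 70% to 80%
    t_cathode   = 0.85   # cathode: about 80% to 90%
    t_polarizer = 0.40   # conventional polarizing plate: about 40%

    baseline            = t_substrate * t_cathode * t_polarizer  # ~0.255
    with_pattern_18d    = t_substrate * t_cathode                # ~0.64, polarizer locally removed
    with_opening_H1_too = t_substrate                            # ~0.75, cathode also opened

    print(f"{baseline:.1%} -> {with_pattern_18d:.1%} -> {with_opening_H1_too:.1%}")

On this rough model the light-transmitting area roughly triples its transmittance once both the polarizing plate and the cathode are locally removed.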
Referring to FIG. 5, the image capturing unit 40 may be disposed below the second display area CA of the display panel 100. However, the image capturing unit 40 may also be disposed below the first display area DA. According to the embodiment, since a plurality of pixels are disposed above the image capturing unit 40, data about light incident from the outside may be relatively insufficient. Accordingly, an operation of reconstructing the insufficient image data using a Bayer filter and an algorithm may be performed. However, such a configuration may require a high-resolution image sensor, and the amount of calculation may be increased in the process of reconstructing the insufficient image data with an algorithm. In general, data about light incident from the outside may include blue data, green data, and red data. The filter module 60 may selectively pass the light data incident on the image capturing unit 40. The configuration of the filter module 60 is not particularly limited. As an example, the filter module 60 may include various types of filters capable of selectively passing the blue data, the green data, and the red data. A host system 1A of the display device may control a display panel driving unit 2A and an image capturing unit driving unit 2B to synchronize the display panel 100 and the image capturing unit 40 when the image capturing unit 40 is driven, and to time-divisionally drive each of the display panel 100 and the image capturing unit 40. The image capturing unit driving unit 2B may drive the image capturing unit 40 and the filter module 60 according to a timing signal received from the host system 1A. The host system may be a main circuit board of a television system, a camera, a set-top box, a navigation system, a personal computer (PC), a vehicle system, a home theater system, a mobile device, or a wearable device. The display panel 100 may be synchronized with the image capturing unit 40 and time-divisionally controlled so that the color data currently incident on the image capturing unit 40 is not output by the display panel. As an example, when the color data incident on the image capturing unit 40 is red data in a specific time-division section, the display panel 100 may output only green data and blue data and may not output the red data. That is, the display panel driving unit 2A may cause only a green pixel and a blue pixel to emit light, and cause a red pixel not to emit light. Accordingly, the problem of image distortion caused by light output from the display panel 100 being introduced into the image capturing unit 40 may be prevented. When a conventional Bayer filter is used, each of the sensing pixels constituting the image sensor may receive only one of the blue, green, and red color data. As an example, one sensing pixel may receive only the blue data, and the green and red data may be calculated through a post-processing process. Thus, there is a problem in that an image sensor having a high resolution must be used. On the other hand, according to an embodiment, the blue, green, and red data may all be incident on one sensing pixel. Accordingly, compared to the conventional Bayer filter, the resolution may be increased three times or more.
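A minimal sketch of this time-division synchronization, assuming a three-section schedule; the objects panel, camera, and filter_module and their methods are hypothetical placeholders, not interfaces from the disclosure:

    # In each time-division section the image capturing unit senses one color
    # while the display panel outputs only the other two, so panel light
    # cannot pass the matching filter and reach the sensor.
    SCHEDULE = [
        ("red",   ("green", "blue")),
        ("green", ("red",   "blue")),
        ("blue",  ("red",   "green")),
    ]

    def capture_one_frame(panel, camera, filter_module):
        planes = {}
        for sensed, displayed in SCHEDULE:
            filter_module.set_filter(sensed)      # rotate the matching filter into place
            panel.drive_display(only=displayed)   # suppress the sensed color
            planes[sensed] = camera.expose()      # full-color-per-pixel plane
        return planes  # merged later into one RGB image

Each sensing pixel thus accumulates all three color values over one frame, which is what allows the resolution gain over a Bayer arrangement.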
FIG. 6A is a view illustrating a state in which only red data is selectively incident on the image capturing unit by the filter module. FIG. 6B is a view illustrating a state in which the red data is received by a plurality of sensing pixels of the image capturing unit. FIG. 7A is a view illustrating a state in which only green data is selectively incident on the image capturing unit by the filter module. FIG. 7B is a view illustrating a state in which the green data is received by the plurality of sensing pixels of the image capturing unit. FIG. 8A is a view illustrating a state in which only blue data is selectively incident on the image capturing unit by the filter module. FIG. 8B is a view illustrating a state in which the blue data is received by the plurality of sensing pixels of the image capturing unit. Referring to FIG. 6A, the filter module 60 may include a filter array 61 in which a red filter 61R, a green filter 61G, and a blue filter 61B are disposed, and a driving unit 62 configured to rotate the filter array 61. Since the filter module 60 is disposed below the second display area CA, various shutter structures capable of replacing the filter for each time-division section may be applied without limitation. The red filter 61R of the filter module 60 may be rotated by the driving unit 62 and disposed below the second display area CA. Accordingly, only the red data among the light data incident on the second display area CA may pass through the filter module 60 to be incident on the image capturing unit 40. Referring to FIG. 6B, the red data may be written to each of a plurality of sensing pixels 41 of the image capturing unit 40. Each of the plurality of sensing pixels may be a unit pixel of a charge-coupled device (CCD) or complementary metal-oxide-semiconductor (CMOS) image sensor. An input value RI of the red data written to each of the plurality of sensing pixels 41 may be different according to the color of an implemented image. A dark-colored portion in the drawing may be an area having a relatively large data value. Referring to FIG. 7A, when the green filter 61G of the filter module 60 is disposed below the second display area CA, only the green data among the light data incident on the second display area CA may pass through the filter module 60 to be incident on the image capturing unit 40. Accordingly, as shown in FIG. 7B, the green data may be written to each of the plurality of sensing pixels 41 of the image sensor. An input value GI of the green data written to each of the plurality of sensing pixels 41 may be different according to the color of the implemented image. A dark-colored portion in the drawing may be an area having a relatively large data value. Referring to FIG. 8A, when the blue filter 61B of the filter module 60 is disposed below the second display area CA, only the blue data among the light data incident on the second display area CA may pass through the filter module 60 to be incident on the image capturing unit 40. Accordingly, as shown in FIG. 8B, the blue data may be written to each of the plurality of sensing pixels 41 of the image sensor. An input value BI of the blue data written to each of the plurality of sensing pixels 41 may be different according to the color of the implemented image. A dark-colored portion in the drawing may be an area having a relatively large data value. According to this configuration, each of the sensing pixels 41 constituting the image sensor may receive all of the red data, the green data, and the blue data.
Accordingly, since each sensing pixel receives all of the red data, the green data, and the blue data rather than only one of them as with a Bayer filter, the resolution may be increased. An image synthesizing unit (not shown) may synthesize the red data, the green data, and the blue data sequentially output from the image capturing unit 40 to generate one image. FIG. 9 illustrates a first modified example of the filter module. FIGS. 10A to 10C illustrate a second modified example of the filter module. Referring to FIG. 9, the filter module 60 may be configured as a stacked color filter. In the stacked color filter, the red filter 62R, the green filter 62G, and the blue filter 62B may be stacked on one another and may be sequentially disposed below the second display area CA by the driving unit 62. Accordingly, light data may be selectively incident on the image capturing unit 40 through the plurality of filters. Referring to FIGS. 10A to 10C, the filter module 60 may include a plurality of splitters and a plurality of shutters. Accordingly, the light incident on the second display area CA may be separated into a plurality of pieces of light by the plurality of splitters depending on wavelengths thereof. As an example, referring to FIG. 10A, of the incident light, only first light L1 may be transmitted through a first splitter 63a, and second light L2 and third light L3 may be reflected by the first splitter 63a. The second light L2 may be reflected by a second splitter 63b, and the third light L3 may be transmitted therethrough. In addition, the third light L3 may be reflected by a first reflective plate 63c and a second reflective plate 63e so that a path thereof may be changed. When a first shutter 64b and a second shutter 64a are closed, and a third shutter 64c is open, the third light L3 may be incident on and reflected by a multi-reflective plate 65 and incident on the image capturing unit 40. The first light may be the red data, the second light may be the green data, and the third light may be the blue data. Referring to FIG. 10B, when the first shutter 64b and the third shutter 64c are closed, and the second shutter 64a is open, the second light L2 may pass through the multi-reflective plate 65 to be incident on the image capturing unit 40. Referring to FIG. 10C, when the second shutter 64a and the third shutter 64c are closed, and the first shutter 64b is open, the first light L1 may be incident on and reflected by the multi-reflective plate 65 to be incident on the image capturing unit 40. For the configuration of allowing the light to be selectively incident on the image capturing unit 40 as described above, various structures may be applied without limitation. Alternatively, the filter module 60 may have a form that is embedded in the image capturing unit 40 and mechanically or electrically driven. FIG. 11 is a view illustrating time-division driving of the display panel and the image capturing unit. FIG. 12 is a view illustrating a driving timing having a light blocking section in which sensing data is not incident on the image capturing unit. Referring to FIG. 11, when the image capturing unit 40 is driven, the display panel 100 and the image capturing unit 40 may be synchronized with each other and may each be time-divisionally driven. One frame may be divided into a plurality of sub-frames SF1, SF2, and SF3, and the image capturing unit 40 may receive different color data for each section in the plurality of sub-frames SF1, SF2, and SF3.
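A sketch of the image synthesizing step described above; numpy stands in for whatever hardware or firmware actually performs the merge, which the disclosure does not detail:

    import numpy as np

    def synthesize_rgb(planes):
        """Merge the red, green, and blue planes sequentially output from the
        image capturing unit into one full-color image. Every sensing pixel
        already holds all three color values, so no demosaicing is needed."""
        return np.stack([planes["red"], planes["green"], planes["blue"]], axis=-1)

Each plane corresponds to one of the sub-frame sections described next.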
As an example, the plurality of sub-frames SF1, SF2, and SF3 may include a first sub-frame SF1, a second sub-frame SF2, and a third sub-frame SF3. In a section of the first sub-frame SF1, the image capturing unit40may sense the red data and the display panel100may output the green data and the blue data. Similarly, in the second sub-frame SF2, the image capturing unit40may sense the green data using the filter module60, and the display panel100may output the red data and the blue data. Further, in the third sub-frame SF3, the image capturing unit40may sense the blue data and the display panel100may output the red data and the green data. That is, the display panel100may simultaneously output only two pieces of data from among the red data, the green data, and the blue data for each sub-frame section. According to this configuration, the image data output from the display panel100may be prevented from being introduced into the image capturing unit40by the filter module60, thereby preventing color mixing. Accordingly, the image quality of an image generated by the image capturing unit40may be improved by preventing noise caused by image data emitted from the display panel100. In the embodiment, the method in which one frame is divided into three blocks and the three blocks are time-divisionally driven is exemplified, but the number of block sections may be variously modified. Referring toFIG.12, a blocking section BT during which data is not introduced may be included in the section between the plurality of sub-frames in consideration of a time for which the filter module60moves the filter. Accordingly, noise may be blocked by preventing data from being introduced in the section between the plurality of sub-frames. The blocking section BT may be implemented by closing the shutter of the image capturing unit40, but various methods capable of forming the blocking section may be applied without limitation. FIG.13is a view illustrating a difference between data voltages applied during normal driving and time-division driving of the display panel.FIG.14Ais a view illustrating a data voltage applied to the pixel in the normal driving and luminance.FIG.14Bis a view illustrating a data voltage applied to the pixel in the time-division driving and luminance. Referring toFIG.13, when the image capturing unit40is not driven, the display panel100may operate in a normal mode in which time-division driving is not performed, whereas when the image capturing unit40is driven, the display panel100may perform the time-division driving. Both the first display area DA and the second display area CA of the display panel100may be time-divisionally driven, but the present disclosure is not necessarily limited thereto, and only the second display area CA may be time-divisionally driven. In the normal mode (MODE 1), color data necessary to implement one still image may be continuously output during one frame section. In a time-division mode (MODE 2), color data necessary to implement one still image may be divided for each section of the plurality of sub-frames SF1, SF2, and SF3 and output. Accordingly, in the time-division mode, since the necessary color data is not output in some sub-frame sections, luminance may be smaller than that in the normal mode. 
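A small worked example quantifying that luminance deficit under the three-sub-frame schedule above; the 2/3 duty follows directly from the schedule, while the actual compensation depends on the panel's luminance-voltage characteristic:

    # Each color is suppressed in exactly one of the three sub-frame sections,
    # so its emission time per frame is 2/3 of that in the normal mode.
    subframes = 3
    duty = (subframes - 1) / subframes        # 2/3 per color

    normal_luminance = 100.0                  # arbitrary units
    required_instantaneous = normal_luminance / duty  # 150.0

    print(f"duty per color: {duty:.2f}, instantaneous boost needed: x{1 / duty:.2f}")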
Accordingly, in the embodiment, in the time-division mode, a voltage increment Vdata1 for compensating for the non-output section is added to a data voltage Vdata, which is applied to each pixel in the normal mode, and the summed voltage is applied to the pixel so that the overall luminance may be controlled in the same manner as in the normal mode. Referring toFIGS.14A and14B, in the time-division driving, a range of the data voltage is increased by the extent to which a light emission time of each pixel is reduced, so that it is possible to compensate for the luminance. Accordingly, a user may recognize that luminance LM1 in the normal driving and luminance LM2 in the time-division driving are the same. FIG.15is a block diagram of a display device according to an embodiment of the present disclosure. Referring toFIG.15, the display device according to the embodiment of the present disclosure may include a display panel100, a display panel driving unit2A for writing pixel data of an input image to pixels P of the display panel100, a timing controller130for controlling the display panel driving unit, and a power supply unit150for generating power required for driving the display panel100. The display panel100may include a pixel array that displays the input image on a screen. As described above, the pixel array may be divided into a first display area DA, and a second display area CA having a resolution or PPI lower than that of the first display area DA. Since the first display area DA includes the pixels P of high PPI and thus is larger in size than the second display area CA, most of image information is displayed on the first display area DA. An image capturing unit overlapping the second display area CA may be disposed in a lower portion of the display panel100. Touch sensors may be disposed on the screen of the display panel100. The touch sensors may be disposed on the screen of the display panel in an on-cell type or an add-on type, or may be implemented as in-cell type touch sensors that are embedded in the pixel array. The display panel100may be implemented as a flexible display panel in which the pixels P are disposed on a flexible substrate such as a plastic substrate, a metal substrate, or the like. In a flexible display, the size and shape of the screen may be varied by a method of rolling, folding, and bending the flexible display panel. The flexible display may include a slideable display, a rollable display, a bendable display, a foldable display, and the like. The display panel driving unit may drive the pixels P by applying an internal compensation technique. The display panel driving unit2A may reproduce the input image on the screen of the display panel100by writing the pixel data of the input image to sub-pixels. The display panel driving unit2A may include a first data driving unit110, a second data driving unit111, a first gate driving unit120, and a second gate driving unit123. The display panel driving unit may further include a demultiplexer112disposed between data lines DL and the data driving units110and111. The display panel driving unit2A may operate in a low-speed driving mode under the control of the timing controller130. In the low-speed driving mode, the input image is analyzed, and when the input image does not change for a preset period of time, power consumption of the display device may be reduced. 
In the low-speed driving mode, when a still image is input for a certain period of time or more, a refresh rate of the pixels P is lowered to control a data writing period of the pixels P to be longer, thereby reducing the power consumption. The low-speed driving mode is not limited to when a still image is input. For example, when the display device operates in a standby mode or when a user command or an input image is not input to the display panel driving circuit for a predetermined period of time or more, the display panel driving circuit may operate in the low-speed driving mode. The first data driving unit110may sample pixel data to be written to the pixels of the first display area DA from the pixel data received from the timing controller130. The first data driving unit110may convert the pixel data to be written to the pixels into a gamma compensation voltage using a digital-to-analog converter (hereinafter referred to as “DAC”) and output a data voltage Vdata. The data voltage Vdata output from channels of the first data driving unit110may be applied to the data lines DL connected to the pixels of the first display area DA through the demultiplexer112, or may be directly applied to the data lines DL. The second data driving unit111may receive pixel data to be written to the pixels of the second display area CA from the pixel data, which is received from the timing controller130, as a digital signal. The second data driving unit111may convert the pixel data to be written to the pixels of the second display area CA into a gamma compensation voltage using the DAC to output a data voltage Vdata. The data voltage Vdata output from channels of the second data driving unit111may be applied to the data lines DL connected to the pixels of the second display area CA through the demultiplexer112, or may be directly applied to the data lines DL. The first data driving unit110and the second data driving unit111may be one data driving unit that performs the same function, or may be separate driving units each independently driven. As an example, when both the first display area DA and the second display area CA are time-divisionally driven during an image capturing period, the first data driving unit and the second data driving unit may be one driving unit. However, when only the second display area CA is time-divisionally driven, the first data driving unit110and the second data driving unit111may be separate driving units that operate independently. Each of the first and second data driving units110and111may include a voltage dividing circuit that outputs the gamma compensation voltage. The voltage dividing circuit may divide a gamma reference voltage received from the power supply unit150to generate a gamma compensation voltage for each grayscale, and provide the gamma compensation voltage to the DAC. The DAC may convert the pixel data into the gamma compensation voltage to output the data voltage Vdata. The demultiplexer112may time-divisionally distribute the data voltage Vdata output through the channels of the data driving units110and111to the plurality of data lines DL. Due to the demultiplexer112, the number of the channels of the data driving units110and111may be reduced. However, the present disclosure is not necessarily limited thereto, and the demultiplexer112may be omitted. The first gate driving unit120may be implemented as a gate-in-panel (GIP) circuit formed directly in a bezel area BZ on the display panel100together with a TFT array of the pixel array. 
The first gate driving unit120may output a gate signal to gate lines GL connected to the pixels of the first display area DA under the control of the timing controller130. The first gate driving unit120may shift the gate signal using a shift register to sequentially supply the signals to the gate lines GL connected to the pixels of the first display area DA. A voltage of the gate signal may swing between a gate-off voltage VGH and a gate-on voltage VGL. The gate signal applied to the pixels of the first display area DA may include a pulse of a scan signal (hereinafter referred to as a “scan pulse”) and a pulse of a light emission control signal (hereinafter referred to as an “EM pulse”). The gate lines GL connected to the pixels of the first display area DA may include scan lines to which the scan pulse is applied and EM lines to which the EM pulse is applied. The first gate driving unit120may be disposed in each of left and right bezel areas BZ of the display panel100to supply the gate signals to the gate lines GL using a double feeding method. In the double feeding method, the gate driving units120disposed on both bezels of the display panel100are synchronized by the timing controller130, so that the gate signals may be simultaneously applied at both ends of one gate line. In another embodiment, the first gate driving unit120may be disposed on one of the left and right bezels of the display panel100to supply the gate signals to the gate lines GL using a single feeding method. The first gate driving unit120may include a first-first gate driving unit121and a first-second gate driving unit122. The first-first gate driving unit121may output a scan pulse, shift the scan pulse according to a shift clock, and sequentially supply the scan pulse to the scan lines connected to the pixels of the first display area DA. The first-second gate driving unit122may output an EM pulse, shift the EM pulse according to the shift clock, and sequentially supply the EM pulse to the EM lines connected to the pixels of the first display area DA. The second gate driving unit123may output a gate signal to the gate lines GL connected to the pixels of the second display area CA under the control of the timing controller130. The second gate driving unit123may shift the gate signal using a shift register to sequentially supply the signals to the gate lines GL connected to the pixels of the second display area CA. A voltage of the gate signal may swing between the gate-off voltage VGH and the gate-on voltage VGL. The gate signal applied to the pixels of the second display area CA may include a scan pulse. The gate lines GL connected to the pixels of the second display area CA may include scan lines to which the scan pulse is applied. The second gate driving unit123may be implemented as a gate in array (GIA) circuit disposed in the second display area CA or disposed in at least one of the bezel areas BZ of the display panel100. However, the position of the second gate driving unit123is not particularly limited. In addition, a portion of the second gate driving unit123may be disposed in the second display area CA, and the remaining circuit configuration of the second gate driving unit123may be disposed in the bezel area BZ of the display panel100. The second gate driving unit123may output a scan pulse, shift the scan pulse according to the shift clock, and sequentially supply the scan pulse to the scan lines connected to the pixels of the second display area CA. 
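As a behavioral illustration of the sequential scan described above; software stands in here for the on-panel GIP/GIA shift-register stages, and the voltage levels are assumed values:

    VGL, VGH = -8.0, 12.0  # assumed gate-on / gate-off voltage levels

    def scan_sequence(num_gate_lines):
        """Emulate a shift register: the gate-on voltage VGL is asserted on
        one scan line per horizontal period while all other lines are held
        at the gate-off voltage VGH."""
        for active in range(num_gate_lines):
            yield [VGL if line == active else VGH for line in range(num_gate_lines)]

    # Example: the four states of a 4-line scan
    for state in scan_sequence(4):
        print(state)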
The timing controller130may receive the pixel data of the input image and a timing signal synchronized with the pixel data from a host system. The timing signal may include a vertical synchronization signal Vsync, a horizontal synchronization signal Hsync, a clock CLK, a data enable signal DE, and the like. One period of the vertical synchronization signal Vsync is one frame period. One period of each of the horizontal synchronization signal Hsync and the data enable signal DE is one horizontal period1H. A pulse of the data enable signal DE may be synchronized with one piece of line data to be written to the pixels P of one pixel line. Since a frame period and a horizontal period may be obtained through a method of counting the data enable signal DE, the vertical synchronization signal Vsync and the horizontal synchronization signal Hsync may be omitted. The timing controller130may transmit the pixel data of the input image to the first and second data driving units110and111, and control an operation timing of the display panel driving unit so that the first and second data driving units110and111, the demultiplexer112, and the first and second gate driving units120and123are synchronized with each other. The timing controller130may multiply an input frame frequency by i (here, i is a natural number) to control the operation timing of the display panel driving unit2A at a frame frequency of the input frame frequency x i Hz. The input frame frequency is 60 Hz for National Television Standards Committee (NTSC) and 50 Hz for Phase-Alternating Line (PAL). In order to lower a refresh rate of the pixels P in the low-speed driving mode, the timing controller130may lower the frame frequency into a frequency ranging from 1 Hz to 30 Hz. The timing controller130may generate a data timing control signal for controlling an operation timing of the data driving unit110, a switch control signal for controlling an operation timing of the demultiplexer112, and a gate timing control signal for controlling an operation timing of the first gate driving unit120on the basis of the timing signals Vsync, Hsync, and DE received from the host system1A (seeFIG.5). The gate timing control signal may include a start pulse, a shift clock, a reset signal, an initialization signal, and the like. A voltage level of the gate timing control signal output from the timing controller130may be converted into the gate-off voltage VGH/VEH and the gate-on voltage VGL/VEL through a level shifter (omitted from the drawing) and supplied to the first gate driving unit120. The level shifter may convert a low-level voltage of the gate timing control signal into the gate-on voltage VGL and convert a high-level voltage of the gate timing control signal into the gate-off voltage VGH. The power supply unit150may include a charge pump, a regulator, a buck converter, a boost converter, a programmable gamma integrated circuit (P-GMA IC), and the like. The power supply unit150may adjust a DC input voltage received from the host system to generate power required for driving the display panel driving unit and the display panel100. The power supply unit150may output DC voltages such as the gamma reference voltage, the gate-off voltage VGH/VEH, the gate-on voltage VGL/VEL, a pixel driving voltage ELVDD, a low-potential power voltage ELVSS, an initialization voltage Vini, a reference voltage Vref. The programmable gamma IC may vary the gamma reference voltage according to a register setting value. 
The gamma reference voltage may be supplied to the data driving unit110. The gate-off voltage VGH/VEH and the gate-on voltage VGL/VEL may be supplied to the level shifter and the first gate driving unit120. The pixel driving voltage ELVDD, the low-potential power voltage ELVSS, the initialization voltage Vini, and the reference voltage Vref may be commonly supplied to the pixel circuits through power lines. The pixel driving voltage ELVDD may be set to a voltage higher than the low-potential power voltage ELVSS, the initialization voltage Vini, and the reference voltage Vref. In a mobile device or a wearable device, the timing controller130, the data driving unit110, and the power supply unit150may be integrated into one drive IC (D-IC). The PPI of the second display area CA is lower than the PPI of the first display area DA. For this reason, when the data voltage Vdata applied to the pixels P of the second display area CA is equal to the data voltage Vdata applied to the pixels P of the first display area DA at the same grayscale, the luminance of the second display area CA may be lower than the luminance of the first display area DA. In order to compensate for a luminance difference between the first and second display areas DA and CA, the data voltage Vdata output from the second data driving unit111may be set to have a larger voltage range than the data voltage Vdata output from the first data driving unit110. The data voltage Vdata may be determined according to the gamma compensation voltage. Thus, in order to extend the voltage range of the data voltage Vdata, the output voltage range of the programmable gamma IC may be extended. Further, during time-division driving, the data voltage is not output in some sub-frame sections, and thus the data voltage Vdata may be set to a larger voltage range in the sub-frame section in which the corresponding color data is output to compensate for the data voltage. FIG.16is a view illustrating driving timings during normal driving and time-division driving of the display panel.FIG.17is a view illustrating driving timings of the image capturing unit, the first display area, and the second display area.FIG.18is a view illustrating a state in which a cathode of the first display area and a cathode of the second display area are separated. Referring toFIGS.5and16, when the image capturing unit40is not driven, the display panel100may be driven at a first driving frequency, but when the image capturing unit40is driven, the display panel100and the image capturing unit40may be time-divisionally driven at a second driving frequency lower than the first driving frequency, thereby ensuring a rotation speed margin of the filter module. That is, both the first display area DA and the second display area CA of the display panel100may be time-divisionally driven at the second driving frequency. When a driving signal of the image capturing unit40is received, the timing controller130of the display device may vary the driving frequency, and the display panel driving unit2A may operate the display panel100according to the varied frequency. In this case, the second driving frequency may be lower than the first driving frequency, but may be a speed that cannot be recognized by a person. As an example, the first driving frequency may be 60 Hz and the second driving frequency may be 20 Hz, but the present disclosure is not necessarily limited thereto. 
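Returning to the data voltage compensation described above, a minimal sketch combining the PPI compensation and the time-division compensation; the gamma-law luminance model (L proportional to Vdata raised to 2.2) and the numeric gains are assumptions for illustration, not values from the disclosure:

    GAMMA = 2.2        # assumed luminance-voltage exponent: L ~ Vdata ** GAMMA
    PPI_GAIN = 1.3     # assumed luminance shortfall of CA at equal Vdata
    DUTY = 2.0 / 3.0   # each color emits in two of the three sub-frame sections

    def compensated_vdata(vdata_normal, time_division):
        """Scale the normal-mode data voltage so the second display area CA
        matches the first display area DA in luminance, adding the duty
        compensation while the image capturing unit is driven."""
        boost = PPI_GAIN / DUTY if time_division else PPI_GAIN
        return vdata_normal * boost ** (1.0 / GAMMA)

With these assumed numbers the CA data voltage rises by about 13% in the normal mode and about 36% during time-division driving.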
Referring toFIGS.15and17, when the image capturing unit40is not driven, both the first display area DA and the second display area CA may be driven in a normal mode. However, when the driving signal of the image capturing unit40is input, the first display area DA may be driven in the normal mode, but the second display area CA may be field-sequentially driven at a high speed. That is, in the second display area CA, one frame may be divided into sections of a plurality of sub-frames SF1, SF2, and SF3 to output different color data, and the image capturing unit40may receive the different color data for each of the sections of the plurality of sub-frames SF1, SF2, and SF3. Here, the color data output from the second display area CA may be different from the color data received by the image capturing unit. As an example, in the section of the first sub-frame SF1, the image capturing unit may sense red data and the display panel may output green data and blue data, thereby preventing light outputted from the display panel from being received by the image capturing unit. The second data driving unit111may increase and output the data voltage so that the overall luminance of the second display area CA during the time-division driving is equal to the luminance of the second display area CA in the normal mode. The configuration of independently controlling the data voltage is not particularly limited. As an example, different data voltages may be applied by separating the pixel driving voltage ELVDD or the low-potential power voltage ELVSS. However, the present disclosure is not necessarily limited thereto, and various configurations may all be applied to the method of setting the data voltage differently. According to the embodiment, a first voltage level applied to the pixels of the second display area CA by the second data driving unit111may be greater than a second voltage level applied to the pixels of the first display area DA by the first data driving unit110. When operating in the normal mode, since the PPI of the second display area CA is small and thus the luminance is relatively low, a data voltage level may be increased to compensate for this. In addition, in the time-division driving, the data voltage applied to the pixels of the second display area CA may be increased by adding a first voltage increase for compensating for the low luminance due to the small PPI to a second voltage increase for compensating for the decrease in luminance due to the shortened light emission time due to the time-division driving. According to the embodiment, when the image capturing unit40is not driven, the first display area DA and the second display area CA may equally output image data, but when the image capturing unit40is driven, only the second display area CA may be time-divisionally driven. In addition, light output from the second display area CA may be independently controlled so as not to be introduced into the image capturing unit, and the luminance of the second display area CA may be independently adjusted. FIG.18is a view illustrating an example in which a cathode of a light-emitting element is separated between a low PPI area and a high PPI area so that an independent low-potential power voltage is applied to pixels for each area. Referring toFIG.18, the first display area DA may include a first cathode CAT1. The first cathode CAT1 may be commonly connected to the light-emitting elements (OLEDs) of the pixels disposed in the first display area DA. 
A first low-potential power voltage ELVSS1 may be applied to the first cathode CAT1. The second display area CA may include a second cathode CAT2. The second cathode CAT2 may be separated from the first cathode CAT1. Accordingly, the first cathode CAT1 and the second cathode CAT2 may apply low-potential power voltages ELVSS1 and ELVSS2 having different voltage levels to the pixels for each area. Accordingly, the data voltage level applied to the second display area CA may be independently controlled. However, the present disclosure is not necessarily limited thereto, and the data voltage level may also be independently controlled by separating ELVDD. FIG.19illustrates a first modified example ofFIG.17.FIG.20illustrates a second modified example ofFIG.17.FIG.21illustrates a third modified example ofFIG.17. Referring toFIG.19, when the image capturing unit40is not driven, the first display area DA and the second display area CA may equally output image data, but when the image capturing unit40is driven, both the first display area DA and the second display area CA may be time-divisionally driven. That is, when the image capturing unit is driven, both the first display area DA and the second display area CA may be field-sequentially driven at a high speed. Referring toFIG.20, when the driving signal of the image capturing unit40is input, the first display area DA may be driven in the normal mode while the second display area CA may be time-divisionally driven. Here, the second display area CA may operate at a second driving frequency lower than the current driving frequency. As described above, such a configuration may be achieved by separately driving the first gate driving unit and the second gate driving unit. In the second display area CA, one frame may be divided into sections of a plurality of sub-frames SF1, SF2, and SF3 to output different color data, and the image capturing unit40may receive the different color data for each of the sections of the plurality of sub-frames SF1, SF2, and SF3. Here, the color data output from the second display area CA may be different from the color data received by the image capturing unit. As an example, in the section of a first sub-frame, the image capturing unit may sense red data and the display panel may output green data and blue data. The second data driving unit111may increase and output the data voltage so that the overall luminance of the second display area CA during the time-division driving is equal to the luminance of the first display area DA. Further, referring toFIG.21, when the image capturing unit40is driven, both the first display area DA and the second display area CA are time-divisionally driven, and the driving frequency is slowly varied to ensure a rotation speed margin of the filter module. According to an embodiment, the quality of an image captured by a front camera of a full-screen display can be improved. Effects of the present disclosure will not be limited to the above-mentioned effects and other unmentioned effects will be clearly understood by those skilled in the art from the following claims. It will be apparent to those skilled in the art that various modifications and variations can be made in the display device of the present disclosure without departing from the technical idea or scope of the disclosure. Thus, it is intended that the present disclosure cover the modifications and variations of this disclosure provided they come within the scope of the appended claims and their equivalents. | 55,384 |
11862080 | DETAILED DESCRIPTION It will be appreciated that for simplicity and clarity of illustration, where appropriate, reference numerals have been repeated among the different figures to indicate corresponding or analogous elements. In addition, numerous specific details are set forth in order to provide a thorough understanding of the exemplary embodiments described herein. However, it will be understood by those of ordinary skill in the art that the exemplary embodiments described herein may be practiced without these specific details. In other instances, methods, procedures, and components have not been described in detail so as not to obscure the related relevant feature being described. Also, the description is not to be considered as limiting the scope of the exemplary embodiments described herein. The drawings are not necessarily to scale, and the proportions of certain parts may be exaggerated to better illustrate details and features of the present disclosure. The term “comprising” when utilized, means “including, but not necessarily limited to”; it specifically indicates open-ended inclusion or membership in the so-described combination, group, series, and the like. The disclosure is illustrated by way of example and not by way of limitation in the figures of the accompanying drawings in which like references indicate similar elements. It should be noted that references to “an” or “one” embodiment in this disclosure are not necessarily to the same embodiment, and such references can mean “at least one”. The term “circuit” is defined as an integrated circuit (IC) with a plurality of electric elements, such as capacitors, resistors, amplifiers, and the like. In colorimetry, metamerism is a perceived matching of colors with different (nonmatching) spectral power distributions. Colors that can be made to match this way are called metamers. A spectral power distribution describes the proportion of total light given off (emitted, transmitted, or reflected) by a color sample at each visible wavelength; it defines complete information about the light coming from the sample. However, the human eye contains only three color receptors (three types of cone cells), which means that all colors are reduced to three sensory quantities, called the tristimulus values. Metamerism occurs because each type of cone responds to the cumulative energy from a broad range of wavelengths, so that different combinations of light across all wavelengths can produce an equivalent receptor response and the same tristimulus values or color sensation. In color science, the set of sensory spectral sensitivity curves is numerically represented by color matching functions. A method of adjusting two colors to be visually identical is called color matching. In a color matching task, a value of the three primary colors (i.e., red, green, and blue) required to match a color to be measured is called the tristimulus value. International Commission on Illumination (CIE) proposed the CIE 1931 XYZ system, which established a new chromaticity system with the three imaginary primary colors defined as X, Y, and Z. The CIE 1931 XYZ system is suitable for viewing angles from 1 degree to 4 degrees. In order to adapt to color measurement in wide field of view, CIE additionally proposed the CIE1964 supplementary standard chromaticity system in 1964, which was obtained through observation and testing by multiple observers over a 10-degree field of view. 
Tristimulus color is based on the three primary color receptors of the human eye and is quantified into the tristimulus values X, Y, and Z. The tristimulus values X, Y, and Z are superimposed from the object reflection spectrum, the illumination spectrum, and the CIE standard observer spectral tristimulus value curves. The spectral power distributions of two colors that human eyes perceive as the same (that is, two colors whose tristimulus values are the same) can be the same or different. Metamerism is a phenomenon in which the colors of two objects appear the same under a particular light source for a particular standard observer, but the objects actually have different spectral power distributions. In other words, metamerism produces color that has different spectral power distributions for a specific standard observer and a specific light source but the same tristimulus values. FIG. 1 shows an arrangement of pixels P of a display panel 100 according to an embodiment of the present disclosure. The display panel 100 includes a plurality of pixels P. Each pixel P includes a plurality of main sub-pixels 10 and a backup sub-pixel 20. For any one of the pixels P, if there is no failure of the main sub-pixels, the backup sub-pixel 20 does not work, and the plurality of main sub-pixels 10 cooperate to emit light, so that the pixel P outputs a target color; if there is a failed main sub-pixel, the backup sub-pixel 20 cooperates with the non-failed main sub-pixels (i.e., normal main sub-pixels) to emit light, so that the pixel P outputs metamerized light of the target color. The display panel 100 utilizes the phenomenon of metamerism and adjusts the spectrum of each pixel P, so that the human eye perceives the same color regardless of whether there is a failed main sub-pixel in the pixel P. Compared with the method of directly removing the failed light-emitting element and re-bonding a working light-emitting element as a replacement, the difficulty of repair is reduced, and time is saved. In addition, compared with the method of preparing a backup light-emitting element for each light-emitting element, the display panel 100 reduces the number of light-emitting elements and thus reduces the cost. As shown in FIG. 1, the pixels P are arranged in a plurality of columns along a first direction D1 and a plurality of rows along a second direction D2. The first direction D1 intersects the second direction D2; specifically, in FIG. 1, the first direction D1 is perpendicular to the second direction D2. Each pixel P includes three main sub-pixels 10 and one backup sub-pixel 20. The three main sub-pixels 10 are a red sub-pixel 12 for emitting red light, a green sub-pixel 14 for emitting green light, and a blue sub-pixel 16 for emitting blue light. The red sub-pixel 12, the green sub-pixel 14, the blue sub-pixel 16, and the backup sub-pixel 20 in each pixel P are arranged in a 2×2 matrix. The red sub-pixel 12 and the green sub-pixel 14 are arranged in a first row of each pixel P, and the blue sub-pixel 16 and the backup sub-pixel 20 are arranged in a second row of each pixel P. In the first row, red sub-pixels 12 alternate with green sub-pixels 14, and in the second row, blue sub-pixels 16 alternate with backup sub-pixels 20. In one column, red sub-pixels 12 alternate with blue sub-pixels 16, and in the next column, green sub-pixels 14 alternate with backup sub-pixels 20.
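Returning to the colorimetric basis above: numerically, the metamerism condition reduces to comparing integrals of spectral power distributions weighted by the standard observer curves. A minimal sketch follows; the color matching function arrays here are placeholders, and a real implementation would load the tabulated CIE 1931 2-degree (or CIE 1964 10-degree) data:

    import numpy as np

    wl = np.arange(380, 781, 5, dtype=float)   # wavelengths in nm
    # Placeholder color matching functions; substitute the tabulated
    # CIE standard observer data in practice.
    xbar = ybar = zbar = np.ones_like(wl)

    def tristimulus(spd):
        """X, Y, Z as the spectral power distribution integrated against the
        standard observer curves."""
        return np.array([np.trapz(spd * cmf, wl) for cmf in (xbar, ybar, zbar)])

    def is_metamer(spd_a, spd_b, tol=1e-6):
        """Two different SPDs are metamers if their tristimulus values match."""
        return np.allclose(tristimulus(spd_a), tristimulus(spd_b), atol=tol)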
In other embodiments, the arrangement of the main sub-pixels 10 and the backup sub-pixel 20 is not limited to that shown in FIG. 1; other arrangements may be used as long as the sub-pixels form a 2×2 matrix. In addition, each pixel P is not limited to the main sub-pixels of the above three colors, and may also include main sub-pixels that emit light of other colors. For example, each pixel P can include white sub-pixels for emitting white light, yellow sub-pixels for emitting yellow light, cyan sub-pixels for emitting cyan light, and so on. In some embodiments, the number of the main sub-pixels 10 and the backup sub-pixels 20 in one pixel P is not limited to that shown in FIG. 1, and the arrangement of the sub-pixels in each pixel P is not limited to a 2×2 matrix. For example, each pixel P may include six sub-pixels in a 2×3 or 3×2 arrangement. In some embodiments, the main sub-pixel 10 includes a main light-emitting element (not shown) and a color conversion layer on a light-emitting side of the main light-emitting element. The light emitted from the main light-emitting element is converted into a desired color by passing through the color conversion layer. For example, the main light-emitting element is an inorganic light-emitting diode (LED) that emits blue light or an organic light-emitting diode (OLED) that emits blue light, and a material of the color conversion layer is quantum dots or phosphors. The material of the color conversion layer in the red sub-pixel 12 converts the blue light emitted by the main light-emitting element into red light, the material of the color conversion layer in the green sub-pixel 14 converts the blue light into green light, and the material of the color conversion layer in the blue sub-pixel 16 converts the blue light into blue light of a desired wavelength band. In other embodiments, the main sub-pixel 10 does not include a color conversion layer. For example, the red sub-pixel 12 includes an LED for emitting red light, the green sub-pixel 14 includes an LED for emitting green light, and the blue sub-pixel 16 includes an LED for emitting blue light. As shown in FIG. 2, the backup sub-pixel 20 includes a backup light-emitting element 22 that emits blue light and a color conversion layer 24. The color conversion layer 24 is located on a light-emitting side of the backup light-emitting element 22 for converting the blue light emitted by the backup light-emitting element 22 into white light. The display panel 100 enables the pixel P to output the metamerized light of the target color by adjusting the spectral power of the white light converted from the blue light. The backup light-emitting element 22 may be an OLED that emits blue light or an LED that emits blue light, and a material of the color conversion layer 24 is quantum dots or phosphors. In other embodiments, the backup sub-pixel 20 is not limited to a sub-pixel that emits white light, and can be a sub-pixel that emits light of another color, for example, a yellow sub-pixel that emits yellow light, a cyan sub-pixel that emits cyan light, and the like. In some embodiments, both the main sub-pixels 10 and the backup sub-pixel 20 include a color conversion layer, the color conversion layer of the main sub-pixels 10 and the color conversion layer of the backup sub-pixel 20 are of the same type, and the main light-emitting element and the backup light-emitting element 22 are of the same type.
For example, in the main sub-pixel 10 and the backup sub-pixel 20, the color conversion layers are both quantum dots or both phosphors, and the main light-emitting element and the backup light-emitting element 22 are all OLEDs. In some embodiments, the size of the LED or OLED is less than 100 microns; that is, the light-emitting elements in the main sub-pixels 10 and the backup sub-pixel 20 are micro LEDs or micro OLEDs. A display panel with micro LEDs as the light-emitting elements has the characteristics of high resolution, low power consumption, high brightness, high contrast, high color saturation, fast response speed, thinness, and long life. Compared with the method of preparing a redundant light-emitting element for each light-emitting element, the display panel 100 reduces the number of light-emitting elements, thereby reducing the area of the driving circuit, improving yield, and reducing cost. In addition, traditional repair techniques for light-emitting elements (e.g., micro LEDs), such as ultraviolet irradiation repair technology, laser fusing repair technology, selective pick-up repair technology, selective laser repair technology, and so on, include at least a de-bonding process of removing the LEDs from a driving substrate, a cleaning process, and a re-bonding process of fixing the LEDs on the driving substrate, which is complicated. The display panel of the present disclosure uses the phenomenon of metamerism to adjust the spectrum of the pixels P, so that human eyes perceive the same color regardless of whether there is a failed main sub-pixel in each pixel P. Therefore, no secondary transfer is required, the repair process is simplified, and the repair cost of light-emitting elements (such as micro LEDs) is reduced. In FIGS. 3A to 3D, a pixel P including one red sub-pixel 12, one green sub-pixel 14, one blue sub-pixel 16, and one backup sub-pixel 20 emitting white light is taken as an example for description. In FIGS. 3A to 3D, the curve marked "T" is a target output spectral power curve, and the curves marked "R", "G", "B", and "W" are actual output spectral power curves of the red sub-pixel 12, the green sub-pixel 14, the blue sub-pixel 16, and the backup sub-pixel 20, respectively. In FIGS. 3A to 3D, the horizontal axis represents the wavelength, and the vertical axis represents the intensity of spectral power. In FIG. 3A, the failed main sub-pixel 10 is the blue sub-pixel 16. In FIG. 3B, the failed main sub-pixel 10 is the green sub-pixel 14. In FIG. 3C, the failed main sub-pixel 10 is the red sub-pixel 12. In FIG. 3D, the three main sub-pixels 10 are all normal sub-pixels that emit light normally. That is, for the pixel P shown in FIG. 3D, the backup sub-pixel 20 does not emit light, and the three main sub-pixels 10 cooperate to emit light, so that the pixel P outputs the target color. For the pixels P shown in FIG. 3A to FIG. 3C, each pixel P has one failed sub-pixel that does not emit light, and the backup sub-pixel 20 cooperates with the non-failed main sub-pixels 10 to emit light, so that the pixel P outputs the metamerized light of the target color. The term "failed sub-pixel" means a sub-pixel that is not working normally for some reason, and such a sub-pixel may also be referred to as a dead sub-pixel or a faulty sub-pixel. Specifically, in FIG. 3A, the blue sub-pixel 16 is a dead sub-pixel, and the red sub-pixel 12, the green sub-pixel 14, and the backup sub-pixel 20, which emits white light, cooperate to output the metamerized light of the target color.
That is, although the spectral power distributions in FIG. 3A and FIG. 3D are different, for a specific standard observer and a specific illuminant the observer has exactly the same color perception (i.e., the same tristimulus values). Similarly, in FIG. 3B, the green sub-pixel 14 is a dead sub-pixel, and the blue sub-pixel 16, the red sub-pixel 12, and the backup sub-pixel 20, which emits white light, cooperate to output the metamerized light of the target color. In FIG. 3C, the red sub-pixel 12 is a dead sub-pixel, and the blue sub-pixel 16, the green sub-pixel 14, and the backup sub-pixel 20 cooperate to output the metamerized light of the target color. That is, in FIGS. 3A to 3D, although the spectral power distributions of the pixels P are different, the actual output spectral power curves of the pixels P have the same tristimulus values, and the human eye perceives the same color. FIGS. 3A to 3D take a pixel P with one dead sub-pixel as an example. In other embodiments, if one pixel P has more than one dead sub-pixel, the actual spectral power curve of the backup sub-pixel 20 can still be adjusted to cooperate with the main sub-pixels 10 that have not failed, to output the metamerized light of the target color. In addition, the display panel 100 can adjust the intensity of the spectral power of the white light emitted by each backup sub-pixel 20, so that the pixel P can continue to output light of the target color as metamerized light. That is, adjusting the spectral power output of a pixel P with a dead sub-pixel against the standard human eye spectral sensitivity curves (also known as the CIE standard observer spectral tristimulus curves) is equivalent to directly adjusting the pixel P to obtain the target tristimulus values X, Y, and Z, improving product yield. In addition, the display panel 100 further includes a driving substrate (not shown). The main light-emitting elements and backup light-emitting elements are disposed on the driving substrate and are electrically connected to the driving substrate to emit light under the control of the driving substrate. The driving substrate is, for example, a thin-film transistor substrate. In addition, the display panel 100 further includes a controller (such as a display driver IC, shown in FIG. 1) on the driving substrate to control the main sub-pixels 10 of each pixel P, or the working main sub-pixels 10 together with the backup sub-pixel 20, to display the target color. FIG. 4 shows an arrangement of pixels P of a display panel 200 according to another embodiment of the present disclosure. The difference between the display panel 200 and the display panel 100 in FIG. 1 lies in the structure of the backup sub-pixels. In FIG. 4, the light emitted by each backup sub-pixel 20 can be light of one of the three primary colors (i.e., red, green, and blue), a combination of any two of the three primary colors, or a mixture of all three primary colors. As shown in FIG. 5, the backup sub-pixel 20 includes three backup light-emitting elements 22 for emitting blue light (i.e., first, second, and third backup light-emitting elements 221, 222, and 223), a red conversion block 262, a green conversion block 264, and a blue conversion block 266. The red conversion block 262, the green conversion block 264, and the blue conversion block 266 are located on a light-emitting side of the three backup light-emitting elements 22 for converting the blue light emitted by the corresponding light-emitting elements into red light, green light, and blue light, respectively.
Specifically, the red conversion block262is on the first backup light-emitting element221, the green conversion block264is on the second backup light-emitting element222, and the blue conversion block266is on the third backup light-emitting element223. The display panel200independently adjusts the spectral power distribution of the red light converted from the blue light, the spectral power distribution of the green light converted from the blue light, and the spectral power distribution of the blue light converted from the blue light, so that each pixel P outputs metamerized light of the target color. This is possible because the spectral power distribution of each primary color light (i.e., red light, green light, and blue light) in each pixel P can be adjusted independently. As shown inFIG.4, the red conversion block262, the green conversion block264, and the blue conversion block266are arranged in sequence along the first direction D1. A size of each backup sub-pixel20is approximately the same as that of one main sub-pixel10. In addition, a size of each backup light-emitting element22is approximately the same as that of the corresponding color conversion block (e.g., the red conversion block262, the green conversion block264, or the blue conversion block266). The materials of the red conversion block262, the green conversion block264, and the blue conversion block266are, for example, quantum dots or phosphors. InFIGS.6A to6D, a pixel P including one red sub-pixel12, one green sub-pixel14, one blue sub-pixel16, and one backup sub-pixel20shown inFIGS.4and5is taken as an example for description. InFIGS.6A to6D, the curve marked “T” is a target output spectral power curve, and the curves marked “R”, “G”, “B”, and “W” are the actual spectral power curves of the red sub-pixel12, the green sub-pixel14, the blue sub-pixel16, and the backup sub-pixel20, respectively. InFIG.6A, the failed main sub-pixel10is the blue sub-pixel16. InFIG.6B, the failed main sub-pixel10is the green sub-pixel14. InFIG.6C, the failed main sub-pixel10is the red sub-pixel12. InFIG.6D, the three main sub-pixels10are all normal sub-pixels that can emit light normally. That is, for the pixel P shown inFIG.6D, the backup sub-pixel20does not need to work or emit light, and the three main sub-pixels10cooperate to emit light, so that the pixel P outputs the target color. For the pixels P shown inFIG.6AtoFIG.6C, each pixel P has one failed sub-pixel not emitting light, and the backup sub-pixel20cooperates with the remaining main sub-pixels10to emit light, so that the pixel P outputs the metamerized light of the target color. Specifically, inFIG.6A, the blue sub-pixel16is a dead sub-pixel, and the red sub-pixel12, the green sub-pixel14, and the backup sub-pixel20cooperate to output the metamerized light of the target color. That is, although the spectral power distributions ofFIG.6AandFIG.6Dare different for a specific standard observer and a specific illuminator, the observer has exactly the same color perception (i.e., they have the same tristimulus values). Similarly, inFIG.6B, the green sub-pixel14is a dead sub-pixel, and the blue sub-pixel16, the red sub-pixel12, and the backup sub-pixel20cooperate to output metamerized light of the target color. InFIG.6C, the red sub-pixel12is a dead sub-pixel, and the blue sub-pixel16, the green sub-pixel14, and the backup sub-pixel20cooperate to output metamerized light of the target color.
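A hedged numeric sketch of the compensation just described: if each remaining light source's tristimulus contribution is assumed linear in its drive level, the drive levels of the backup red, green, and blue conversion blocks follow from a 3×3 linear system. Every number below is invented for illustration.

```python
import numpy as np

# Target color of the pixel, as CIE XYZ (invented values).
target_xyz = np.array([41.2, 35.8, 17.9])

# Combined XYZ contribution of the main sub-pixels that still work,
# e.g. blue and green when the red sub-pixel is dead (invented values).
working_xyz = np.array([21.1, 22.4, 9.9])

# Columns: XYZ contribution of the backup red, green, and blue
# conversion blocks at unit drive level (invented values).
backup_basis = np.array([[8.0, 4.0, 0.2],
                         [3.0, 7.0, 0.8],
                         [1.5, 0.5, 9.0]])

# Solve backup_basis @ drive = target - working for the backup drive levels.
drive = np.linalg.solve(backup_basis, target_xyz - working_xyz)
drive = np.clip(drive, 0.0, None)   # physical drive levels are non-negative
```

The point is only that three independently adjustable conversion blocks give enough degrees of freedom to reach any attainable target XYZ; an actual panel would fold such a solve into per-pixel calibration data.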
That is, inFIGS.6A to6D, although the spectral power distributions of the pixels P are different, the actual output spectral power curves of the pixels P have the same tristimulus values, and the human eyes perceive the same color. InFIGS.6A to6D, each pixel P has one dead sub-pixel as an example. In other embodiments, if one pixel P has more than one dead sub-pixel, the actual spectral power curve of the backup sub-pixel20can still be adjusted to cooperate with the non-failed main sub-pixels10to output metamerized light of the target color. In addition, although “W” is marked inFIG.6AtoFIG.6D, the light emitted by the backup sub-pixel20is not necessarily white light. That is, the light emitted by each backup sub-pixel20can be light of one of the three primary colors (i.e., red, green, or blue) or a combination of any two of the three primary colors, and the pixel P can still output light of the target color. The display panel200independently adjusts the spectral power distribution of the red light converted from the blue light, the spectral power distribution of the green light converted from the blue light, and the spectral power distribution of the blue light converted from the blue light, so that each pixel P outputs metamerized light of the target color. That is, directly adjusting the spectral power output of the pixel P with a dead sub-pixel with respect to the standard human eye spectral sensitivity (also known as the CIE standard observer spectral tristimulus curves) is equivalent to directly adjusting the pixel P to obtain the target tristimulus values X, Y, and Z, improving product yield. In some embodiments, the display panel100and the display panel200further include a color filter layer (not shown), so that the uniformity of chromaticity of the main sub-pixels10or the backup sub-pixels20is better. For example, the three main sub-pixels10include three light-emitting elements for emitting red light, green light, and blue light, respectively. The color filter layer includes a red color filter block, a green color filter block, and a blue color filter block, each corresponding to one light-emitting element. The red color filter block allows red light of a desired wavelength band to pass, while filtering out undesired wavelength bands. Correspondingly, the green color filter block allows green light of a desired wavelength band to pass, while filtering out undesired wavelength bands, and the blue color filter block allows blue light of a desired wavelength band to pass, while filtering out undesired wavelength bands. In addition, when the main sub-pixels10include the color conversion layer24, the color filter layer is on a side of the color conversion layer24away from the main light-emitting element. Similarly, when the backup sub-pixels20include a color conversion block, the color filter layer is on a side of the color conversion block away from the backup light-emitting element22. FIG.7shows a flowchart of a method for driving a display panel according to an embodiment of the present disclosure. The display panel includes a plurality of pixels, each of the pixels including a plurality of main sub-pixels and a backup sub-pixel, and the driving method of the display panel includes the following steps. In block S1, the presence of a failed main sub-pixel in each pixel is detected. For any pixel in which all main sub-pixels are working, block S3 is performed; otherwise, block S2 is performed.
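Before the blocks are detailed in the next paragraph, here is a minimal sketch of this S1/S2/S3 dispatch, assuming a hypothetical per-pixel record of failed main sub-pixels; the names and data layout are illustrative only, not the patent's.

```python
from dataclasses import dataclass, field

@dataclass
class Pixel:
    """Hypothetical per-pixel state: which mains failed, backup on/off."""
    failed_mains: set = field(default_factory=set)  # e.g. {"R"} if red is dead
    backup_on: bool = False

def drive_frame(pixels):
    lut = {}                                # look-up table of dead-sub-pixel pixels
    for idx, p in enumerate(pixels):
        if p.failed_mains:                  # block S1: a failure is detected
            lut[idx] = set(p.failed_mains)  # record the pixel in the LUT
            p.backup_on = True              # block S2: backup + surviving mains
        else:
            p.backup_on = False             # block S3: mains only, backup dark
    return lut
```

For example, drive_frame([Pixel({"G"}), Pixel()]) returns {0: {"G"}}: only the first pixel's backup sub-pixel is switched on.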
In block S2, for the pixel having a failed main sub-pixel, the backup sub-pixel cooperates with the normal main sub-pixels to output metamerized light of a target color. In block S3, for the pixel without a failed main sub-pixel, the backup sub-pixel does not emit light, and the normal main sub-pixels cooperate to emit the target color. In block S1, if it is detected that a pixel with a dead sub-pixel exists, the method further includes recording the pixel with the dead sub-pixel in a look-up table. When block S2 is performed, the backup sub-pixel of each pixel recorded in the look-up table is caused to emit light. In block S1, if it is detected that no pixel P has a dead sub-pixel, block S2 is not executed. In all the pixels P, the backup sub-pixels do not emit light, and each pixel P outputs a correct target color. In some embodiments, the backup sub-pixel includes a backup light-emitting element that emits blue light and a color conversion layer. The color conversion layer is on a light-emitting side of the backup light-emitting element, and is used for converting the blue light emitted by the backup light-emitting element into white light. In block S2, the display panel enables the pixels to output the metamerized light of the target color by adjusting the spectral power distribution of the white light converted from the blue light. In some embodiments, the backup sub-pixel includes three backup light-emitting elements emitting blue light, a red conversion block, a green conversion block, and a blue conversion block. The red conversion block, the green conversion block, and the blue conversion block are respectively located on the light-emitting sides of the three backup light-emitting elements for converting the blue light emitted by the corresponding backup light-emitting elements into red light, green light, and blue light, respectively. In block S2, the display panel independently adjusts the spectral power distribution of the red light converted from the blue light, the spectral power distribution of the green light converted from the blue light, and the spectral power distribution of the blue light converted from the blue light. The pixels thus output metamerized light of the target color. The driving method of the display panel utilizes the phenomenon of metamerism and adjusts the output of the pixel, so that the human eye perceives the same color even when a pixel has a failed main sub-pixel. Compared with the method of directly removing the failed light-emitting element and picking up a qualified light-emitting element for replacement, the difficulty of repair is reduced, and time is saved. In addition, compared with the method of preparing a redundant light-emitting element for each light-emitting element, the display panel relatively reduces the number of light-emitting elements and reduces the cost. Moreover, products of high-specification or low-specification brightness can be provided according to whether there are dead sub-pixels in the pixels. Thus, an originally defective product does not need to be scrapped and remains saleable. It is to be understood that, even though information and advantages of the present exemplary embodiments have been set forth in the foregoing description, together with details of the structures and functions of the present exemplary embodiments, the disclosure is illustrative only.
Changes may be made in detail, especially in matters of shape, size, and arrangement of parts within the principles of the present exemplary embodiments to the full extent indicated by the plain meaning of the terms in which the appended claims are expressed. | 27,838 |
11862081 | DETAILED DESCRIPTION For clearer description of the objectives, technical solutions, and advantages in the present disclosure, the embodiments of the present disclosure are described in further detail hereinafter with reference to the accompanying drawings. With the development of display technologies, existing notch display designs and water-drop display designs gradually cannot meet users' requirements of high screen-to-body ratios of display panels, and a series of display panels provided with a light-transmitting display region have emerged. In this type of display panel, hardware such as a photosensitive sensor (for example, a camera) may be disposed in the light-transmitting display region, and therefore it is not necessary to punch a hole. Therefore, while the practicability of display panels is ensured, real full-screen displays become possible. In the related art, a display panel with an under-screen camera generally includes a first display region for normal display and a second display region for arranging the camera. The second display region generally includes a plurality of light-emitting elements and a plurality of pixel circuits. Each pixel circuit is connected to one light-emitting element, and is used for driving the light-emitting element to emit light. The pixel circuit and the light-emitting element connected to each other are overlapped in a direction perpendicular to the display panel. Because, in the related art, the second display region further includes the pixel circuits, the light transmittance of the second display region is poor, and correspondingly, the display panel has a poor display effect. Embodiments of the present disclosure provide a display panel. Under the premise of ensuring reliable driving of the light-emitting elements in a light-transmitting display region and ensuring good light transmittance in the light-transmitting display region, the display panel does not reduce the number of pixels in a non-light-transmitting display region, thereby ensuring that the display effect of the non-light-transmitting display region is good. FIG.1is a schematic structural diagram of a display panel according to an embodiment of the present disclosure. As shown inFIG.1, the display panel may include a base substrate01. The base substrate01may include a first display region A1and a second display region A2. The first display region A1may at least be partially disposed around the second display region A2. For example, the second display region A2shown inFIG.1is located at a top center position of the base substrate01. Correspondingly, four sides of the rectangular first display region A1may all surround the second display region A2. That is, the second display region A2may be surrounded by the first display region A1. In some embodiments, the second display region A2may alternatively not be located at the top center position of the base substrate01shown inFIG.1, but at another position. For example, with reference toFIG.1, the second display region A2may be located at an upper left corner position or an upper right corner position of the base substrate01. In combination with another display panel shown inFIG.2, it can be seen that the display panel may further include a plurality of first pixel circuits10, a plurality of second pixel circuits20, and a plurality of first light-emitting elements30that are disposed in the first display region A1, and a plurality of second light-emitting elements40that are disposed in the second display region A2.
The plurality of second pixel circuits20may be disposed at intervals among the plurality of first pixel circuits10. At least one first pixel circuit10of the plurality of first pixel circuits10may be connected to at least one first light-emitting element30of the plurality of first light-emitting elements30, and an orthographic projection of the at least one first pixel circuit10on the base substrate01is at least partially overlapped with an orthographic projection of the at least one first light-emitting element30on the base substrate01. The at least one first pixel circuit10may be used for providing a drive signal for the connected first light-emitting element30to drive the first light-emitting element30to emit light. At least one second pixel circuit20of the plurality of second pixel circuits20may be connected to at least one second light-emitting element40of the plurality of second light-emitting elements40by a conductive trace L1. The at least one second pixel circuit20may be used for providing a drive signal for the connected second light-emitting element40to drive the second light-emitting element40to emit light. The second light-emitting element40and the second pixel circuit20are located in different regions. Therefore, as shown inFIG.2, there is no overlapping portion between an orthographic projection of the at least one second pixel circuit20on the base substrate01and an orthographic projection of the at least one second light-emitting element40on the base substrate01. Optionally, in the embodiments of the present disclosure, the first display region A1may be disposed as a non-light-transmitting display region, and the second display region A2may be disposed as a light-transmitting display region. That is, the first display region A1described in the embodiments of the present disclosure may be not light-transmitting, and the second display region A2may be light-transmitting. In this way, there is no need to punch a hole in the display panel, and a required hardware structure such as a photosensitive sensor may be directly disposed in the second display region A2, to provide a solid basis for implementing a real full-screen display. In addition, since only the light-emitting elements are included in the second display region A2, and the pixel circuits are not included, the light transmittance of the second display region A2can be ensured to be good. In summary, the embodiments of the present disclosure provide a display panel. The display panel includes a base substrate including a first display region and a second display region. Since pixel circuits for driving light-emitting elements in the second display region are only disposed in the first display region but not disposed in the second display region, the light transmittance of the second display region is ensured to be good. Correspondingly, the display panel described in the embodiments of the present disclosure has a good display effect. FIG.3is a schematic structural diagram of another display panel by taking the display panel shown inFIG.2as an example. With reference toFIG.3, it can be further seen that the first display region A1not only includes a plurality of pixels, but also includes a plurality of columns of second pixel circuits20, and the second display region A2includes only a plurality of second light-emitting elements40. The pixel is a structure including a pixel circuit and light-emitting elements. 
Taking the first pixel circuit10and the first light-emitting element30as an example, it can be seen fromFIG.3that each pixel includes one red sub-pixel R, two green sub-pixels G1and G2, and one blue sub-pixel B. The red sub-pixel R and the blue sub-pixel B are located in the same column. The two green sub-pixels G1and G2are located in the same column. In some embodiments, the pixel may also include sub-pixels of another color and another quantity. An arrangement mode of the sub-pixels is not limited to the structure shown inFIG.3. For example, each pixel may include only one red sub-pixel R, one blue sub-pixel B, and one green sub-pixel G. Optionally, in the embodiments of the present disclosure, the plurality of first pixel circuits10may be electrically connected to the plurality of first light-emitting elements30in one-to-one correspondence. That is, each first pixel circuit10may be connected to one first light-emitting element30, and different first pixel circuits10are connected to different first light-emitting elements30. Therefore, combined with the display panel shown inFIG.2, an orthographic projection of each first pixel circuit10on the base substrate01is at least partially overlapped with an orthographic projection of the connected first light-emitting element30on the base substrate01. Similarly, the plurality of second pixel circuits20may be electrically connected to the plurality of second light-emitting elements40in one-to-one correspondence. In addition, an orthographic projection of each second pixel circuit20on the base substrate01is not overlapped with an orthographic projection of the connected second light-emitting element40on the base substrate01. Optionally, a density of the plurality of second light-emitting elements40located in the second display region A2may be the same as a density of the plurality of first light-emitting elements30located in the first display region A1. That is, the same quantity of light-emitting elements are included per inch in the first display region A1and the second display region A2. That is, the first display region A1(that is, the main display region) does not include two subregions with different pixel densities, such that compared with the related art, when displaying a picture, the first display region A1does not have a bright-dark border line, and the display effect of the display panel is good. Taking the display panel shown inFIG.2as an example,FIG.4shows a structural layout of the display panel. With reference toFIG.4, it can be seen that a resolution of the first display region A1may be greater than a resolution of the second display region A2. That is, the area of the first display region A1is larger than the area of the second display region A2, and the quantity of the light-emitting elements included in the first display region A1is greater than the quantity of the light-emitting elements included in the second display region A2. In some embodiments, the resolution of the first display region A1may be less than or equal to the resolution of the second display region A2. For example, the area of the first display region A1may be the same as the area of the second display region A2, and the quantity of the light-emitting elements included in the first display region A1may also be the same as the quantity of the light-emitting elements included in the second display region A2.
Alternatively, the area of the first display region A1may be less than the area of the second display region A2, and the quantity of the light-emitting elements included in the first display region A1may be less than the quantity of the light-emitting elements included in the second display region A2. Optionally,FIG.5is a partial schematic enlarged view of the display panel shown inFIG.4. Combined withFIG.4andFIG.5, it can be seen that the size of the first light-emitting element30may be greater than the size of the second light-emitting element40. That is, an anode of the light-emitting element in the second display region A2is smaller than an anode of the light-emitting element in the first display region A1. In this way, the light transmittance of the second display region A2being greater than the light transmittance of the first display region A1can be ensured. Moreover, the shape and size of an anode of the second light-emitting element40may be further optimized to ensure better light transmittance. For example, combined with the display panel shown inFIG.3, the anode of the shown second light-emitting element40is elliptical. Optionally, to further ensure that the light transmittance of the second display region A2is good, the conductive trace L1described in the embodiments of the present disclosure may be a transparent conductive trace. For example, the conductive trace L1may be made of a transparent material such as indium tin oxide (ITO) or indium gallium zinc oxide (IGZO). Assuming that the conductive trace L1is made of ITO, the conductive trace L1may also be referred to as an ITO trace. All the following embodiments are described by taking the conductive trace L1being an ITO trace as an example. Optionally, in the embodiments of the present disclosure, the base substrate01is provided with a light-transmitting display region, that is, the second display region A2. Therefore, as shown inFIG.6, the structure of a photosensitive sensor50(for example, a camera) in a display module included in a display device may be directly disposed in the second display region A2. That is, there is no need to additionally punch a hole in the display panel. In this way, a solid basis is provided for implementing a full-screen display panel. Optionally, the second display region A2may be rectangular, and the area of an orthographic projection of the photosensitive sensor50on the base substrate01may be less than or equal to the area of an incircle of the second display region A2. That is, the size of a region in which the photosensitive sensor50is located may be less than or equal to the size of the incircle of the second display region A2. For example, combined with the display panel shown inFIG.6, the size of a region in which the photosensitive sensor50is located is equal to the size of an incircle Y0 of the second display region A2. That is, the shape of the region in which the photosensitive sensor50is located may be circular. Correspondingly, the region in which the photosensitive sensor50is located may also be referred to as a light hole. In some embodiments, the second display region A2may also be in another shape besides the rectangular shape, such as a circular shape or an elliptical shape. In the related art, the size (pitch) of a pixel circuit (including the first pixel circuit10and the second pixel circuit20) is the same as the size of the first light-emitting element30. For example, a typical width is about 30 μm to 32 μm and a length is about 60 μm to 65 μm.
In the embodiments of the present disclosure, to provide sufficient space for the arrangement of the second pixel circuits20without reducing the number of pixels in the first display region A1, the pixel circuits may be compressed in a second direction X2 (for example, an extension direction of a gate line, which may also be referred to as a transverse direction), such that a width of the pixel circuit in the second direction is less than the width of the first light-emitting element30. Alternatively, the width of the first light-emitting element30in the second direction may be made greater than the width of the pixel circuit by stretching the first light-emitting element30in the second direction. In this way, on the premise that the size of the base substrate01is kept the same, more regions can be provided in the first display region A1, and correspondingly, second pixel circuits20dedicated to driving the second light-emitting elements40located in the second display region A2can be disposed at the more regions. For example, a difference between the width of each pixel circuit and the width of the first light-emitting element30may be about 4 μm. Taking a compressed pixel circuit and a width difference of 4 μm as an example,FIG.7shows structural layouts of a pixel circuit before and after compression (that is, in the related art and in the embodiments of the present disclosure). With reference toFIG.7, it can be seen that the pixel circuit may include a driving structure and a connecting member B1used for connecting to an anode of a light-emitting element. The size of the connecting member B1may represent the size of a pixel circuit. Before compression, the sizes of the pixel circuit and the light-emitting element both have a width of 1 μm to 100 μm and a height of 2 μm to 200 μm. After compression, the size of the light-emitting element is kept unchanged and the height of the pixel circuit is kept unchanged, but the width of the pixel circuit is reduced by 1 μm to 20 μm. Thus, for every several columns of compressed pixel circuits, one or more additional columns of pixel circuits are obtained, and the whole screen adopts this design to realize full-screen compression. These additional columns may be chosen for connecting the second light-emitting elements40in the second display region A2to control the second light-emitting elements40to emit light. In some embodiments, the additional columns of pixel circuits proximal to the periphery of the second display region A2preferably serve as the second pixel circuits20for connecting the second light-emitting elements40. In this way, a normal display can be ensured without changing the resolution of the display panel. That is, the existing space of the display panel is fully used to achieve a normal display. It needs to be noted that, with reference toFIG.3, the width of the pixel circuit may be a length of an orthographic projection of a layout of the pixel circuit on the base substrate01in the second direction X2. The width of the first light-emitting element30is a length of an orthographic projection of an anode of the first light-emitting element30on the base substrate01in the second direction X2. In addition, combined withFIG.3andFIG.8, each of the first light-emitting elements described in the embodiments of the present disclosure belongs to one sub-pixel in one pixel, such as a red sub-pixel R, a green sub-pixel G1or G2, or a blue sub-pixel B.
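A back-of-the-envelope sketch of the compression budget, using only the approximate values quoted above (not a specification): if each compressed pixel circuit saves about 4 μm against a roughly 32 μm element pitch, one full extra circuit column is freed for every eight compressed columns, which matches the interval arrangement described later.

```python
# Illustrative compression budget; both values come from the examples
# in the text (approx. 32 um element pitch, approx. 4 um saved per column).
pitch_um = 32.0          # width of one first light-emitting element
delta_um = 4.0           # width saved by each compressed pixel circuit

columns_per_extra = pitch_um / delta_um   # -> 8.0
print(f"one extra circuit column every {columns_per_extra:.0f} compressed columns")
```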
When the size of the anode of the first light-emitting element is determined, a width D10of the pixel in a first direction X1 or the second direction X2 may be measured, generally taking one pixel as a period, and the width D01of each first light-emitting element may be obtained by dividing the total width D10of the pixel by a quantity of sub-pixels (for example, 4, as shown inFIG.8) included in the pixel. Similarly, since each first light-emitting element is connected to one pixel circuit, the width of each pixel circuit in the first direction X1 or the second direction X2 may still be measured with the pixel circuits connected to one pixel as a period, and the width D0 of each pixel circuit may be obtained by dividing the total width by the quantity of sub-pixels included in the pixel. Optionally, with reference to the pixel circuit shown inFIG.7, the pixel circuit described in the embodiments of the present disclosure may be a 7T1C structure, that is, may include 7 transistors and 1 capacitor.FIG.9shows a schematic structural diagram of a 7T1C pixel circuit, andFIG.10shows a structural layout of the 7T1C pixel circuit. Combined with the pixel circuit shown inFIG.9andFIG.10, it can be known that the 7T1C pixel circuit10includes a driving transistor T1, a data write transistor T2, a threshold compensation transistor T3, a first light-emitting control transistor T4, a second light-emitting control transistor T5, a first reset transistor T6, a second reset transistor T7, and a storage capacitor C1. The pixel circuit may be connected to a gate signal terminal Gate, a data signal terminal Data, reset signal terminals RST1and RST2, a light-emitting control signal terminal EM, a power supply terminal VDD, initial power supply terminals Vinit1and Vinit2, and light-emitting elements. The light-emitting elements may also be connected to a power supply terminal VSS. The pixel circuit may be used to drive the connected light-emitting elements to emit light in response to signals provided by the connected signal terminals. In addition, transistors may be classified, according to the characteristics of transistors, into N-type transistors and P-type transistors. The embodiments of the present disclosure are described by taking the transistors being P-type transistors as an example. Based on the description and teachings of the implementations of the present disclosure, a person skilled in the art can easily conceive, without creative efforts, of using N-type transistors for at least some of the transistors in the pixel circuit structure in the embodiments of the present disclosure, that is, an implementation using N-type transistors or an implementation combining N-type transistors and P-type transistors. Therefore, these implementations also fall within the protection scope of the embodiments of the present disclosure. To further reflect that there are a plurality of additional columns of pixel circuits after the pixel circuits are compressed,FIG.11is a schematic structural diagram of the first display region A1.FIG.12is a schematic diagram of a partial structure (including only pixel circuits) inFIG.4.FIG.13is a schematic diagram of a partial structure (including only light-emitting elements) inFIG.4. With reference toFIG.11toFIG.13, it can be seen that the width of the pixel circuit is less than the width of the light-emitting element.
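The measurement rule at the start of this passage is a simple division; the worked numbers below are invented and only echo the approximate dimensions quoted earlier.

```python
# Hypothetical worked example of the width-measurement rule above.
pixel_period_um = 128.0          # width measured over one full pixel period
subpixels_per_pixel = 4          # R, G1, B, G2 as in FIG. 8

D01 = pixel_period_um / subpixels_per_pixel   # element width -> 32.0 um

circuit_period_um = 112.0        # same period measured over compressed circuits
D0 = circuit_period_um / subpixels_per_pixel  # circuit width -> 28.0 um

assert D01 - D0 == 4.0           # the ~4 um per-column saving mentioned above
```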
In this way, the pixel circuits in the second column and the ninth column from left to right are not connected to any first light-emitting element30; these belong to the additional columns of pixel circuits and may serve as the second pixel circuits20for connecting the second light-emitting elements40in the second display region A2. In addition, each first light-emitting element30may include a total of four anodes R, G1, B, and G2, and a connecting member B2used for connecting the first pixel circuit10. The connecting member B1of the first pixel circuit10and the connecting member B2of the first light-emitting element30may be connected by a source-drain metal layer SD2. Alternatively, in the case that the first pixel circuit10and the first light-emitting element30are lapped together, there is no need to dispose the SD2for connection. It should be noted that each of the at least one second pixel circuit20and the at least one second light-emitting element40may be provided with a connecting member. When the at least one second pixel circuit20is connected to the at least one second light-emitting element40by the conductive trace L1, in practice the conductive trace L1is respectively connected to the connecting member of the at least one second pixel circuit20and the connecting member of the at least one second light-emitting element40. Therefore, to ensure that there is sufficient space for the conductive trace L1, an axis of the connecting member of each second pixel circuit20located in the same row may be flush with an axis of the connecting member of any second light-emitting element40. The axes may extend in the second direction X2. That is, in the same row in a row direction, the connecting member of the second pixel circuit20and the connecting member of the second light-emitting element40are located in the same straight line. Similarly, combined withFIG.11toFIG.13, in the same row, the connecting member B1of the first pixel circuit10and the connecting member B2of the first light-emitting element30may also be located in the same straight line, such that the traces are arranged neatly. Optionally,FIG.14is a schematic structural diagram of yet another display panel according to an embodiment of the present disclosure. As shown inFIG.14, the first display region A1may include a first sub-display region A11and a second sub-display region A12sequentially arranged in the first direction X1. The first sub-display region A11may include two symmetrical target sub-display regions A110. That is, the two target sub-display regions A110have the same layout. The second display region A2may include two third sub-display regions A21symmetrically arranged in the second direction X2. That is, the two third sub-display regions A21have the same layout. One target sub-display region A110, the second display region A2, and the other target sub-display region A110may be sequentially arranged in the second direction X2. Based on the display panel shown inFIG.14, the left half part and the right half part of the display panel have the same layout. Therefore, the following embodiments show only the left half structure of the display panel, that is, one target sub-display region A110located at the left half part and one adjacent third sub-display region A21. The right half part is similar, and details are not described again.
Moreover, the additional columns of pixel circuits, that is, the plurality of second pixel circuits20, described in the embodiments of the present disclosure may be dispersedly disposed in the first display region A1, and the disposition positions may be flexibly adjusted according to the requirement, as long as the second pixel circuits20can be effectively connected to the second light-emitting elements40and drive the second light-emitting elements40to reliably emit light. For example, in the embodiments of the present disclosure, the disposition positions of the second pixel circuits20are schematically described below by taking an example in which the plurality of second pixel circuits20are distributed in a column direction, a row direction, and a diagonal direction. In an optional implementation, the second pixel circuits20extend in the column direction.FIG.15is a schematic structural diagram of yet another display panel according to an embodiment of the present disclosure. Combined withFIG.14andFIG.15, it can be seen that the plurality of first pixel circuits10may include a plurality of columns of first pixel circuits10extending in the first direction X1, and the plurality of second pixel circuits20may include a plurality of columns of second pixel circuits20extending in the first direction X1. The plurality of columns of second pixel circuits20may be disposed at intervals among the plurality of columns of first pixel circuits10. For example, at an interval of a plurality of columns of adjacent first pixel circuits10, there is one column of second pixel circuits20. In other words, a plurality of columns of adjacent first pixel circuits10may be spaced between every two adjacent columns of second pixel circuits20. Optionally, the same quantity of columns of first pixel circuits10may be disposed between any two adjacent columns of second pixel circuits20, such that the arrangement uniformity is ensured. For example, 8 adjacent columns of first pixel circuits10are disposed between any two adjacent columns of second pixel circuits20. Alternatively, different quantities of columns of first pixel circuits10may be disposed between any two adjacent columns of second pixel circuits20. Exemplarily, combined with the display panel shown inFIG.16, starting from a left border line between the third sub-display region A21and the target sub-display region A110, the second column of pixel circuits, the 12th column of pixel circuits, and the 20th column of pixel circuits on the left may all be the second pixel circuits20. It needs to be noted that the additional columns of second pixel circuits20below the second display region A2may serve as dummy columns and are not connected to any light-emitting element. In another optional implementation, the second pixel circuits20do not extend in the column direction, andFIG.17is a schematic structural diagram of yet another display panel according to an embodiment of the present disclosure. As shown inFIG.17, the plurality of first pixel circuits10may include a plurality of rows of first pixel circuits10extending in the second direction X2, and the plurality of second pixel circuits20may include a plurality of rows of second pixel circuits20extending in the second direction X2. Optionally, the first direction X1 may be intersected with the second direction X2. For example, when the first direction X1 is not perpendicular to the second direction X2, the plurality of second pixel circuits20may be arranged in the diagonal direction.
When the first direction X1 is perpendicular to the second direction X2, the plurality of second pixel circuits20may be arranged in the row direction. The plurality of rows of second pixel circuits20are disposed at intervals among the plurality of rows of first pixel circuits10. For example, the plurality of second pixel circuits20shown inFIG.17extend in the row direction. That is, at an interval of a plurality of rows of adjacent first pixel circuits10, there is one row of second pixel circuits20. In other words, a plurality of rows of adjacent first pixel circuits10are disposed between every two adjacent rows of second pixel circuits20. The following embodiments are described by taking an example in which the plurality of second pixel circuits20are sequentially arranged in the column direction. It needs to be noted that the additional columns of pixel circuits, that is, the second pixel circuits20, may be connected to the second light-emitting elements40by the conductive trace L1, and the stack layers of the conductive trace L1may be flexibly adjusted according to the radius of the light hole. For example,FIG.18is a schematic structural diagram of yet another display panel according to an embodiment of the present disclosure. As shown inFIG.18, the display panel may include a first conductive trace L11(that is, ITO1), a second conductive trace L12(that is, ITO2), and a third conductive trace L13(that is, ITO3). Each third sub-display region A21may include k light-emitting element groups. Each light-emitting element group may include a plurality of columns of adjacent second light-emitting elements40, and a first light-emitting element group to a k-th light-emitting element group may be sequentially arranged in a direction going towards the other third sub-display region. Correspondingly, each target sub-display region A110includes k pixel circuit groups in one-to-one correspondence with the k light-emitting element groups Z0. Each of the k pixel circuit groups may include a plurality of columns of adjacent second pixel circuits20, and a first pixel circuit group to a k-th pixel circuit group may be sequentially arranged in a direction going away from the adjacent third sub-display region. k may be an integer greater than 0. The embodiments of the present disclosure are described by taking k being 4 as an example. Optionally, each of the first light-emitting element group Z01to the third light-emitting element group Z03may include 13 columns of second light-emitting elements40. The fourth light-emitting element group Z04may include 9 columns of second light-emitting elements40. Correspondingly, each of the first pixel circuit group Z11to the third pixel circuit group Z13may include 13 columns of second pixel circuits20. The fourth pixel circuit group Z14may include 9 columns of second pixel circuits20. That is, for the display panel shown inFIG.18, in the third sub-display region A21, the first column of second light-emitting elements40to the 13th column of second light-emitting elements40(that is, R1 to R13) belong to the first light-emitting element group Z01. The 14th column of second light-emitting elements40to the 26th column of second light-emitting elements40(that is, R14 to R26) belong to the second light-emitting element group Z02. The 27th column of second light-emitting elements40to the 39th column of second light-emitting elements40(that is, R27 to R39) belong to the third light-emitting element group Z03.
The 40th column of second light-emitting elements40to the 48th column of second light-emitting elements40(that is, R40 to R48) belong to the fourth light-emitting element group Z04. Correspondingly, in the target sub-display regions A110, the first column of second pixel circuits20to the 13th column of second pixel circuits20(that is, P1 to P13) belong to the first pixel circuit group Z11. The 14th column of second pixel circuits20to the 26th column of second pixel circuits20(that is, P14 to P26) belong to the second pixel circuit group Z12. The 27th column of second pixel circuits20to the 39th column of second pixel circuits20(that is, P27 to P39) belong to the third pixel circuit group Z13. The 40th column of second pixel circuits20to the 48th column of second pixel circuits20(that is, P40 to P48) belong to the fourth pixel circuit group Z14.FIG.18does not show the first pixel circuits10and the first light-emitting elements30. Optionally, in the embodiments of the present disclosure, the second light-emitting elements40in each light-emitting element group may be connected to the second pixel circuits20in the corresponding pixel circuit group by the first conductive traces L11, the second conductive traces L12, and/or the third conductive traces L13in one-to-one correspondence. For example, as shown inFIG.19, the second light-emitting elements40in the first light-emitting element group Z01may be connected to the second pixel circuits20in the first pixel circuit group Z11by the first conductive traces L11in one-to-one correspondence (inFIG.19, ITO1 represents the first conductive trace L11). As shown inFIG.20, the second light-emitting elements40in the second light-emitting element group Z02may be connected to the second pixel circuits20in the second pixel circuit group Z12by the second conductive traces L12in one-to-one correspondence (inFIG.20, ITO2 represents the second conductive trace L12). As shown inFIG.21, the second light-emitting elements40in the third light-emitting element group Z03may be connected to the second pixel circuits20in the third pixel circuit group Z13by the third conductive traces L13in one-to-one correspondence (inFIG.21, ITO3 represents the third conductive trace L13). As shown inFIG.22, the second light-emitting elements40in the fourth light-emitting element group Z04may be connected to the second pixel circuits20in the fourth pixel circuit group Z14by the first conductive traces L11(that is, ITO1), the second conductive traces L12(that is, ITO2), and the third conductive traces L13(that is, ITO3) in one-to-one correspondence. For example, referring toFIG.22, the fourth light-emitting element group Z04may include two first sub-light-emitting element groups Z041, two second sub-light-emitting element groups Z042, and two third sub-light-emitting element groups Z043symmetrically arranged along an axis xx of the third sub-display region A21. Each of the sub-light-emitting element groups may include a plurality of rows of second light-emitting elements40adjacent to each other, and the number of rows of second light-emitting elements40included in each sub-light-emitting element group may be the same or different. In addition, the first sub-light-emitting element group Z041, the second sub-light-emitting element group Z042, and the third sub-light-emitting element group Z043disposed on the same side may be sequentially arranged in a direction going away from the axis xx, and the axis xx extends in the second direction X2.
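Stepping back from the sub-group detail for a moment: the column-to-group split enumerated above (R1 to R48, groups of 13/13/13/9) can be written as a small look-up. This is an illustrative aid only, with 1-based column numbering, not part of the disclosure.

```python
# Hypothetical look-up for the column grouping described above.
GROUP_BOUNDS = [(1, 13, "Z01"), (14, 26, "Z02"),
                (27, 39, "Z03"), (40, 48, "Z04")]

def group_of_column(column):
    """Return the light-emitting element group a column belongs to."""
    for lo, hi, name in GROUP_BOUNDS:
        if lo <= column <= hi:
            return name
    raise ValueError(f"column {column} out of range 1..48")
```

For example, group_of_column(27) returns "Z03", matching R27 to R39 above; the same bounds apply to the pixel circuit groups Z11 to Z14.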
The fourth pixel circuit group Z14may include two first sub-pixel circuit groups Z141in one-to-one correspondence with the two first sub-light-emitting element groups Z041, two second sub-pixel circuit groups Z142in one-to-one correspondence with the two second sub-light-emitting element groups Z042, and two third sub-pixel circuit groups Z143in one-to-one correspondence with the two third sub-light-emitting element groups Z043. The arrangement of the first sub-pixel circuit group Z141, the second sub-pixel circuit group Z142, and the third sub-pixel circuit group Z143located on the same side is the same as that of the sub-light-emitting element groups. The second light-emitting elements40in each of the two first sub-light-emitting element groups Z041may be connected to the second pixel circuits20in a corresponding first sub-pixel circuit group Z141by the first conductive traces L11(that is, ITO1) in one-to-one correspondence. The second light-emitting elements40in each of the second sub-light-emitting element groups Z042may be connected to the second pixel circuits20in a corresponding second sub-pixel circuit group Z142by the second conductive traces L12(that is, ITO2) in one-to-one correspondence. The second light-emitting elements40in each of the third sub-light-emitting element groups Z043may be connected to the second pixel circuits20in a corresponding third sub-pixel circuit group Z143by the third conductive traces L13(that is, ITO3) in one-to-one correspondence. FIG.23is a schematic structural diagram of a conductive trace according to an embodiment of the present disclosure.FIG.24shows a structural layout corresponding toFIG.19toFIG.21. Combined withFIG.19toFIG.21, it can be seen that each of the first conductive trace L11connected to the second light-emitting element40in the first light-emitting element group Z01, the second conductive trace L12connected to the second light-emitting element40in the second light-emitting element group Z02, and the third conductive trace L13connected to the second light-emitting element40in the third light-emitting element group Z03may include a first conductive trace segment La, a second conductive trace segment Lb, and a third conductive trace segment Lc. One end of the first conductive trace segment La may be connected to a corresponding second light-emitting element40, and the other end of the first conductive trace segment La may be connected to one end of the second conductive trace segment Lb. The other end of the second conductive trace segment Lb may be connected to one end of the third conductive trace segment Lc. The other end of the third conductive trace segment Lc may be connected to a corresponding second pixel circuit20. In addition, the first conductive trace segment La and the third conductive trace segment Lc may extend in the first direction X1, the second conductive trace segment Lb may extend in the second direction X2, and an orthographic projection of the second conductive trace segment Lb on the base substrate01is at least partially overlapped with an orthographic projection of the second light-emitting element40on the base substrate01(as can be seen with reference toFIG.24described below). That is, the first conductive trace L11, the second conductive trace L12, and the third conductive trace L13may all be led out from the connected second light-emitting element40and transversely extend in the row direction to the position of the second pixel circuit20to be connected to the second pixel circuit20.
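A hedged geometric sketch of this three-segment routing, with X1 as the column direction and X2 as the row direction; the coordinates and helper name are invented for illustration.

```python
# Corner points of an La-Lb-Lc trace; positions are (x1, x2) pairs.
def three_segment_route(led_pos, circuit_pos, lb_x1):
    """led_pos / circuit_pos: the two endpoints of the trace.
    lb_x1: the X1 coordinate at which the transverse segment Lb runs
    (per the text, Lb may pass over the light-emitting element rows)."""
    (ex1, ex2), (cx1, cx2) = led_pos, circuit_pos
    corner_a = (lb_x1, ex2)   # La ends here: ran in X1 from the element
    corner_b = (lb_x1, cx2)   # Lb ends here: ran in X2 across the panel
    return [(ex1, ex2), corner_a, corner_b, (cx1, cx2)]  # Lc closes the run
```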
Optionally, to prevent signals from interfering with each other, the second conductive trace segment Lb in the first conductive trace L11may be at least partially overlapped with the second conductive trace segment Lb in the third conductive trace L13. The second conductive trace segment Lb in the first conductive trace L11may not overlap with the second conductive trace segment Lb in the second conductive trace L12, and the second conductive trace segment Lb in the third conductive trace L13may also not overlap with the second conductive trace segment Lb in the second conductive trace L12. The overlapping parts may be transferred through vias. It needs to be noted thatFIG.24only schematically shows a structural layout of the first conductive traces L11connected to the second light-emitting elements40in the first light-emitting element group Z01, that is, the ITO1 traces, in the display panel. For the structural layouts of the second conductive traces L12(that is, the ITO2 traces) connected to the second light-emitting elements40in the second light-emitting element group Z02and the third conductive traces L13(that is, the ITO3 traces) connected to the second light-emitting elements40in the third light-emitting element group Z03in the display panel, reference may directly be made to the schematic diagram of the display panel inFIG.24, and details are not described herein again. Optionally,FIG.25is a schematic structural diagram of a conductive trace according to an embodiment of the present disclosure. As shown inFIG.25, each of the first conductive trace L11connected to the second light-emitting element40in each first sub-light-emitting element group Z041, the second conductive trace L12connected to the second light-emitting element40in each second sub-light-emitting element group Z042, and the third conductive trace L13connected to the second light-emitting element40in each third sub-light-emitting element group Z043may include a fourth conductive trace segment Ld, a fifth conductive trace segment Le, a sixth conductive trace segment Lf, and a seventh conductive trace segment Lg. One end of the fourth conductive trace segment Ld may be connected to a corresponding second light-emitting element40, and the other end of the fourth conductive trace segment Ld may be connected to one end of the fifth conductive trace segment Le. The other end of the fifth conductive trace segment Le may be connected to one end of the sixth conductive trace segment Lf. The other end of the sixth conductive trace segment Lf may be connected to one end of the seventh conductive trace segment Lg. The other end of the seventh conductive trace segment Lg may be connected to a corresponding second pixel circuit20. In addition, the fifth conductive trace segment Le and the seventh conductive trace segment Lg may extend in the first direction X1, and the sixth conductive trace segment Lf may extend in the second direction X2. The fourth conductive trace segment Ld may be located between a row in which the connected second light-emitting element40is located and an adjacent row. FIG.26is a schematic structural diagram of yet another display panel according to an embodiment of the present disclosure,FIG.27is a schematic structural diagram of yet another display panel according to an embodiment of the present disclosure, andFIG.28is a simplified schematic diagram of the display panel shown inFIG.27.
Combined withFIG.26toFIG.28, it can be seen that the fifth conductive trace segment Le included in the first conductive trace L11(that is, ITO1 in the figure) may be located in a region in which the second light-emitting element group Z02to the fourth light-emitting element group Z04are located. The fifth conductive trace segment Le included in the second conductive trace L12(that is, ITO2 in the figure) may be located in a region in which the third light-emitting element group Z03to the fourth light-emitting element group Z04are located. The fifth conductive trace segment Le included in the third conductive trace L13(that is, ITO3 in the figure) may be located in a region in which the fourth light-emitting element group Z04is located. The sixth conductive trace segment Lf on a side, distal from the second sub-display region A12, of the axis may be located on a side, distal from the second sub-display region A12, of the second display region A2, and the sixth conductive trace segment Lf on a side, proximal to the second sub-display region A12, of the axis is located on a side, proximal to the second sub-display region A12, of the second display region A2. That is, the fifth conductive trace segment Le included in the first conductive trace L11may be led out from the connected second light-emitting element40, extend in the column direction from a region in which the R14 to R48 columns of second light-emitting elements40are located (that is, a region in which Z02to Z04are located) to a side, proximal to a non-display region or proximal to the second sub-display region A12, of the third sub-display region A21, and then transversely extend in the row direction to a region in which a corresponding second pixel circuit20is located, to be connected to the second pixel circuit20. The fifth conductive trace segment Le included in the second conductive trace L12may be led out from the connected second light-emitting element40, extend in the column direction from a region in which the R27 to R48 columns of second light-emitting elements40are located (that is, a region in which Z03and Z04are located) to a side, proximal to a non-display region or proximal to the second sub-display region A12, of the third sub-display region A21, and then transversely extend in the row direction to a region in which a corresponding second pixel circuit20is located, to be connected to the second pixel circuit20. The fifth conductive trace segment Le included in the third conductive trace L13may be led out from the connected second light-emitting element40, extend in the column direction from a region in which the R40 to R48 columns of second light-emitting elements40are located (that is, a region in which Z04is located) to a side, proximal to a non-display region or proximal to the second sub-display region A12, of the third sub-display region A21, and then transversely extend in the row direction to a region in which a corresponding second pixel circuit20is located, to be connected to the second pixel circuit20. In addition, the sixth conductive trace segments Lf included in the conductive traces located on the same side and extending in the row direction may partially overlap or may not overlap. With reference toFIG.26andFIG.27, it can be seen that the display panel may further include at least one column of dummy second pixel circuits20. The at least one column of dummy second pixel circuits20may be disposed in the target sub-display regions A110.
The column of dummy second pixel circuits20may also be referred to as a transition column. The column of dummy second pixel circuits20is not connected to any light-emitting element. By arranging the transition column, the following problem can be avoided: because a distance between the first column of second light-emitting elements40and the second pixel circuits20connected to it is less than a distance between the last column of second light-emitting elements40and the second pixel circuits20connected to it, the turn-on time difference between the first column and the last column of second light-emitting elements40would otherwise be relatively large. Avoiding this further ensures a better display effect. In addition, by comparingFIG.26andFIG.27, it can be seen that in the first light-emitting element group Z01, the first conductive traces L11(that is, ITO1 in the figure) connected to the second light-emitting elements40located in adjacent rows may be disposed on the same side facing upward or may be symmetrically arranged on different sides. In the second light-emitting element group Z02, the second conductive traces L12(that is, ITO2 in the figure) connected to the second light-emitting elements40located in adjacent rows may be located on the same side facing upward or may be symmetrically arranged on different sides. In the third light-emitting element group Z03, the third conductive traces L13(that is, ITO3 in the figure) connected to the second light-emitting elements40located in adjacent rows may be located on the same side facing upward or may be symmetrically arranged on different sides. Alternatively, the foregoing signal lines may also all be located on the same side facing downward, and details are not described again with reference to the accompanying drawings. FIG.29is a sectional view of a display panel. ITO1 represents the first conductive trace L11, ITO2 represents the second conductive trace L12, and ITO3 represents the third conductive trace L13. Anode refers to an anode of a light-emitting element, and PLN refers to a planarization layer. The display panel shown inFIG.29includes a total of five planarization layers PLN1to PLN5. SD1refers to a first source-drain metal layer, and SD2refers to a second source-drain metal layer. Optionally, the display panel may further include a plurality of metal layers such as a first gate metal layer GATE1, a second gate metal layer GATE2, the first source-drain metal layer SD1, and the second source-drain metal layer SD2. A data trace DATA connected to each second pixel circuit20and any metal layer may be disposed in the same layer. For example, referring to the display panel shown inFIG.30, in the direction going away from the adjacent third sub-display region A21, in the first column to an i-th column of second pixel circuits20in each target sub-display region A110, a data trace DATA connected to the second pixel circuits20disposed in an odd-numbered column and the first gate metal layer GATE1may be disposed in the same layer. A data trace DATA connected to the second pixel circuits20disposed in an even-numbered column and the second gate metal layer GATE2may be disposed in the same layer. A data trace DATA connected to the i-th column to an n-th column of second pixel circuits20and the first source-drain metal layer SD1may be disposed in the same layer.
i may be an integer greater than 1 and less than n, and n may be equal to a total quantity of columns in each target sub-display region A110. That is, data traces DATA connected to the first i columns of second pixel circuits20may be alternately disposed in the same layer as GATE1and GATE2. All data traces DATA connected to the ith column to the nth column of second pixel circuits20and SD1may be disposed in the same layer. Optionally, assuming that the third sub-display region A21includes 48 columns of second light-emitting elements40, i may be 24. That is, every 24 columns of second light-emitting elements40may be one group. Correspondingly, in the first column to the 24th column, a data trace DATA connected to an odd-numbered column and the first gate metal layer GATE1may be disposed in the same layer. From the first column to the 24th column, a data trace DATA connected to an even-numbered column and the second gate metal layer GATE2may be disposed in the same layer, and data traces DATA connected to the 24th column to the 48th column and the first source-drain metal layer SD1are disposed in the same layer. Optionally,FIG.31andFIG.32show structural layouts in which, at different positions, a data trace DATA connected to an odd-numbered column of second pixel circuits20and the first gate metal layer GATE1are disposed in the same layer.FIG.33andFIG.34show structural layouts in which, at different positions, a data trace DATA connected to an even-numbered column of second pixel circuits20and the second gate metal layer GATE2are disposed in the same layer.FIG.35andFIG.36show structural layouts in which, at different positions, a data trace DATA connected to the ith column to the nth column of second pixel circuits20and SD1are disposed in the same layer.FIG.37andFIG.38show an overall layout of a display panel in which, at different positions, data traces DATA are disposed in the same layer as the first gate metal layer GATE1, in the same layer as the second gate metal layer GATE2, and in the same layer as SD1.FIG.39andFIG.40show a structural layout including a conductive trace L1and a data trace DATA. FIG.41is a schematic structural diagram of a data trace. Combined with the accompanying drawings related to the foregoing data traces, it can be seen that a data trace DATA connected to each second pixel circuit20may include a first data trace segment D11, a second data trace segment D12, and a third data trace segment D13. One end of the first data trace segment D11may be connected to a corresponding metal layer, the other end of the first data trace segment D11may be connected to one end of the second data trace segment D12, the other end of the second data trace segment D12may be connected to one end of the third data trace segment D13, and the other end of the third data trace segment D13may be connected to the second pixel circuit20. In addition, the second data trace segment D12may extend in the first direction X1, and the second data trace segment D12included in the data trace DATA in the same layer as the first gate metal layer GATE1, the second data trace segment D12included in the data trace DATA in the same layer as the second gate metal layer GATE2, and the second data trace segment D12included in the data trace DATA in the same layer as the first source-drain metal layer SD1may not overlap with each other.
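The three-segment chaining just described can be made concrete with a short sketch. The following Python code is illustrative only: the class and function names, the tuple representation of endpoints, and the assumption that the first direction X1 corresponds to the x-coordinate are all invented for this example and are not part of the embodiment.

from dataclasses import dataclass

Point = tuple  # (x, y) position in panel coordinates; an assumed representation

@dataclass
class Segment:
    start: Point
    end: Point

def is_valid_data_trace(d11: Segment, d12: Segment, d13: Segment) -> bool:
    # D11 must end where D12 starts, and D12 must end where D13 starts,
    # mirroring the end-to-end connections described for FIG. 41.
    chained = d11.end == d12.start and d12.end == d13.start
    # D12 extends in the first direction X1 (taken here as constant y).
    along_x1 = d12.start[1] == d12.end[1]
    return chained and along_x1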
As described above, each data trace DATA may be transferred and led out from a metal layer at a borderline between the third sub-display region A21and the second sub-display region A12, extend in the column direction from a display region to a non-display region in the third sub-display region A21, then extend in the row direction to a corresponding column of second pixel circuits20, and be connected to the second pixel circuit20. In addition, combined with the display panel shown inFIG.42, in the same column of second pixel circuits20, the data trace DATA connected to the second pixel circuits20located in the first sub-display region A11may be different from the data trace DATA connected to the second pixel circuits20located in the second sub-display region A12. For example, a data trace DATA connected to one column of second pixel circuits20is disconnected at the borderline between the first sub-display region A11and the second sub-display region A12. In this way, signals provided by data traces can be prevented from interfering with each other, to ensure effective and reliable driving of the second light-emitting elements40. Moreover, with reference to the display panel shown inFIG.43, the first gate metal layer GATE1, the second gate metal layer GATE2, and the first source-drain metal layer SD1may be covered by the second source-drain metal layer SD2. Thus, signal shielding at a connecting point between a driving transistor and a light-emitting element can be achieved, thereby reducing signal crosstalk. The influence of parasitic capacitance on the conductive trace L1can be shielded to a certain extent, and a good display effect is ensured. In summary, the embodiments of the present disclosure provide a display panel. The display panel includes a base substrate including a first display region and a second display region. Since pixel circuits for driving light-emitting elements in the second display region are disposed only in the first display region and not in the second display region, good light transmittance of the second display region is ensured. Correspondingly, the display panel described in the embodiments of the present disclosure has a good display effect. FIG.44is a schematic structural diagram of a display device according to an embodiment of the present disclosure. As shown inFIG.44, the display device may include an integrated circuit100and the display panel200shown in any foregoing drawing. The integrated circuit100may be connected to a first pixel circuit and a second pixel circuit in the display panel200and may be configured to drive the first pixel circuit and the second pixel circuit. For example, the integrated circuit100may be connected to signal terminals connected to pixel circuits and used for providing signals to the signal terminals. It needs to be noted thatFIG.44only schematically shows the position of the integrated circuit100. The integrated circuit100may alternatively be located on the right side of the display panel200, or may be located on both the left side and the right side of the display panel200. Alternatively, the integrated circuit100may be located on an upper side and/or a lower side of the display panel200. Optionally, the display device may be any product or part provided with a display function, such as an organic light-emitting diode (OLED) display device, an active-matrix OLED (AMOLED) display device, a mobile phone, a tablet computer, a flexible display device, a television, or a display.
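As an aside, the column-to-metal-layer assignment described earlier with reference toFIG.30can also be restated compactly. The sketch below is a hypothetical helper, not part of the disclosure: the function name and string labels are assumptions, and the boundary column i, which the foregoing description places both in the even-numbered gate-layer group and in the SD1group, is resolved here by assigning columns from the ith onward to SD1.

def data_trace_layer(col, i=24, n=48):
    # Columns are counted from 1 in the direction going away from the
    # adjacent third sub-display region A21, as in FIG. 30.
    if not 1 <= col <= n:
        raise ValueError("column index out of range")
    if col < i:
        # The first columns alternate between the two gate metal layers.
        return "GATE1" if col % 2 == 1 else "GATE2"
    # The ith to nth columns share the first source-drain metal layer.
    return "SD1"

# With the example values i = 24 and n = 48: column 1 -> GATE1,
# column 2 -> GATE2, ..., and columns 24 to 48 -> SD1.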
It may be clearly understood by a person skilled in the art that, for convenience and brevity of description, for a detailed working process of the foregoing display panel and display device, reference may be made to a corresponding process in the foregoing method embodiments, and details are not described herein again. Described above are merely exemplary embodiments of the present disclosure, which are not intended to limit the present disclosure. Within the spirit and principles of the disclosure, any modifications, equivalent substitutions, improvements, and the like shall fall within the protection scope of the present disclosure. | 56,349 |
11862082 | DESCRIPTION OF EMBODIMENT The inventor of the present application has found the following problem in a conventional image display device. In a conventional transparent display, a display panel that is switchable between a state in which an image is displayed and a transmissive state includes, for example, an organic electro-luminescent (EL) panel including a plurality of organic EL elements disposed on a glass substrate, and a light control sheet which switches between light transmittance and light non-transmittance according to whether or not voltage is applied to polymer-dispersed liquid crystals. In such a display panel, for example, by not displaying an image on an organic EL panel and turning on the light control sheet (turning the light control sheet into a transmissive state), it is possible to allow a user in front of the display panel to view one or more objects behind the display panel. However, the organic EL panel includes a plurality of organic EL elements arranged in a matrix, and the light control sheet includes dispersed liquid crystals. In other words, although the base material of the display panel is a transparent material such as glass, minute light shielding elements, such as organic EL elements and liquid crystals, are dispersed in the display panel. Accordingly, for the user who views the display panel from the front side, the back side of the display panel may look dark. In view of the above, the inventor of the present application has studied a configuration in which an illuminator is disposed behind the display panel. With such a configuration, illumination light emitted from the illuminator can be emitted to the objects behind the display panel. This allows the user in front of the display panel to clearly view the objects placed behind the display panel. However, the inventor of the present application has found the following problems caused by the illumination light when the illuminator is disposed behind the display panel. Specifically, when the illuminator is disposed behind the display panel and the brightness around the image display device changes, the result can be insufficient transparency of the display panel or reflection of the illumination light on the display panel (halation). The present disclosure has been conceived based on such findings. As a result of diligent studies, the inventor of the present application has arrived at an idea about a configuration and a control of an image display device capable of effectively using a display panel that operates in a transmissive mode. Hereinafter, an embodiment will be described with reference to the drawings as necessary. Note that unnecessarily detailed descriptions may be omitted. For example, detailed descriptions of already known matters and overlapping descriptions of substantially the same configuration may be omitted. This is to avoid the following description becoming unnecessarily redundant and to facilitate understanding by the person skilled in the art. The inventor of the present application provides the accompanying drawings and the following description so that the person skilled in the art fully understands the present disclosure, and does not intend to limit the subject matter of the claims by this.
In the following embodiment, the vertical (top-bottom) direction is represented by a Z-axis, the front-back direction is represented by a Y-axis, and the horizontal (left-right) direction is represented by an X-axis for the sake of description, but these do not limit the orientation of the image display device according to the present disclosure at the time of manufacture or usage. In the following description, for example, a positive X-axis side indicates the direction of the arrow of the X-axis and a negative X-axis side indicates the direction opposite to the positive X-axis. The same applies to the Y-axis and the Z-axis. Embodiment Hereinafter, an embodiment of the present disclosure will be described with reference toFIG.1toFIG.9. First, with reference toFIG.1toFIG.5, an outline of a configuration of an image display device according to the present embodiment will be described. [1-1. Outline of Configuration of Image Display Device] FIG.1is an external perspective view of a state of image display device10according to the present embodiment when operating in an image display mode (a first display mode).FIG.2is an external perspective view of a state of image display device10according to the embodiment when operating in a transmissive mode.FIG.3is an exploded perspective view of an outline of a configuration of image display device10according to the embodiment.FIG.4is a cross-sectional view of an outline of a configuration of display panel110and illuminator201according to the embodiment. Specifically,FIG.4illustrates a portion of a cross-section taken along line IV-IV inFIG.2.FIG.5is a side view of image display device10according to the embodiment. InFIG.5, illustration of right side wall32is omitted, and the side surfaces of display panel110and illuminator201are simply illustrated. As illustrated inFIG.1toFIG.5, image display device10according to the present embodiment includes: display panel110; shelf board50; and illuminator201. In the present embodiment, display panel110is surrounded from the outer periphery by frame body30which includes shelf board50, and is supported by frame body30. In the present embodiment, although image display device10may include members other than the members described below, such as a protective panel for protecting the front face of display panel110, the descriptions and illustrations thereof are omitted. Display panel110is a display device switchable between an image display mode in which an image is displayed on display panel110and a transmissive mode in which display panel110is in a transmissive state where objects behind display panel110are visible in the front view of display panel110. Specifically, as illustrated inFIG.4, display panel110includes organic EL panel111and light control panel112disposed behind organic EL panel111. Organic EL panel111is an example of an image display panel. Note that the “image” displayed on display panel110may be a still image or a moving image, or may be video content including both the still image and the moving image. In the present embodiment, organic EL elements, each of which includes an EL layer and transparent electrodes sandwiching the EL layer, are disposed in a matrix in organic EL panel111. The region of organic EL panel111where an image (including a background image) is not displayed has light transmitting properties to the extent generally referred to as transparent.
Light control panel112includes light control sheet113, first glass plate114adisposed in front of light control sheet113, and second glass plate114bdisposed behind light control sheet113. Light control sheet113is a member switchable between a light transmissive state and a light non-transmissive state depending on whether or not a predetermined voltage is applied to light control sheet113. Light control sheet113includes, for example, a liquid crystal layer including liquid crystal molecules having an orientational state changed by presence or absence of an application of voltage, and resin sheets sandwiching the liquid crystal layer. Display panel110may include, for example, an optical member, such as an anti-reflection film, in addition to the above-described structural elements. Display panel110, configured such that organic EL panel111and light control panel112are layered, is capable of displaying an image on organic EL panel111, for example, as illustrated inFIG.1. At this time, by not applying a predetermined voltage to light control sheet113(turning off light control sheet113), light control sheet113shields the back side of organic EL panel111from light, so that the user is able to view a clear image. In the present embodiment, the operation mode for displaying an image on organic EL panel111is referred to as an “image display mode”. More specifically, the case where an image is displayed on organic EL panel111and light control sheet113is turned off is referred to as a “first display mode”, and the case where an image is displayed on organic EL panel111and a predetermined voltage is applied to light control sheet113(light control sheet113is turned on) is referred to as a “second display mode”. The second display mode will be described later with reference toFIG.9. Moreover, as illustrated inFIG.2, for example, display panel110is turned into a transmissive state where objects500behind display panel110are visible by not displaying an image on organic EL panel111and turning on light control sheet113. In the present embodiment, this operation mode is referred to as a transmissive mode. Space portion40is provided behind display panel110. In the present embodiment, space portion40is a space portion provided behind display panel110and within the frame body including a plurality of walls. The walls and the frame body according to the present embodiment will be described below. Shelf board50is disposed behind display panel110so as to project rearward. In the present embodiment, shelf board50forms part of frame body30, which surrounds the periphery of display panel110and holds display panel110. Frame body30includes shelf board50, top wall31, right side wall32, and left side wall33. Top wall31is disposed along the top side of display panel110. Right side wall32is disposed along the right side of display panel110in the front view. Left side wall33is disposed along the left side of display panel110in the front view. Right side wall32is connected to the right end portion of shelf board50in the front view, and left side wall33is connected to the left end portion of shelf board50in the front view. Top wall31is connected to the upper end portions of right side wall32and left side wall33. Top wall31and shelf board50are connected to right side wall32and left side wall33by, for example, screws. As illustrated inFIG.2andFIG.3, left side wall33has holding groove33afor holding the left edge of display panel110in the front view.
In a similar manner, right side wall32has a holding groove (not illustrated) for holding the right edge of display panel110in the front view. As illustrated inFIG.3andFIG.4, top wall31has holding groove31afor holding the upper edge of display panel110. Each of shelf board50, top wall31, right side wall32, and left side wall33includes, for example, a wood pattern sheet pasted onto a metal base material, such as aluminum or an aluminum alloy. In such a case, as illustrated inFIG.2, when image display device10is operating in the transmissive mode, image display device10is recognized as furniture or display furniture for displaying objects500with the front face covered with glass. The material of each member of frame body30is not limited to metal, but a non-metallic material such as wood or resin may be used as the material of shelf board50or the like. InFIG.4, top wall31is illustrated as a solid plate-shaped member, but top wall31may be a hollow plate-shaped member. One or more objects500(photo, doll, vase, toy, model, picture and the like) can be placed on placement surface52forming the upper face of shelf board50. The user is able to view objects500placed on shelf board50through display panel110that is operating in the transmissive mode. The transmittance of display panel110is, for example, approximately 40% to 50%. Hence, when image display device10is placed in a relatively dark environment, the user may fail to clearly view objects500. In view of the above, image display device10includes illuminator201, which illuminates objects500placed in space portion40by emitting illumination light to space portion40. In image display device10configured as described above, the operations of display panel110and illuminator201are controlled by controller80held in shelf board50. In the present embodiment, as illustrated inFIG.3, controller80is housed inside shelf board50. Controller80is disposed along bottom surface51aof shelf body51so that controller80falls within the thickness (width in the Z-axis direction) of shelf board50. Illuminance sensor90is further disposed in shelf board50as illustrated inFIG.1andFIG.2. Illuminance sensor90is fixed to shelf board50such that a light receiver is located on the front face of shelf board50. Illuminance sensor90is an example of an illuminance detector, and detects the ambient illuminance of image display device10, mainly the illuminance on the front side of image display device10. The result of detection by illuminance sensor90is used for the control (illumination control) of illuminator201performed by controller80. Specific examples of illumination control performed in accordance with the result of detection by illuminance sensor90will be described later with reference toFIG.6toFIG.9. In addition to controller80, devices such as a light receiver that receives an infrared signal transmitted from a remote controller, a speaker unit, a television tuner, input and output terminals for audio and image signals, and a wireless communication module may be disposed in shelf body51. Illuminator201is disposed behind display panel110as illustrated inFIG.3toFIG.5. Specifically, as illustrated inFIG.4, top wall31includes illumination groove31bfor attaching illuminator201. Illuminator201includes light source unit202which emits light and heat sink205for dissipating heat generated by light source unit202. Heat sink205also functions as an attachment member for attaching light source unit202to illumination groove31b.
Heat sink205is a metal member made of, for example, aluminum or an aluminum alloy. Light source unit202includes substrate204long in the X-axis direction and a plurality of LED elements203mounted on substrate204. LED elements203are arranged side by side in the X-axis direction. Illuminator201further includes micro-louver225disposed on the light-emitting side of light source unit202. Micro-louver225is an optical member which limits the distribution angle of light emitted from light source unit202. Micro-louver225is a member disposed along light source unit202and long in the X-axis direction, and has a configuration in which light shields and light transmitting bodies extending in the X-axis direction are alternately arranged in the short direction (Y-axis direction) of micro-louver225. In the present embodiment, micro-louver225has a role of narrowing the light distribution angle of the illumination light emitted from light source unit202, which prevents the illumination light from directly entering display panel110and from leaking to the region behind shelf board50(toward the positive Y-axis side). Illuminator201configured as described above is switched on and off according to the operation mode of display panel110. Specifically, as illustrated inFIG.1, when display panel110operates in the image display mode, controller80turns off illuminator201. Moreover, as illustrated inFIG.2, when display panel110operates in the transmissive mode, controller80turns on illuminator201. As a result, as illustrated inFIG.2, objects500located below illuminator201are irradiated with the illumination light, and the user is able to view objects500more clearly through display panel110. Illuminator201is embedded in illumination groove31b, and the light distribution angle of illuminator201is limited by micro-louver225. As a result, when display panel110is in a transmissive state, the light emitted from illuminator201is less likely to directly enter the eyes of the user in front of image display device10that is placed on the floor, for example. However, when the surrounding region of image display device10is relatively dark, for example, at night, a state where the illumination light from illuminator201is brightly visible on a portion of display panel110that is operating in the transmissive mode (so-called reflection of the illumination light) may be observed. In order to reduce such reflection of the illumination light, the brightness of the illumination light may be decreased. However, in such a case, when the surrounding region of image display device10is relatively bright, another problem occurs in that the transparency of display panel110is diminished due to insufficient brightness of the illumination light. In view of the above, image display device10according to the present embodiment changes the brightness of the illumination light emitted from illuminator201, according to the ambient brightness. Accordingly, when display panel110is in a state where light is transmissive through display panel110, a natural transparency can be given to display panel110regardless of the ambient brightness. Hereinafter, a configuration related to illumination control included in image display device10and a specific example of illumination control performed by image display device10will be described with reference toFIG.6toFIG.9. [1-2. Illumination Control] FIG.6is a block diagram illustrating a functional configuration of image display device10according to the embodiment.
Specifically,FIG.6illustrates a functional configuration related to illumination control included in image display device10.FIG.7is a flowchart of a basic operation of the illumination control according to the embodiment.FIG.8is a simple diagram illustrating a change in brightness of the illumination light emitted from illuminator201of image display device10according to the embodiment.FIG.9is an external perspective view of a state of image display device10according to the embodiment when operating in an image display mode (a second display mode). As illustrated inFIG.6, image display device10according to the present embodiment includes display panel110which includes organic EL panel111and light control panel112, illuminator201, controller80, and illuminance sensor90. In the present embodiment, controller80has a function of controlling the operation of display panel110in addition to controlling the operation of illuminator201as described above. Controller80includes, for example, a computer which includes a central processing unit (CPU), a storage device such as a memory, an interface for inputting and outputting information, and the like. Controller80controls illuminator201and/or display panel110by the CPU executing a predetermined program stored in the storage device, for example, based on an instruction from the user. Controller80executes the control illustrated inFIG.7, for example. Specifically, for example, controller80switches the operation mode of display panel110from the image display mode (seeFIG.1) to the transmissive mode (seeFIG.2) based on an instruction from the user (S10). At the time of switching of the operation mode, controller80obtains the detection result indicating the ambient brightness detected by illuminance sensor90(S20). Controller80further controls illuminator201using the obtained detection result (S30). InFIG.7, for the sake of simplicity, switching of the operation mode (S10), obtainment of the detection result (S20), and illumination control (S30) are described in this order, but the flow of control performed by controller80is not limited to such an order. Controller80is capable of monitoring the ambient brightness of image display device10by obtaining the result of the detection by illuminance sensor90at predetermined intervals, for example. Moreover, controller80is capable of switching illuminator201from off to on almost at the same time as switching from the image display mode to the transmissive mode such that the brightness of the light from illuminator201is in accordance with the latest detection result at that time. Specifically, as illustrated inFIG.8, the light output of illuminator201is controlled such that the ambient brightness and the illuminance of the illumination light have a positive correlation. Controller80controls the light output of LED elements203included in illuminator201according to, for example, a pulse width modulation (PWM) signal. The method of controlling the light output is not particularly limited, and the light output of illuminator201in image display device10may be controlled according to a digital signal. By controlling the light output of illuminator201in such a manner, brightness C1of the illumination light when the ambient brightness is L1is less than brightness C2of the illumination light when the ambient brightness is L2(L2>L1). L1is an example of a first illuminance, L2is an example of a second illuminance, and the unit for each of L1and L2is, for example, lux. 
The unit for each of C1and C2is, for example, the dimming rate (%) used for controlling illuminator201. The dimming rate is a variable for adjusting the brightness of the illumination: the brightness increases as the numerical value increases (the maximum value is 100%). The dimming rate can also be expressed as, for example, “dimming level” or “dimming ratio”. The unit of the ambient brightness and the brightness of the illumination light is an example, and various types of units can be used as long as the unit indicates the level of brightness. For example, the brightness of the illumination light may be expressed in the illuminance (lux) of the illumination light measured at the position of shelf board50. In the present embodiment, controller80decreases the brightness of the illumination light with a decrease in ambient brightness of image display device10, that is, with a decrease in illuminance indicated by the result of detection by illuminance sensor90. In other words, controller80increases the brightness of the illumination light with an increase in illuminance indicated by the result of detection by illuminance sensor90. In other words, the brightness of the illumination light emitted from illuminator201decreases monotonically (increases monotonically) with a decrease (an increase) in ambient brightness. InFIG.8, the ambient brightness and the brightness of the illumination light have a linear relationship, but the ambient brightness and the brightness of the illumination light may have a non-linear relationship. Moreover, image display device10according to the present embodiment is also capable of causing illuminator201to emit illumination light when display panel110is operating in the image display mode. Specifically, controller80displays an image on organic EL panel111, and also turns on light control sheet113of light control panel112. In such a case, light control panel112disposed behind organic EL panel111is in a state where light is transmissive through light control panel112. Accordingly, as illustrated inFIG.9, the objects placed behind display panel110can be easily viewed through the low-brightness portion (dark image portion122) in the image displayed on organic EL panel111. In other words, the user is able to view the relatively high-brightness portion (bright image portion121) in the image, and view the back side through dark image portion122. In view of the above, in image display device10according to the present embodiment, display panel110is operated in the second display mode which is an image display mode in which an image is displayed on organic EL panel111and light control panel112is in a transmissive state, and illuminator201is turned on. As a result, objects500placed on shelf board50and the image displayed on display panel110can be presented to the user at the same time. The second display mode as described above can be switched from another operation mode according to, for example, an instruction from the user. In such a case, for example, controller80may accept an instruction from the user to switch to the second display mode under the condition that the value indicated by the result of detection by illuminance sensor90is less than a threshold value. When display panel110is operated in the image display mode, controller80may operate display panel110in the second display mode when the value indicated by the result of detection by illuminance sensor90is less than the threshold value.
In other words, when display panel110is operated in the image display mode, display panel110can be automatically switched between the first display mode and the second display mode according to the ambient brightness. Controller80is capable of performing illumination control according to the ambient brightness even when display panel110is operating in the second display mode as described above. In other words, when display panel110is operating in the second display mode, for example, as illustrated inFIG.8, controller80controls the brightness of the illumination light emitted from illuminator201according to the result of detection by illuminance sensor90. Accordingly, for example, when the surrounding region is dark, illuminator201is capable of emitting weak illumination light. As a result, it is possible to reduce a decrease in clarity of bright image portion121displayed on display panel110and to secure the visibility of objects500through a portion (dark image portion122) of display panel110. [2. Advantageous Effects, etc.] As described above, image display device10according to the present embodiment includes display panel110, space portion40, illuminator201, controller80, and illuminance sensor90. Display panel110is switchable between an image display mode in which an image is displayed and a transmissive mode in which display panel110is in a transmissive state where the back side of display panel110is visible in the front view of display panel110. Space portion40and illuminator201are provided behind display panel110, and illuminator201emits illumination light to space portion40. Controller80controls illuminator201. Illuminance sensor90detects the ambient illuminance of image display device10. When display panel110is operating in the transmissive mode, controller80performs illumination control for causing illuminator201to emit illumination light with a brightness that is in accordance with the result of the detection by illuminance sensor90. With such a configuration, for example, the brightness of the illumination light can be automatically changed between the daytime when the surrounding region is bright due to a large amount of outside light that is entering and the nighttime when the surrounding region is dark due to little influence from the outside light. Accordingly, for example, the brightness of the illumination light on the back side of display panel110, of which the surrounding is relatively dark and which is operating in the transmissive mode, can be automatically decreased. As a result, the possibility that the reflection of the illumination light (halation) from the back side of display panel110, which functions like a transparent glass plate, is observed is reduced. As described above, image display device10according to the present embodiment is capable of effectively using display panel110that is operable in the transmissive mode. In the present embodiment, when the detection result indicates a first illuminance (L1) in the illumination control, controller80decreases the brightness of the illumination light compared to when the detection result indicates a second illuminance (L2) that is higher than the first illuminance (L1). 
With this configuration, for example, when the surrounding region is bright, the brightness of the illumination light at the back side of display panel110that is operating in the transmissive mode can be automatically increased, and when the surrounding region is dark, the brightness of the illumination light at the back side of display panel110that is operating in the transmissive mode can be automatically decreased. As a result, the user is able to clearly view objects500placed on shelf board50behind display panel110when the surrounding region is bright, and is able to clearly view objects500with no reflection of the illumination light when the surrounding region is dark. Since such control of the illumination light is automatically performed, it is possible to provide natural transparency of display panel110regardless of the brightness of the surrounding region, without bothering the user. As a result, the visibility of objects500placed on shelf board50can be maintained or improved. In the present embodiment, in the illumination control, controller80decreases the brightness of the illumination light with a decrease in illuminance indicated by the detection result. With this configuration, for example, the brightness of the illumination light emitted from illuminator201is automatically changed in multiple steps or steplessly according to the ambient brightness of image display device10. Hence, the brightness of the illumination light in the space behind display panel110is highly responsive to a change in the ambient brightness. Accordingly, for example, even when the fluctuation range of the ambient brightness is large, the natural transparency of display panel110can be maintained while being highly responsive to the change. In the present embodiment, when display panel110operates in the image display mode and light control panel112is in the transmissive state, controller80further performs illumination control according to the detection result. In other words, even when display panel110is operating in the second display mode, controller80is capable of performing illumination control according to the ambient brightness. With this configuration, when an image is displayed on display panel110and the back side of display panel110is relatively bright, dark image portion122, which has a low brightness in the image, is the portion through which the back side of display panel110is visible from the front. In other words, the user is able to view objects500placed on shelf board50through dark image portion122on display panel110. Accordingly, when such dark image portion122is included in the image, the illumination control performed according to the ambient brightness allows objects500that are viewed through dark image portion122to be appropriately illuminated, and reduces the reflection of the illumination light on display panel110. Here, in the present embodiment, controller80is capable of performing illumination control according to the ambient brightness both when display panel110is operating in the transmissive mode (seeFIG.2) and when display panel110is operating in the second display mode (seeFIG.9). In this case, when the detection result indicates a predetermined illuminance in the illumination control, controller80may change the brightness of the illumination light according to whether display panel110is operating in the image display mode (more specifically, the second display mode) or the transmissive mode.
With this configuration, the brightness of the illumination light can be changed between the second display mode and the transmissive mode even when the ambient brightness is the same (a predetermined illuminance). Accordingly, more adaptive illumination control can be performed in such a manner that, for example, the brightness of the illumination light is relatively decreased in the second display mode so that the illumination light hardly affects the displayed image, and the brightness of the illumination light is relatively increased in the transmissive mode so that the user is able to view objects500more clearly (a code sketch of this mode-dependent dimming follows at the end of this passage). Focusing on the structural features of image display device10, in image display device10, space portion40is provided within the four walls that surround display panel110from the top, bottom, left, and right in the front view. Illuminator201is disposed in at least one of the four walls, at a position behind display panel110. Specifically, in the present embodiment, image display device10includes frame body30that surrounds the periphery of display panel110in the front view, and frame body30includes four walls which are top wall31, right side wall32, left side wall33, and shelf board50. In other words, space portion40is provided inside frame body30, at a position behind display panel110. As described above, in the present embodiment, illuminator201is disposed at a position in one of the four walls that forms the outer silhouette of image display device10. Hence, for example, even when image display device10is moved, it is not necessary to adjust the position or orientation of illuminator201. Moreover, for example, the wall in which illuminator201is disposed can be used as a heat dissipating member of illuminator201. In the present embodiment, as illustrated inFIG.4andFIG.5, illuminator201is arranged in top wall31, at a position behind display panel110, and illuminator201emits illumination light toward space portion40that is below illuminator201. In this case, it is unlikely that the illumination light emitted from illuminator201directly enters the eyes of the user in front of image display device10placed on the floor, for example. Since the base material of top wall31is made of a metal such as an aluminum alloy, the heat generated by LED elements203included in illuminator201can be efficiently dissipated. This reduces the degradation of LED elements203. The arrangement position of illuminator201is not limited to top wall31, and may be right side wall32, left side wall33, or shelf board50. In this case, the illumination light can be emitted from the right side, the left side, or the lower side of objects500, which creates a unique shadow on each object500. Illuminator201may be arranged in each of the four walls of frame body30. In other words, the number and arrangement positions of illuminators201may be appropriately determined according to the size, application, or the like of image display device10. In the present embodiment, it can be expressed that illuminator201is disposed in an orientation which causes illuminator201to emit illumination light toward one of the above four walls. In other words, part of frame body30that has functions of protecting and supporting display panel110can be used as a place for placing objects500to be illuminated with illumination light. Accordingly, objects500can be exhibited without separately providing a plate-shaped member for placing objects500.
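The illumination control described with reference toFIG.7andFIG.8can be pulled together in one sketch. The following Python code is a minimal illustration under stated assumptions: the driver objects (panel, illuminator, sensor) and their methods are hypothetical, the lux endpoints, dimming-rate limits, threshold value, and per-mode scale factors are invented for the example, and the linear map is only one of the linear or non-linear relationships the embodiment permits.

import time

# Assumed constants; none of these values are specified in the embodiment.
LUX_DARK, LUX_BRIGHT = 10.0, 500.0    # ambient range mapped onto the dimming range
RATE_MIN, RATE_MAX = 5.0, 100.0       # dimming-rate limits in percent
THRESHOLD_LUX = 50.0                  # threshold for selecting the second display mode
MODE_SCALE = {"transmissive": 1.0, "second_display": 0.6}  # mode-dependent scaling

def dimming_rate(ambient_lux):
    # Monotonically increasing map of FIG. 8: if L1 < L2 then C1 < C2.
    t = (ambient_lux - LUX_DARK) / (LUX_BRIGHT - LUX_DARK)
    t = min(max(t, 0.0), 1.0)         # clamp to the assumed ambient range
    return RATE_MIN + t * (RATE_MAX - RATE_MIN)

def select_image_display_mode(ambient_lux):
    # Automatic choice between the first and second display modes.
    return "second_display" if ambient_lux < THRESHOLD_LUX else "first_display"

def enter_transmissive_mode(panel, illuminator, sensor):
    # S10 to S30 of FIG. 7 for the transmissive mode.
    panel.set_mode("transmissive")              # S10: switch the operation mode
    ambient_lux = sensor.read_illuminance()     # S20: obtain the detection result
    rate = dimming_rate(ambient_lux) * MODE_SCALE["transmissive"]
    illuminator.set_dimming_rate(rate)          # S30: illumination control

def monitor(panel, illuminator, sensor, period_s=1.0):
    # Re-read the illuminance at predetermined intervals and refresh the
    # illumination while the panel remains in a light-transmitting mode.
    while panel.mode() in ("transmissive", "second_display"):
        scale = MODE_SCALE.get(panel.mode(), 1.0)
        illuminator.set_dimming_rate(dimming_rate(sensor.read_illuminance()) * scale)
        time.sleep(period_s)

The scale factor of 0.6 for the second display mode is merely one way to realize the adaptive behavior described above: the illumination is relatively dimmed so as not to wash out the displayed image, and runs at full strength in the transmissive mode.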
The method of controlling image display device10according to the present embodiment will be described, for example, as follows. Image display device10includes display panel110, space portion40, and illuminator201described above. The method of controlling image display device10includes: obtaining (S20inFIG.7) a result of detection of illuminance by illuminance sensor90; and when display panel110is operating in the transmissive mode, causing illuminator201(S30inFIG.7) to emit illumination light with a brightness that is in accordance with the detection result obtained in the obtaining. With this control method, as described above, for example, the brightness of the illumination light on the back side of display panel110, of which the surrounding is relatively dark and which is operating in the transmissive mode, can be automatically decreased. As a result, the possibility that the reflection of the illumination light (halation) from the back side of display panel110, which functions like a transparent glass plate, is observed is reduced. As described above, the method of controlling image display device10according to the present embodiment is capable of effectively using display panel110that is operable in the transmissive mode. Other Embodiments As described above, the embodiment has been described as an example of the technique disclosed in the present application. However, the technique according to the present disclosure is not limited to such an example, and is applicable to embodiments to which various kinds of modifications, replacements, additions, deletions, and the like have appropriately been made. Moreover, structural elements described in the above embodiment may be combined to obtain a new embodiment. Another embodiment will be described below as an example. For example, display panel110according to an embodiment may include a different type of display device from organic EL panel111, as a display device for displaying an image. Specifically, instead of organic EL panel111, an inorganic EL panel which is a self-emitting display device like organic EL panel111may be included in display panel110. The second display mode may be treated as one type of “transmissive mode” because it is an operation mode in which light control panel112included in display panel110is in a transmissive state. For example, the operation mode of image display device10illustrated inFIG.2may be a “first transmissive mode”, and the operation mode of image display device10illustrated inFIG.9may be a “second transmissive mode”. Shelf board50does not always have to be part of frame body30. For example, shelf board50may be disposed at a given position in the vertical direction of frame body30including four walls which are the top wall, the bottom wall, the left side wall, and the right side wall, such that shelf board50is laid between the left side wall and the right side wall. In such a case, an illuminator may be disposed on the lower surface of shelf board50such that illumination light is emitted to the objects placed on the wall below shelf board50. In other words, by providing shelf board50on frame body30, two tiers of top and bottom shelves for placing objects may be provided. The optical member of illuminator201that limits the light distribution angle of light source unit202may be an optical member of a type different from that of the micro-louver (for example, a lens or a reflector).
It has been described that light control sheet113according to the embodiment is switchable from the non-transmissive state to the transmissive state by an application of a predetermined voltage (by turning on light control sheet113). However, light control sheet113may be switched from the transmissive state to the non-transmissive state by an application of a predetermined voltage. In this case, for example, even when the main power of image display device10is off, light control sheet113is maintained in the transmissive state. Accordingly, even when the main power of image display device10is off, it is possible to allow the user in front of display panel110to view, through display panel110, objects500placed behind display panel110. In this case, in order to illuminate objects500, image display device10may include an electric circuit which is capable of turning on illuminator201(causing illuminator201to illuminate) even when the main power of image display device10is off. Controller80may be separate controllers which are an illumination controller which controls illuminator201and a display controller which controls display panel110. In other words, the controller which controls illuminator201does not have to include a function of controlling display panel110. In the embodiment, it has been described that controller80is housed in shelf board50, however, the member in which controller80is disposed is not limited to shelf board50, but, for example, may be top wall31, right side wall32, or left side wall33. Moreover, controller80may be disposed outside frame body30. For example, in order to reduce the thickness of shelf board50, electric devices such as controller80may be housed in a housing different from frame body30. Image display device10may include, below shelf board50, a stand or the like for placing image display device10. The placement of image display device10is not particularly limited. For example, image display device10may be attached to the wall surface by, for example, a wall hanging unit. Display panel110may be arranged in one section of a rack having a plurality of sections arranged in the vertical and/or lateral direction where objects500can be placed. Accordingly, it is possible to configure an image display device (or a rack) such that exhibition of objects500and display of an image can be performed in at least one section, and objects500can be exhibited or housed by using another one or more sections. Objects500do not always have to be placed on shelf board50. Shelf board50may function only as a portion of the building frame of image display device10. Image display device10does not have to include shelf board50for placing objects500. In such a case, too, the placement surface on which image display device10is placed is used as a placement surface for placing objects500, so that the user in front of display panel110is able to view objects500placed behind display panel110operating in the transmissive mode. In other words, space portion40is provided in the region surrounded by top wall31, right side wall32, left side wall33, and the placement surface, and objects500can be placed in space portion40. In addition, illumination light can be emitted from illuminator201to objects500. In image display device10, the periphery of display panel110does not have to be surrounded by four walls. Image display device10may include, for example, a rectangular frame that surrounds the periphery of display panel110instead of frame body30. 
Even in such a case, for example, the illuminator may be disposed on the upper portion of the frame, at a position behind display panel110, such that illumination light can be emitted to space portion40behind display panel110. In this case, space portion40may be defined, for example, as a space behind display panel110, a space in a range covered by display panel110in the front view, and a space in a range where the illumination light emitted from the illuminator reaches. As described above, the embodiment has been described as an example of the technique disclosed in the present disclosure. For this purpose, the accompanying drawings and detailed description have been provided. Accordingly, the structural elements described in the accompanying drawings and detailed description may include not only structural elements which are essential for solving the problem but also structural elements which are not essential for solving the problem but are provided for illustrating the technique. Therefore, the non-essential structural elements described in the attached drawings and/or the detailed description should not be instantly acknowledged to be essential structural elements. Since the above embodiment is intended to illustrate the technique in the present disclosure, it is possible to make various kinds of modifications, replacements, additions, deletions, and the like within the scope of the claims or an equivalent scope thereof. INDUSTRIAL APPLICABILITY The present disclosure is applicable to an image display device, such as a television receiver, a monitor display, or a digital signage. | 42,986 |
11862083 | DETAILED DESCRIPTION An electronic device may have a display. The display may include a display panel with an array of pixels for displaying images. The pixels may be thin-film organic light-emitting diode pixels or pixels formed from crystalline semiconductor light-emitting diode dies mounted on a substrate. The display may have a transparent display cover layer that overlaps and protects the display panel. The display may have a rectangular outline with rounded corners or an outline of other suitable shapes. On the peripheral edge of the display, the display cover layer may have a curved cross-sectional profile. As light travels through the curved surface of the peripheral portion of the display cover layer, the light may be refracted. This light refraction may serve to increase the apparent lateral dimension of the display when viewed on axis and thereby help minimize the amount of visible inactive border region surrounding the display. Light refraction through the peripheral portion of the display cover layer may also make a portion of the image on the display panel visible to off-axis viewers. To help ensure that the portion of the image viewable by off-axis viewers does not have an undesired yellow appearance or other undesired color cast, color cast compensation layers and/or other optical structures may be incorporated into the display. These display structures may include, for example, a diffusion layer and a guest-host liquid crystal layer with a yellow-light-absorbing dye that forms an anisotropic yellow-light-absorbing layer. A top view of an illustrative electronic device of the type that may be provided with a display having color cast compensation structures is shown inFIG.1. Device10ofFIG.1may be a portable device such as a wristwatch having a wristband such as wristband16, may be a portable device without a wristband such as a cellular telephone or tablet computer, or may be other suitable electronic equipment (e.g., a desktop computer, a voice-control speaker with a display panel, a television or other non-portable display, a head-mounted device, an embedded system such as a system built into a vehicle or home, an electronic device accessory, and/or other electronic device). Illustrative configurations in which device10is a wristwatch may sometimes be described herein as an example. As shown inFIG.1, device10includes a housing such as housing12. Housing12may be formed from polymer, metal, glass, crystalline material such as sapphire, ceramic, fabric, fibers, fiber composite material, natural materials such as wood and cotton, other materials, and/or combinations of such materials. Housing12may be configured to form housing walls. The housing walls may enclose one or more interior regions in which internal device components18are mounted and may separate the interior region of device10from the exterior environment surrounding device10. In some configurations, an opening may be formed in housing12for a data port or a power port, to accommodate audio components, or to accommodate other devices. Clear housing regions may be used to form optical component windows. In the illustrative arrangement ofFIG.1, a transparent housing layer may cover the upper surface of device10and may serve as a protective display cover layer for display14. If desired, dielectric housing structures may be used to form radio-transparent areas for antennas and wireless power components.
Electrical components18in the interior of device10may include integrated circuits, discrete components, light-emitting components, sensors, and/or other circuits and may, if desired, be interconnected using signal paths in one or more printed circuits. Electrical components18may include control circuitry. The control circuitry may include storage and processing circuitry for supporting the operation of device10. The storage and processing circuitry may include storage such as hard disk drive storage, nonvolatile memory (e.g., flash memory or other electrically-programmable-read-only memory configured to form a solid state drive), volatile memory (e.g., static or dynamic random-access-memory), etc. Processing circuitry in the control circuitry may be used to control the operation of device10. For example, the processing circuitry may use sensors and other input-output circuitry to gather input and to provide output and/or to transmit signals to external equipment. The processing circuitry may be based on one or more microprocessors, microcontrollers, digital signal processors, baseband processors, power management units, audio chips, application specific integrated circuits, etc. The control circuitry may include wired and/or wireless communications circuitry (e.g., antennas and associated radio-frequency transceiver circuitry such as cellular telephone communications circuitry, wireless local area network communications circuitry, etc.). The communications circuitry of the control circuitry may allow device10to communicate with other electronic devices. For example, the control circuitry (e.g., communications circuitry in the control circuitry) may be used to allow wired and/or wireless control commands and other communications to be conveyed between devices such as cellular telephones, tablet computers, laptop computers, desktop computers, head-mounted devices, handheld controllers, wristwatch devices, other wearable devices, keyboards, computer mice, remote controls, speakers, accessory displays, accessory cameras, and/or other electronic devices. Wireless communications circuitry may, for example, wirelessly transmit control signals and other information to external equipment in response to receiving user input or other input from sensors or other devices in components18. Input-output circuitry in components18of device10may be used to allow data to be supplied to device10and to allow data to be provided from device10to external devices. The input-output circuitry may include input devices that gather user input and other input and may include output devices that supply visual output, audible output, or other output. Output may be provided using light-emitting diodes (e.g., crystalline semiconductor light-emitting diodes for status indicators and/or displays, organic light-emitting diodes in displays and other components), lasers, and other light-emitting devices, audio output devices (e.g., tone generators and/or speakers), haptic output devices (e.g., vibrators, electromagnetic actuators, piezoelectric actuators, and/or other equipment that supplies a user with haptic output), and other output devices. The input-output circuitry of device10(e.g., the input-output circuitry of components18) may include sensors. 
Sensors for device10may include force sensors (e.g., strain gauges, capacitive force sensors, resistive force sensors, etc.), audio sensors such as microphones, touch and/or proximity sensors such as capacitive sensors (e.g., a two-dimensional capacitive touch sensor integrated into a display, a two-dimensional capacitive touch sensor and/or a two-dimensional force sensor overlapping a display, and/or a touch sensor or force sensor that forms a button, trackpad, or other input device not associated with a display), and other sensors. Touch sensors for a display or for other touch components may be based on an array of capacitive touch sensor electrodes, acoustic touch sensor structures, resistive touch components, force-based touch sensor structures, a light-based touch sensor, or other suitable touch sensor arrangements. If desired, a display may have a force sensor for gathering force input (e.g., a two-dimensional force sensor may be used in gathering force input on a display). If desired, the sensors may include optical sensors such as optical sensors that emit and detect light, optical touch sensors, optical proximity sensors, and/or other touch sensors and/or proximity sensors, monochromatic and color ambient light sensors, image sensors, fingerprint sensors, ultrasonic sensors, temperature sensors, sensors for measuring three-dimensional non-contact gestures (“air gestures”), pressure sensors, sensors for detecting position, orientation, and/or motion (e.g., accelerometers, magnetic sensors such as compass sensors, gyroscopes, and/or inertial measurement units that contain some or all of these sensors), health sensors, radio-frequency sensors (e.g., sensors that gather position information, three-dimensional radio-frequency images, and/or other information using radar principles or other radio-frequency sensing), depth sensors (e.g., structured light sensors and/or depth sensors based on stereo imaging devices), optical sensors such as self-mixing sensors and light detection and ranging (lidar) sensors that gather time-of-flight measurements, humidity sensors, moisture sensors, gaze tracking sensors, three-dimensional sensors (e.g., time-of-flight image sensors, pairs of two-dimensional image sensors that gather three-dimensional images using binocular vision, three-dimensional structured light sensors that emit an array of infrared light beams or other structured light using arrays of lasers or other light emitters and associated optical components and that capture images of the spots created as the beams illuminate target objects, and/or other three-dimensional image sensors), facial recognition sensors based on three-dimensional image sensors, and/or other sensors. In some configurations, components18may include mechanical devices for gathering input (e.g., buttons, joysticks, scrolling wheels, key pads with movable keys, keyboards with movable keys, and other devices for gathering user input). During operation, device10may use sensors and/or other input-output devices in components18to gather user input (e.g., buttons may be used to gather button press input, touch and/or force sensors overlapping displays can be used for gathering user touch screen input and/or force input, touch pads and/or force sensors may be used in gathering touch and/or force input, microphones may be used for gathering audio input, etc.).
The control circuitry of device10can then take action based on this gathered information (e.g., by transmitting the information over a wired or wireless path to external equipment, by supplying a user with output using a haptic output device, visual output device, an audio component, or other input-output device in housing12, etc.). If desired, electronic device10(e.g., components18) may include a battery or other energy storage device, connector ports for supporting wired communications with ancillary equipment and for receiving wired power, and other circuitry. In some configurations, device10may serve as an accessory and/or may include a wired and/or wireless accessory (e.g., a keyboard, computer mouse, remote control, trackpad, etc.). Device10may include one or more displays such as display14(e.g., a display that includes a two-dimensional capacitive touch sensor and/or other touch sensor or a display that is insensitive to touch). Display14may, for example, be a light-emitting diode display such as an organic light-emitting diode display or a display having an array of pixels formed from crystalline light-emitting diode dies such as micro-light-emitting diode dies. The pixels of display14may be overlapped by a transparent housing structure (sometimes referred to as a transparent display cover layer, protective cover layer structures, etc.). The light-emitting portions of display14(e.g., thin-film light-emitting diodes or other light-emitting diodes on a substrate layer) may sometimes be referred to as forming a display panel, display layer, pixel array, or pixel array layer. As shown inFIG.2, display14may have a display panel such as display panel14P with an array of pixels P forming an active area in which images are displayed. Display14may have an associated protective cover layer such as transparent display cover layer14C. Display cover layer14C may be formed from one or more layers of glass, clear polymer, crystalline material such as sapphire or other crystalline material, and/or other transparent structure(s). The presence of layer14C may help protect the outer surface of display panel14P from scratches, while allowing a user such as on-axis viewer20who is viewing device10in on-axis direction22to view displayed images through layer14C. The center of display cover layer14C may have a planar surface area (as an example). During operation, pixels P emit light that travels upwardly (e.g., outwardly in the +Z direction ofFIG.2) to viewer20so that the image on display panel14P is viewable as viewer20views display14in on-axis (parallel or nearly parallel to +Z) direction22. Part of the image on display panel14P is also viewable through the curved edge of display cover layer14C as viewer20views display14in on-axis direction22. As shown inFIG.2, peripheral edge38of display cover layer14C has a curved cross-sectional profile. As a result, on-axis light rays from pixels P such as illustrative light ray24that are traveling in a direction oriented at a relatively small non-zero angle A with respect to surface normal n of display panel14P may be refracted back towards the Z axis (e.g., a smaller angle A) and towards viewer20as indicated by refracted ray26. Due to this light refracting property, the curved surface profile of display cover layer14C helps move the outermost visible boundary of display14in direction40.
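The refraction geometry just described can be sketched numerically. The following minimal Python sketch is illustrative only and not part of the patent: the cover-layer refractive index of 1.5 matches the example value given later for layer14C, while the 30-degree local surface tilt at the curved edge is an arbitrary assumed value chosen for the demonstration.

import math

def snell_exit_angle(theta_in_deg, n1=1.5, n2=1.0):
    # Refraction angle (degrees) for light passing from medium n1 into
    # medium n2; returns None on total internal reflection.
    s = n1 / n2 * math.sin(math.radians(theta_in_deg))
    if abs(s) > 1.0:
        return None  # totally internally reflected; ray stays guided in the layer
    return math.degrees(math.asin(s))

# Critical angle for a glass-like cover layer in air: rays steeper than
# this are guided laterally toward the curved edge (like ray34 in FIG.2).
critical = math.degrees(math.asin(1.0 / 1.5))
print(f"critical angle ~ {critical:.1f} deg")  # ~41.8 deg

# At the curved peripheral edge the local surface normal tilts outward,
# so a ray meeting the tilted surface at a modest local angle escapes and
# is bent; the 30 deg tilt below is an illustrative assumption.
surface_tilt = 30.0
ray_angle = 40.0  # ray angle vs. panel normal, inside the cover layer
exit_local = snell_exit_angle(ray_angle - surface_tilt)
print(f"exit angle vs. tilted local normal: {exit_local:.1f} deg")

Rays that meet the tilted peripheral surface below the critical angle escape and are redirected, which is what shifts the visible image boundary outward in direction40 for an on-axis viewer.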
In effect, the on-axis light refraction of the curved edge of display cover layer14C helps to enlarge the size of the image being provided by pixels P when display14is viewed on axis. This may help minimize the size of any inactive display border that is visible to on-axis viewer20who is viewing device10in on-axis direction22. When a viewer is viewing device10from an off-axis direction (e.g., when off-axis viewer32is viewing display14in off-axis direction42), the viewer may view a portion of the image on display panel14P through the curved peripheral portion of display cover layer14C (and generally cannot view any portion of the image on display panel14P through the planar central area on the upper surface of display cover layer14C due to the steep angle of view associated with direction42). For example, off-axis emitted light rays such as light ray28, which are angled at relatively large angles A relative to surface normal n, will pass to off-axis viewer32through curved surface38as shown by illustrative refracted light ray30. The light viewed along the edge of display cover layer14C may also include a guided light portion due to total internal reflection within layer14C. For example, off-axis light from pixels P of display panel14P such as off-axis light ray34will reflect internally from surface36of display cover layer14C in accordance with the principle of total internal reflection rather than passing outwardly towards viewer20through surface36. Rays such as ray34may also reflect from the surface of display panel14P. In this way, off-axis light rays such as ray34may propagate laterally across display14towards the peripheral edge of display14and be visible to off-axis viewers such as off-axis viewer32along the curved peripheral edge of display cover layer14C. The portion of the image that is visible along the peripheral edge of display cover layer14C may have an undesired color cast. This is due to changes in the color cast of light-emitting diode light emission that may be exhibited by pixels P as a function of emission angle.FIG.3is a graph in which the intensity I of image light from pixels P has been plotted as a function of emission angle relative to pixel array surface normal n. When light is emitted parallel to the Z axis ofFIG.2(e.g., at an angle A=0°), the emitted light tends not to have any particular color cast (e.g., the light can have a neutral white color cast). At a light emission angle of about 25°, the color of the emitted light may tend to be bluish. At high angles, the color of the emitted light may tend to be yellowish. These color cast effects are due to the characteristics of the light-emitting diodes in pixels P. For on-axis viewing, color cast effects tend to be negligible and not noticed by on-axis viewers. When a viewer is viewing a portion of the image on display panel14P that is visible from an off-axis perspective (e.g., as shown by off-axis viewer32ofFIG.2) through the curved side portion of display cover layer14C, the yellowish color cast associated with off-angle light rays such as light rays28and34ofFIG.2will tend to make the visible image yellowish. To counteract this undesired image yellowing effect, display14may be provided with color cast compensation structures (sometimes referred to as tint-compensating optical layers, etc.). In a first illustrative arrangement, a structure (e.g., an anisotropic layer) may be incorporated into display14that exhibits preferential light absorption for the undesired color cast at higher angles A.
As shown inFIG.4, for example, the light transmission for yellow light (transmission Ty) through this type of structure may be high at angles near 0° and low at higher angles (e.g., angles of at least 50°, at least 65°, at least 80°, etc.). An anisotropic light-absorbing layer such as a guest-host liquid crystal layer with a yellow-light-absorbing dye and a corresponding anisotropic yellow-light absorption characteristic or other suitable optical layer may exhibit this type of preferential yellow-light absorption as a function of increasing angle relative to surface normal n. In a second illustrative arrangement, a light diffuser layer such as diffuser layer44ofFIG.5may be incorporated into display14. Diffuser layer44, which may sometimes be referred to as a diffuser, light diffuser, haze layer, etc., may be formed from a polymer or other transparent material (e.g., material46). Light-scattering structures may be formed in layer44to diffuse light. The light-scattering structures may include surface roughness features and/or light-scattering particles48(e.g., inorganic light-scattering particles such as silica microspheres or other light-scattering particles with a refractive index differing from the refractive index of material46), voids, gas bubbles, etc. When light passes through layer44, the light will tend to be diffused and scattered. For example, light rays propagating at a relatively high angle (see, e.g., light ray50) may be scattered towards a lower angle (see, e.g., scattered ray52), whereas light rays propagating at a relatively low angle (see, e.g., light ray54) may be scattered towards a higher angle (see, e.g., scattered ray56). The presence of diffuser layer44in display14may therefore tend to mix the light-emission angles of emitted light rays and reduce angularly dependent color cast effects. FIGS.6,7, and8are cross-sectional side views of portions of displays containing illustrative color cast compensation optical layers. In the example ofFIG.6, display panel14P has a pixel array60formed from an array of pixels P (e.g., light-emitting diode pixels). Pixel array60may be overlapped by a circular polarizer layer in panel14P such as circular polarizer62to help suppress ambient light reflections from structures in pixel array60. Circular polarizer62may have a quarter wave plate and a linear polarizer. Guest-host liquid crystal layer64may have polymer layers68(e.g., liquid crystal alignment layers) and a layer of liquid crystal material such as liquid crystal material66sandwiched between layers68. Material66may be a guest-host material that includes a light-absorbing dye such as an anisotropic yellow-light-absorbing dye. Layer64is configured to exhibit an angularly dependent yellow light absorption characteristic (e.g., a characteristic that preferentially absorbs off-axis yellow light). Optional diffuser layer44may be placed between liquid crystal layer64and cover glass14C. If desired, one or more layers of adhesive may be used to attach the layers of display14together. For example, optically clear adhesive layer70may be interposed between diffuser layer44and cover layer14C. With the arrangement ofFIG.6, off-axis yellow light is preferentially absorbed by layer64and light of different colors is scattered by layer44, thereby helping to reduce yellowing of the image viewed from an off-axis direction such as direction42at the edge of cover layer14C (FIG.2).
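The compensation achieved by an arrangement of this type can be sketched numerically. In the toy Python model below, the functional forms and constants are illustrative assumptions chosen to mimic the qualitative curves ofFIG.3(an emitter that grows yellower off axis) andFIG.4(a layer whose yellow transmission falls off axis); they are not values from the patent.

import math

def emitted_yellow_excess(theta_deg):
    # Toy emitter color cast vs. angle: neutral on axis, increasingly
    # yellow off axis (the qualitative trend of FIG.3).
    return 1.0 + math.sin(math.radians(theta_deg)) ** 4

def yellow_transmission(theta_deg):
    # Toy anisotropic absorber: near-unity yellow transmission on axis,
    # falling at grazing angles (the qualitative shape of FIG.4).
    return 1.0 - 0.55 * math.sin(math.radians(theta_deg)) ** 4

for theta in (0, 25, 50, 70):
    cast = emitted_yellow_excess(theta)
    corrected = cast * yellow_transmission(theta)
    print(f"{theta:3d} deg: yellow excess {cast:.2f} -> {corrected:.2f} after layer")

The product stays near unity across angles, which is the desired neutral cast. In practice the dye loading and alignment would be tuned against the measured emitter curve, and diffuser layer44further flattens any residual ripple by mixing emission angles.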
In the example ofFIG.7, display panel14P includes pixel array60and overlapping circular polarizer62. Circular polarizer62may include a linear polarizer such as linear polarizer72and a wave plate interposed between linear polarizer72and pixel array60. The wave plate may be a quarter wave plate formed from guest-host liquid crystal layer64(e.g., a layer that serves as an anisotropic yellow-light absorbing layer as well as serving as a quarter wave plate). Optional diffuser layer44may be interposed between display cover layer14C and circular polarizer62. Optically clear adhesive layer70may be formed between diffuser layer44and display cover layer14C. In the illustrative configuration ofFIG.8, light-scattering particles48have been embedded in optically clear adhesive layer70to form a light-diffusing layer that is interposed between display cover layer14C and circular polarizer62. Circular polarizer62ofFIG.8may be formed from linear polarizer72and liquid crystal layer64. Layer64may be interposed between layer72and pixel array60. In addition to or instead of using color cast compensation structures such as a light diffuser layer and/or anisotropic yellow-absorbing layer in display14, display14may include color cast compensation structures formed from antireflection structures and/or may include light guide structures. Consider, as an example, the illustrative configuration of display14that is shown inFIG.9. As shown inFIG.9, antireflection layer80may be incorporated between display cover layer14C and display panel14P. Antireflection layer80may be formed from a thin-film interference filter having a stack of N dielectric thin-film layers82. The value of N may be 5-10, at least 3, at least 5, fewer than 20, or other suitable number. If desired, antireflection layer80may be formed from a single layer of material. Dielectric layers82of layer80may have alternating refractive index values (as an example). The refractive index values and thicknesses of layers82may be configured so that layer80exhibits a desired wavelength dependent light transmission. For example, layer80may be configured to form a yellow-light antireflection coating or other coating that passes more yellow light than non-yellow light and that reflects more non-yellow light than yellow light. With this type of arrangement, yellowish off-axis light rays emitted by display panel14P (see, e.g., yellowish off-axis light84ofFIG.9) may reflect internally at the interface (surface36) between display cover layer14C and surrounding air, but will be preferentially passed through layer80back into panel14P rather than being guided further to the right within display cover layer14C. This preferential antireflection performance of layer80at yellow light wavelengths tends to remove the yellow color cast from the light that is being provided at the exposed peripheral edge of display cover layer14C. Yellow tint in the off-axis images presented by display14may also be reduced by shunting yellow light away from layer14C. This type of approach is shown inFIG.10. In the illustrative configuration ofFIG.10, intermediate optically transparent layer86has been formed between the inner surface of display cover layer14C and the opposing upper surface of pixel array60. Emitted light from the pixels of array60that is close to parallel with surface normal n (on-axis light92) will pass through layers86and14C for viewing by on-axis viewer20.
Emitted light from array60that is highly angled with respect to surface normal n (off-axis light88) will be reflected at the interface between layer86and display layer14C due to a refractive index difference between layer86and layer14C. This shunts off-axis light88into a narrow band such as band90below the peripheral edge of layer14C, leaving curved peripheral edge surface38of layer14C free of off-axis image light (including undesired yellowish off-axis light). If desired, an opaque material may overlap band90to block light88. Layer14C may have a refractive index (e.g., 1.5) that is larger than the refractive index of layer86(e.g., 1.4) or that is smaller than the refractive index of layer86. As described above, one aspect of the present technology is the gathering and use of information such as sensor information. The present disclosure contemplates that in some instances, data may be gathered that includes personal information data that uniquely identifies or can be used to contact or locate a specific person. Such personal information data can include demographic data, location-based data, telephone numbers, email addresses, twitter ID's, home addresses, data or records relating to a user's health or level of fitness (e.g., vital signs measurements, medication information, exercise information), date of birth, username, password, biometric information, or any other identifying or personal information. The present disclosure recognizes that the use of such personal information, in the present technology, can be used to the benefit of users. For example, the personal information data can be used to deliver targeted content that is of greater interest to the user. Accordingly, use of such personal information data enables users to have calculated control of the delivered content. Further, other uses for personal information data that benefit the user are also contemplated by the present disclosure. For instance, health and fitness data may be used to provide insights into a user's general wellness, or may be used as positive feedback to individuals using technology to pursue wellness goals. The present disclosure contemplates that the entities responsible for the collection, analysis, disclosure, transfer, storage, or other use of such personal information data will comply with well-established privacy policies and/or privacy practices. In particular, such entities should implement and consistently use privacy policies and practices that are generally recognized as meeting or exceeding industry or governmental requirements for maintaining personal information data private and secure. Such policies should be easily accessible by users, and should be updated as the collection and/or use of data changes. Personal information from users should be collected for legitimate and reasonable uses of the entity and not shared or sold outside of those legitimate uses. Further, such collection/sharing should occur after receiving the informed consent of the users. Additionally, such entities should consider taking any needed steps for safeguarding and securing access to such personal information data and ensuring that others with access to the personal information data adhere to their privacy policies and procedures. Further, such entities can subject themselves to evaluation by third parties to certify their adherence to widely accepted privacy policies and practices.
In addition, policies and practices should be adapted for the particular types of personal information data being collected and/or accessed and adapted to applicable laws and standards, including jurisdiction-specific considerations. For instance, in the United States, collection of or access to certain health data may be governed by federal and/or state laws, such as the Health Insurance Portability and Accountability Act (HIPAA), whereas health data in other countries may be subject to other regulations and policies and should be handled accordingly. Hence different privacy practices should be maintained for different personal data types in each country. Despite the foregoing, the present disclosure also contemplates embodiments in which users selectively block the use of, or access to, personal information data. That is, the present disclosure contemplates that hardware and/or software elements can be provided to prevent or block access to such personal information data. For example, the present technology can be configured to allow users to select to “opt in” or “opt out” of participation in the collection of personal information data during registration for services or anytime thereafter. In another example, users can select not to provide certain types of user data. In yet another example, users can select to limit the length of time user-specific data is maintained. In addition to providing “opt in” and “opt out” options, the present disclosure contemplates providing notifications relating to the access or use of personal information. For instance, a user may be notified upon downloading an application (“app”) that their personal information data will be accessed and then reminded again just before personal information data is accessed by the app. Moreover, it is the intent of the present disclosure that personal information data should be managed and handled in a way to minimize risks of unintentional or unauthorized access or use. Risk can be minimized by limiting the collection of data and deleting data once it is no longer needed. In addition, and when applicable, including in certain health related applications, data de-identification can be used to protect a user's privacy. De-identification may be facilitated, when appropriate, by removing specific identifiers (e.g., date of birth, etc.), controlling the amount or specificity of data stored (e.g., collecting location data at a city level rather than at an address level), controlling how data is stored (e.g., aggregating data across users), and/or other methods. Therefore, although the present disclosure broadly covers use of information that may include personal information data to implement one or more various disclosed embodiments, the present disclosure also contemplates that the various embodiments can also be implemented without the need for accessing personal information data. That is, the various embodiments of the present technology are not rendered inoperable due to the lack of all or a portion of such personal information data. The foregoing is merely illustrative and various modifications can be made to the described embodiments. The foregoing embodiments may be implemented individually or in any combination. | 30,541 |
11862084 | DETAILED DESCRIPTION Technical solutions of the embodiments of the present disclosure will be clearly and completely described hereinafter with reference to the accompanying drawings of the embodiments of the present disclosure. Apparently, the described embodiments are a part rather than all of the embodiments of the present disclosure. Based on the described embodiments of the present disclosure, all other embodiments obtained by those skilled in the art without creative efforts fall within the protection scope of the present disclosure. Transistors used in all the embodiments of the present disclosure may be triodes, thin film transistors or field effect transistors or other devices with the same characteristics. In the embodiments of the present disclosure, in order to distinguish two electrodes of a transistor other than a control electrode, one of the two electrodes is referred to as a first electrode, and the other of the two electrodes is referred to as a second electrode. In actual operations, when the transistor is a triode, the control electrode may be a base electrode, the first electrode may be a collector electrode, and the second electrode may be an emitter electrode; or, the control electrode may be a base electrode, the first electrode may be an emitter electrode, and the second electrode may be a collector electrode. In actual operations, when the transistor is a thin film transistor or a field effect transistor, the control electrode may be a gate electrode, the first electrode may be a source electrode, and the second electrode may be a drain electrode; or, the control electrode may be a gate electrode, the first electrode may be a drain electrode, and the second electrode may be a source electrode. As shown inFIG.1, a pixel circuit according to at least one embodiment of the present disclosure includes a driving circuit11, a first light-emitting control circuit12, a light-emitting element EL, a first initialization circuit13, and a second initialization circuit14. The driving circuit11is configured to generate, under the control of a control terminal of the driving circuit, a driving current for driving the light-emitting element EL, and a cathode of the light-emitting element is electrically connected to a first voltage line V1. The first light-emitting control circuit12is electrically connected to a light-emitting control line E1, the driving circuit11, and an anode of the light-emitting element EL, and is configured to control, under the control of a light-emitting control signal provided by the light-emitting control line E1, the driving circuit11to be connected to or disconnected from the anode of the light-emitting element EL. The first initialization circuit13is electrically connected to a first initialization control line R1, the control terminal of the driving circuit11, and an initialization voltage line I1, and is configured to write, under the control of a first initialization control signal provided by the first initialization control line R1, an initialization voltage provided by the initialization voltage line I1, to the control terminal of the driving circuit11. 
The second initialization circuit14is electrically connected to a second initialization control line R2, the anode of the light-emitting element EL, and an initial data line D02, and is configured to write, under the control of a second initialization control signal provided by the second initialization control line R2, an initial data voltage provided by the initial data line D02, to the anode of the light-emitting element EL. In the pixel circuit described in at least one embodiment of the present disclosure, the initialization voltage is written to the control terminal of the driving circuit11via the first initialization circuit13, to initialize the control terminal of the driving circuit11, and the initial data voltage is written to the anode of the light-emitting element EL via the second initialization circuit14, to initialize the anode of the light-emitting element EL. By adjusting the initial data voltage, light emission of the light-emitting element caused by a leakage current, and lateral leakage at low gray scales, both of which can occur when initializing the anode of the light-emitting element, can be prevented. Optionally, the first initialization control line and the second initialization control line may be a same initialization control line; or, the first initialization control line and the second initialization control line may be different. In at least one embodiment of the present disclosure, the pixel circuit is included in a display substrate, and the display substrate includes a base substrate, multiple rows of gate lines, multiple columns of display data lines, and multiple rows and multiple columns of pixel circuits, where the multiple rows of gate lines, the multiple columns of display data lines, and the multiple rows and multiple columns of pixel circuits are provided on the base substrate; the n-th row of pixel circuits is electrically connected to the n-th row of first initialization control line and the n-th row of gate line; the n-th row of first initialization control line and the n-th row of second initialization control line are a same initialization control line; the n-th row of first initialization control signal on the n-th row of first initialization control line is the same as the (n−1)-th row of gate driving signal on the (n−1)-th row of gate line; n is a positive integer.
In at least one embodiment of the present disclosure, the pixel circuit is included in a display substrate, and the display substrate includes a base substrate, multiple rows of gate lines, multiple columns of display data lines, and multiple rows and multiple columns of pixel circuits, where the multiple rows of gate lines, the multiple columns of display data lines, and the multiple rows and multiple columns of pixel circuits are provided on the base substrate; the n-th row of pixel circuits is electrically connected to the n-th row of first initialization control line, the n-th row of second initialization control line, and the n-th row of gate line; the n-th row of first initialization control line and the n-th row of second initialization control line are different initialization control lines; the n-th row of first initialization control signal on the n-th row of first initialization control line is the same as the (n−1)-th row of gate driving signal on the (n−1)-th row of gate line; the n-th row of second initialization control signal on the n-th row of second initialization control line is the same as the n-th row of gate driving signal on the n-th row of gate line; n is a positive integer. In specific implementations, the rows of pixel circuits included in the display substrate may be sequentially arranged along an extension direction of the display data lines. For example, the rows of pixel circuits may be sequentially arranged in a direction toward a side of the display substrate where the driving chip is provided, and the present disclosure is not limited thereto. Optionally, the first voltage line may be a ground line or a low-voltage signal line, and the present disclosure is not limited thereto. In at least one embodiment of the present disclosure, the light-emitting element EL may be an OLED (Organic Light-Emitting Diode), and the present disclosure is not limited thereto. Optionally, the second initialization circuit includes a first transistor; a gate electrode of the first transistor is electrically connected to the second initialization control line, a first electrode of the first transistor is electrically connected to the initial data line, and a second electrode of the first transistor is electrically connected to the anode of the light-emitting element. Optionally, the first initialization circuit includes a second transistor; a gate electrode of the second transistor is electrically connected to the first initialization control line, a first electrode of the second transistor is electrically connected to the initialization voltage line, and a second electrode of the second transistor is electrically connected to the control terminal of the driving circuit. As shown inFIG.2, based on at least one embodiment of the pixel circuit shown inFIG.1, the pixel circuit further includes a second light-emitting control circuit21, a storage circuit22, a data writing circuit23, and a compensation circuit24, where the first light-emitting control circuit12is electrically connected to a second terminal of the driving circuit11. The second light-emitting control circuit21is electrically connected to the light-emitting control line E1, a second voltage line V2, and a first terminal of the driving circuit11, and is configured to control, under the control of the light-emitting control signal, the first terminal of the driving circuit11to be connected to or disconnected from the second voltage line V2.
The storage circuit22is electrically connected to the control terminal of the driving circuit11, and is configured to maintain a potential of the control terminal of the driving circuit11. The data writing circuit23is electrically connected to the gate line G0, a display data line D01, and the first terminal of the driving circuit11, and is configured to write, under the control of a gate driving signal, a display data voltage on the display data line D01, to the first terminal of the driving circuit11. The compensation circuit24is electrically connected to the gate line G0, the control terminal of the driving circuit11, and the second terminal of the driving circuit11, and is configured to control, under the control of the gate driving signal, the control terminal of the driving circuit11to be connected to or disconnected from the second terminal of the driving circuit11. In at least one embodiment of the present disclosure, the second voltage line may be a high-voltage signal line, and the present disclosure is not limited thereto. The pixel circuit described in at least one embodiment of the present disclosure may further include the second light-emitting control circuit21, the storage circuit22, the data writing circuit23, and the compensation circuit24. The second light-emitting control circuit21controls the first terminal of the driving circuit11to be connected to or disconnected from the second voltage line V2, the storage circuit22maintains the potential of the control terminal of the driving circuit11, the data writing circuit23controls the writing of the display data voltage to the first terminal of the driving circuit11, and the compensation circuit24controls the compensation of the threshold voltage of the driving transistor included in the driving circuit11. 
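Before the transistor-level description below, it may help to see why compensating the driving transistor's threshold voltage matters. The following Python sketch uses a textbook square-law saturation model; the supply, threshold, and gain values are illustrative assumptions, not parameters from the disclosure. In 7T1C-style circuits of this general type, the diode connection made through the compensation circuit during data writing leaves the gate near Vdata plus the (negative, p-type) threshold voltage, so the threshold cancels out of the emission current.

def drive_current(v_data, v_dd=4.6, v_th=-2.0, k=1e-4, compensated=True):
    # Saturation current of a p-type driving transistor in a square-law
    # toy model: I = k/2 * (Vsg - |Vth|)^2 when the overdrive is positive.
    if compensated:
        v_gate = v_data + v_th   # gate left at Vdata + Vth by the diode connection
    else:
        v_gate = v_data          # what a circuit without compensation would store
    v_sg = v_dd - v_gate
    overdrive = v_sg - abs(v_th)
    return 0.5 * k * overdrive ** 2 if overdrive > 0 else 0.0

# Identical data voltage, transistors that drifted to different thresholds:
for v_th in (-1.8, -2.0, -2.2):
    print(v_th,
          drive_current(2.0, v_th=v_th, compensated=True),
          drive_current(2.0, v_th=v_th, compensated=False))

With compensation, the printed current is identical for all three thresholds and depends only on the supply-to-data difference; without it, the current varies with each device's threshold, which would show up as panel nonuniformity.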
Optionally, the driving circuit includes a driving transistor, a gate electrode of the driving transistor is the control terminal of the driving circuit, a first electrode of the driving transistor is the first terminal of the driving circuit, and a second electrode of the driving transistor is the second terminal of the driving circuit; the first light-emitting control circuit includes a third transistor; a gate electrode of the third transistor is electrically connected to the light-emitting control line, a first electrode of the third transistor is electrically connected to the second electrode of the driving transistor, and a second electrode of the third transistor is electrically connected to the anode of the light-emitting element; the second light-emitting control circuit includes a fourth transistor, a gate electrode of the fourth transistor is electrically connected to the light-emitting control line, a first electrode of the fourth transistor is electrically connected to the second voltage line, and a second electrode of the fourth transistor is electrically connected to the first electrode of the driving transistor; the storage circuit includes a storage capacitor, a first electrode plate of the storage capacitor is electrically connected to the gate electrode of the driving transistor, and a second electrode plate of the storage capacitor is electrically connected to the second voltage line; the data writing circuit includes a fifth transistor, a gate electrode of the fifth transistor is electrically connected to the gate line, a first electrode of the fifth transistor is electrically connected to the display data line, and a second electrode of the fifth transistor is electrically connected to the first electrode of the driving transistor; the compensation circuit includes a sixth transistor, a gate electrode of the sixth transistor is electrically connected to the gate line, a first electrode of the sixth transistor is electrically connected to the gate electrode of the driving transistor, and a second electrode of the sixth transistor is electrically connected to the second electrode of the driving transistor. As shown inFIG.3, based on at least one embodiment of the pixel circuit shown in the figure, the light-emitting element is an organic light emitting diode O1; the driving circuit11includes a driving transistor T7. The second initialization circuit includes a first transistor T1. A gate electrode G1of the first transistor T1is electrically connected to the second initialization control line R2, a first electrode S1of the first transistor T1is electrically connected to the initial data line D02, and a second electrode D1of the first transistor T1is electrically connected to the anode of O1. The first initialization circuit includes a second transistor T2. A gate electrode G2of the second transistor T2is electrically connected to the first initialization control line R1, a first electrode S2of the second transistor T2is electrically connected to the initialization voltage line I1, and a second electrode D2of the second transistor T2is electrically connected to the control terminal of the driving circuit. A gate electrode G7of the driving transistor T7is the control terminal of the driving circuit11, a first electrode S7of the driving transistor T7is the first terminal of the driving circuit11, and a second electrode D7of the driving transistor T7is the second terminal of the driving circuit11. The first light-emitting control circuit includes a third transistor T3.
A gate electrode G3of the third transistor T3is electrically connected to the light-emitting control line E1, a first electrode S3of the third transistor T3is electrically connected to the second electrode D7of the driving transistor T7, and a second electrode D3of the third transistor T3is electrically connected to the anode of O1. The second light-emitting control circuit includes a fourth transistor T4. A gate electrode G4of the fourth transistor T4is electrically connected to the light-emitting control line E1, a first electrode S4of the fourth transistor T4is electrically connected to the second voltage line V2, and a second electrode D4of the fourth transistor T4is electrically connected to the first electrode S7of the driving transistor T7. The storage circuit includes a storage capacitor C1, a first electrode plate C1aof the storage capacitor C1is electrically connected to the gate electrode G7of the driving transistor T7, and a second electrode plate C1bof the storage capacitor C1is electrically connected to the second voltage line V2. The data writing circuit includes a fifth transistor T5, a gate electrode G5of the fifth transistor T5is electrically connected to the gate line G0, a first electrode of the fifth transistor is electrically connected to the display data line D01, and a second electrode of the fifth transistor is electrically connected to the first electrode of the driving transistor. The compensation circuit includes a sixth transistor T6, a gate electrode G6of the sixth transistor T6is electrically connected to the gate line G0, a first electrode S6of the sixth transistor T6is electrically connected to the gate electrode G7of the driving transistor T7, and a second electrode D6of the sixth transistor T6is electrically connected to the second electrode D7of the driving transistor T7. In at least one embodiment of the pixel circuit shown inFIG.3, all the transistors are p-type thin film transistors, and the present disclosure is not limited thereto. In at least one embodiment of the pixel circuit shown inFIG.3, the second initialization control signal on R2is the same as the gate driving signal on G0, and the present disclosure is not limited thereto. When at least one embodiment of the pixel circuit shown inFIG.3of the present disclosure is in operation, a display period includes an initialization phase, a data writing phase, and a light-emitting phase that are sequentially arranged. In the initialization phase, R1provides a low-voltage signal, E1, G0, and R2each provide a high-voltage signal, T2is turned on, T1, T3, T4, T5, and T6are all turned off, and I1provides the initialization voltage signal to the gate electrode of T7to cause T7to be turned off. In the data writing phase, R1provides a high-voltage signal, G0and R2each provide a low-voltage signal, E1provides a high-voltage signal, T1, T5, and T6are all turned on, T2, T3, and T4are all turned off, and D02provides the initial data voltage to the anode of O1to cause O1not to emit light; D01provides the display data voltage Vd to S7, and the connection between G7and D7is turned on, to perform data voltage writing and compensation of the threshold voltage of T7. In the light-emitting phase, R1, G0, and R2each provide a high-voltage signal, E1provides a low-voltage signal, T1, T2, T5, and T6are all turned off, T3and T4are turned on, and T7drives O1to emit light.
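The three phases just described can be transcribed into a small table and checked mechanically. The minimal Python sketch below takes its signal levels directly from the phase description above; since all transistors inFIG.3are p-type, a low gate-control signal turns a switch on.

# Control-signal levels per phase for the p-type circuit of FIG.3
# (L turns a p-type transistor on, H turns it off).
PHASES = {
    "initialization": {"R1": "L", "G0": "H", "R2": "H", "E1": "H"},
    "data_writing":   {"R1": "H", "G0": "L", "R2": "L", "E1": "H"},
    "light_emitting": {"R1": "H", "G0": "H", "R2": "H", "E1": "L"},
}
# Which control line drives each switching transistor's gate.
GATE_OF = {"T1": "R2", "T2": "R1", "T3": "E1", "T4": "E1", "T5": "G0", "T6": "G0"}

for phase, levels in PHASES.items():
    on = sorted(t for t, line in GATE_OF.items() if levels[line] == "L")
    print(f"{phase:15s} on: {', '.join(on) or 'none'}")

Running this reproduces the description above: only T2conducts during initialization, T1, T5, and T6conduct during data writing, and T3and T4conduct during light emission (T7is analog-controlled by its gate node and is therefore left out of the table).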
A driving method described in at least one embodiment of the present disclosure is applied to the above-mentioned pixel circuit, and the driving method includes: writing, by the first initialization circuit, under the control of the first initialization control signal, the initialization voltage provided by the initialization voltage line, to the control terminal of the driving circuit; and writing, by the second initialization circuit, under the control of the second initialization control signal, the initial data voltage provided by the initial data line, to the anode of the light-emitting element. In the driving method described in at least one embodiment of the present disclosure, the initialization voltage is written to the control terminal of the driving circuit11via the first initialization circuit to initialize the control terminal of the driving circuit, and the initial data voltage is written to the anode of the light-emitting element via the second initialization circuit to initialize the anode of the light-emitting element. By adjusting the initial data voltage, light emission of the light-emitting element caused by a leakage current, and lateral leakage at low gray scales, both of which can occur when initializing the anode of the light-emitting element, can be prevented. In specific implementations, the pixel circuit may further include a data writing circuit, and the pixel circuit is included in a display panel (the display panel may include the display substrate), and the driving method may further include: writing, by the data writing circuit, a display data voltage on a display data line to the first terminal of the driving circuit, under the control of a gate driving signal; where a minimum display data voltage among all display data voltages connected to all pixel circuits in the display panel is greater than a predetermined gray-scale voltage, and the initial data voltage is the same as a first voltage provided by the first voltage line; or, a minimum display data voltage among all display data voltages connected to all pixel circuits in the display panel is less than a predetermined gray-scale voltage, the initial data voltage is different from a first voltage provided by the first voltage line, an absolute value of a difference between the initial data voltage and the first voltage is less than a predetermined voltage value, the difference between the initial data voltage and the first voltage is smaller than a turn-on voltage of the light-emitting element, and the predetermined voltage value is a positive value. In at least one embodiment of the present disclosure, the predetermined gray-scale voltage and the predetermined voltage value may be selected according to actual conditions. For example, the predetermined gray-scale voltage may be a gray-scale voltage corresponding to L32, and the present disclosure is not limited thereto. In at least one embodiment of the present disclosure, the initial data voltage is set according to the minimum display data voltage connected to the pixel circuits of the display panel. When the minimum display data voltage is relatively large, the initial data voltage is set to be the same as the first voltage. In this way, when initializing the anode of the light-emitting element, it can be ensured that the light-emitting element does not emit light, which prevents light emission of the light-emitting element caused by leakage. When the minimum display data voltage is relatively small, according to the actual situation, the initial data voltage is set to be slightly larger than the first voltage, or, the initial data voltage is set to be slightly smaller than the first voltage, to reduce lateral leakage at low gray scales, and the difference between the initial data voltage and the first voltage is less than the turn-on voltage of the light-emitting element, to ensure that the light-emitting element does not emit light when initializing the anode of the light-emitting element.
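The two cases above amount to a small selection rule for the initial data voltage. The Python sketch below is one hypothetical reading of that rule; the half-of-the-limit offset and all numeric values are assumptions for illustration, since the disclosure only constrains the magnitude of the difference and requires it to stay below the turn-on voltage.

def choose_initial_data_voltage(min_display_v, predetermined_gray_v,
                                first_v, turn_on_v, predetermined_delta):
    # Case 1: minimum display data voltage above the predetermined
    # gray-scale voltage -> reuse the first voltage V1 exactly.
    if min_display_v > predetermined_gray_v:
        return first_v
    # Case 2: low-gray-scale panels -> offset slightly from V1, keeping
    # |Vinit - V1| < predetermined_delta and Vinit - V1 < turn_on_v.
    offset = 0.5 * min(predetermined_delta, turn_on_v)  # assumed split
    return first_v + offset  # first_v - offset would satisfy the same bounds

# Illustrative numbers only (not from the disclosure):
v_init = choose_initial_data_voltage(min_display_v=1.2,
                                     predetermined_gray_v=1.5,
                                     first_v=-3.0, turn_on_v=2.5,
                                     predetermined_delta=0.4)
print(v_init)  # -2.8: slightly above V1 and well under the OLED turn-on offset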
In the embodiments of the present disclosure, a display period may include an initialization phase, a data writing phase, and a light-emitting phase that are sequentially arranged. When R1and R2are different initialization control lines, in the initialization phase, the first initialization circuit causes, under the control of the first initialization control signal, the initialization voltage provided by the initialization voltage line to be written to the control terminal of the driving circuit; in the data writing phase, the second initialization circuit causes, under the control of the second initialization control signal, the initial data voltage provided by the initial data line to be written to the anode of the light-emitting element, and the data writing circuit writes, under the control of the gate driving signal, the display data voltage on the display data line, to the first terminal of the driving circuit. When R1and R2are the same initialization control line, in the initialization phase, the first initialization circuit causes, under the control of the first initialization control signal, the initialization voltage provided by the initialization voltage line to be written to the control terminal of the driving circuit, and the second initialization circuit causes, under the control of the second initialization control signal, the initial data voltage provided by the initial data line to be written to the anode of the light-emitting element; in the data writing phase, the data writing circuit writes, under the control of the gate driving signal, the display data voltage on the display data line, to the first terminal of the driving circuit. A display substrate according to at least one embodiment of the present disclosure includes a base substrate and the above-mentioned pixel circuit provided on the base substrate. Optionally, the pixel circuit includes a driving transistor and a storage capacitor, and the display substrate further includes an initial data line arranged on the base substrate. A gate electrode of the driving transistor is also used as a first electrode plate of the storage capacitor. The initial data line and the gate electrode of the driving transistor are arranged in a same layer and are made of a same material, or, the initial data line and a second electrode plate of the storage capacitor are arranged in a same layer and are made of a same material. In at least one embodiment of the present disclosure, the display substrate may include an active layer, a first gate metal layer, a second gate metal layer, and a first source and drain metal layer that are sequentially disposed on the base substrate.
A patterning process may be performed on the first gate metal layer to form the gate lines and the gate electrode of each transistor, and a patterning process may be performed on the second gate metal layer to form the second electrode plate of the storage capacitor; the initial data line and the gate electrode of each transistor may be arranged in a same layer and made of a same material, or, the initial data line and the second electrode plate of the storage capacitor may be arranged in a same layer and made of a same material. That is, the initial data line may be formed in the first gate metal layer or the second gate metal layer. In specific implementations, when the initial data line is formed in the first gate metal layer or the second gate metal layer, the display substrate further includes a gate line disposed on the base substrate. An extension direction of the initial data line is the same as an extension direction of the gate line. In at least one embodiment of the present disclosure, the extension direction of the initial data line being the same as the extension direction of the gate line may refer to that: the extension direction of the initial data line is exactly the same as the extension direction of the gate line, or, an angle between the extension direction of the initial data line and the extension direction of the gate line is less than a predetermined angle to cause the extension direction of the initial data line to be substantially the same as the extension direction of the gate line; and the present disclosure is not limited thereto. Optionally, the display substrate further includes a display data line arranged on the base substrate. The initial data line and the display data line are arranged in a same layer and made of a same material, or, the initial data line is arranged on a side of the display data line facing away from the base substrate. In at least one embodiment of the present disclosure, the display substrate may include an active layer, a first gate metal layer, a second gate metal layer, and a first source and drain metal layer that are sequentially disposed on the base substrate. A patterning process may be performed on the first source and drain metal layer to form the display data line and the initial data line, and the initial data line and the display data line may be arranged in a same layer and made of a same material, that is, the initial data line is formed in the first source and drain metal layer; or, the display substrate may include an active layer, a first gate metal layer, a second gate metal layer, a first source and drain metal layer, and a second source and drain metal layer that are sequentially disposed on the base substrate. A patterning process may be performed on the first source and drain metal layer to form the display data line, and a patterning process may be performed on the second source and drain metal layer to form the initial data line, that is, the initial data line is disposed on a side of the display data line facing away from the base substrate, and the initial data line is formed in the second source and drain metal layer. In specific implementations, when the initial data line is formed in the first source and drain metal layer or the second source and drain metal layer, an extension direction of the initial data line may be the same as an extension direction of the display data line. 
In at least one embodiment of the present disclosure, the extension direction of the initial data line being the same as the extension direction of the display data line may refer to that: the extension direction of the initial data line is completely the same as the extension direction of the display data line, or, an angle between the extension direction of the initial data line and the extension direction of the display data line is less than a predetermined angle to cause the extension direction of the initial data line to be substantially the same as the extension direction of the display data line; and the present disclosure is not limited thereto. Optionally, the pixel circuit includes a first transistor, and the display substrate further includes the second initialization control line and the initial data line provided on the base substrate; a gate electrode of the first transistor and a gate electrode of the driving transistor are in a same layer and made of a same material, and the gate electrode of the first transistor is electrically connected to the second initialization control line; a first electrode of the first transistor, a second electrode of the first transistor, a first electrode of the driving transistor, and a second electrode of the driving transistor are arranged in a same layer and made of a same material; the first electrode of the first transistor is electrically connected to the initial data line, and the second electrode of the first transistor is electrically connected to the anode of the light-emitting element. In at least one embodiment of the present disclosure, the second initialization control line and the gate line may be arranged in a same layer and made of a same material, and the second initialization control signal on the second initialization control line may be the same as the gate driving signal on the gate line. Optionally, the pixel circuit may include a driving transistor and a storage capacitor; a gate electrode of the driving transistor is also used as a first electrode plate of the storage capacitor; the initialization voltage line and the gate electrode of the driving transistor are arranged in a same layer and made of a same material, or, the initialization voltage line and a second electrode plate of the storage capacitor are arranged in a same layer and made of a same material. In specific implementations, the initialization voltage line may be formed in the first gate metal layer, or, the initialization voltage line may be formed in the second gate metal layer, and the present disclosure is not limited thereto. Optionally, the display substrate may further include a gate line arranged on the base substrate; an extension direction of the initialization voltage line is the same as an extension direction of the gate line. In at least one embodiment of the present disclosure, the extension direction of the initialization voltage line being the same as the extension direction of the gate line may refer to that: the extension direction of the initialization voltage line is exactly the same as the extension direction of the gate line, or, an angle between the extension direction of the initialization voltage line and the extension direction of the gate line is less than a predetermined angle to cause the extension direction of the initialization voltage line to be substantially the same as the extension direction of the gate line; and the present disclosure is not limited thereto.
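As a compact summary of the routing alternatives described above, the Python sketch below records which metal layer each line may share and the extension direction that choice implies (gate-metal routing runs with the gate lines in the first direction; source/drain-metal routing runs with the display data lines in the second direction). It is an illustrative reading of the stated alternatives, not an exhaustive one.

# Layer stack from bottom to top, per the embodiments above.
STACK = ("active", "gate_metal_1", "gate_metal_2",
         "source_drain_1", "source_drain_2")

# Extension direction implied by each routing layer.
DIRECTION = {"gate_metal_1": "first", "gate_metal_2": "first",
             "source_drain_1": "second", "source_drain_2": "second"}

# Layers each line may be patterned in, per the alternatives above.
OPTIONS = {
    "initial data line": ("gate_metal_1", "gate_metal_2",
                          "source_drain_1", "source_drain_2"),
    "initialization voltage line": ("gate_metal_1", "gate_metal_2"),
}

for line, layers in OPTIONS.items():
    for layer in layers:
        print(f"{line} in {layer}: extends in the {DIRECTION[layer]} direction")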
FIG.4is a schematic diagram of the layout of the pixel circuit according to at least one embodiment of the present disclosure. The pixel circuit is arranged in the active area of the display substrate. InFIG.4, I11is a first initialization voltage line portion included in the initialization voltage line, D021is a first initial data line portion included in the initial data line, I12is a second initialization voltage line portion included in the initialization voltage line, and D022is a second initial data line portion included in the initial data line; I11, I12, D021and D022may be set in the active area; each of I11and I12may be electrically connected to the initialization voltage wire outside the active area, I11and I12are electrically connected to each other, each of D021and D022may be electrically connected to the initial data wire outside the active area, and D021and D022are electrically connected to each other; and the present disclosure is not limited thereto. In at least one embodiment of the present disclosure, the display substrate includes multiple rows and multiple columns of pixel circuits disposed on the base substrate, each row of pixel circuits is electrically connected to the same row of gate line, and the same column of pixel circuits are electrically connected to the same column of display data line. When the pixel circuit adopts the structure shown inFIG.4, the initialization voltage line includes multiple initialization voltage line portions extending in the first direction, each row of pixel circuits is electrically connected to the corresponding initialization voltage line portion, the initial data line includes multiple initial data line portions extending in the first direction, and each row of pixel circuits is electrically connected to the corresponding initial data line portion. The same row of pixel circuits may be electrically connected to the same initialization voltage line portion, and the same row of pixel circuits may be electrically connected to the same initial data line portion. In addition, the initial data wire and the initialization voltage wire are provided outside the active area of the display substrate, the initialization voltage wire is used to provide the initialization voltage to each of the initialization voltage line portions, the initial data wire is used to provide the initial data voltage to each of the initial data line portions, the initialization voltage line portions are electrically connected to each other, and the initial data line portions are electrically connected to each other. InFIG.4, R1is the first initialization control line, G0is the gate line, C1bis the second electrode plate of the storage capacitor in the pixel circuit, E1is the light-emitting control line, D01is the display data line, and R2is the second initialization control line. The second initialization control signal provided to the second initialization control line R2is the same as the gate driving signal provided to G0. In at least one embodiment corresponding toFIG.4, on the base substrate, an active layer, a first gate metal layer, a second gate metal layer, and a first source and drain metal layer are sequentially arranged along a direction leaving the base substrate. A patterning process is performed on the first gate metal layer to form the gate line G0, the first initialization control line R1, the second initialization control line R2, the light-emitting control line E1, and the gate electrode of each transistor in the pixel circuits.
A patterning process is performed on the second gate metal layer to form the initial data line, the initialization voltage line, and the second electrode plate of the storage capacitor in the pixel circuit. In at least one embodiment shown inFIG.4, the initial data line and the initialization voltage line are formed in the second gate metal layer, the extension direction of the initial data line is the same as the extension direction of the gate line G0, and the extension direction of the initialization voltage line is the same as the extension direction of the gate line G0. In at least one embodiment shown inFIG.4, the extension direction of G0may be a first direction, the first direction may be, for example, a horizontal direction, the extension direction of D01may be a second direction, and the second direction may be, for example, a vertical direction; and the present disclosure is not limited thereto. In at least one embodiment of the present disclosure, the extension direction of the gate line may be the first direction, the extension direction of the display data line may be the second direction, and the first direction and the second direction intersect; and the present disclosure is not limited thereto. As shown inFIG.5, the pattern of the active layer inFIG.4includes the first electrode S1of the first transistor, the second electrode D1of the first transistor, the first electrode S2of the second transistor, the second electrode D2of the second transistor, the first electrode S4of the fourth transistor, the first electrode S5of the fifth transistor, the second electrode D5of the fifth transistor, and the second electrode D6of the sixth transistor. In at least one embodiment corresponding toFIGS.4and5, the second electrode D2of the second transistor is also used as the first electrode of the sixth transistor, the second electrode D5of the fifth transistor is also used as the second electrode of the fourth transistor, the second electrode D5of the fifth transistor is also used as the first electrode of the driving transistor, and the second electrode D6of the sixth transistor is also used as the second electrode of the driving transistor. As shown inFIG.6, T2is a dual gate transistor, G21is the first gate electrode pattern included in the gate electrode of the second transistor, and G22is the second gate electrode pattern included in the gate electrode of the second transistor;G5is the gate electrode of the fifth transistor;T6is a dual gate transistor, G61is the third gate electrode pattern included in the gate electrode of the sixth transistor, and G62is the fourth gate electrode pattern included in the gate electrode of the sixth transistor;G3is the gate electrode of the third transistor, G4is the gate electrode of the fourth transistor, and G1is the gate electrode of the first transistor;G7is the gate electrode of the driving transistor, and G7is also used as the first electrode plate of the storage capacitor in the pixel circuit. As shown inFIG.7, C1bis the second electrode plate of the storage capacitor, H0is the connecting hole provided in C1b, and D2is electrically connected to G7through the connecting hole H0. After the active layer, the first gate metal layer and the second gate metal layer are arranged in sequence, an interlayer dielectric layer may be provided, and after the interlayer dielectric layer is provided, via holes may be formed.
As shown inFIG.8, H1is the first via hole, H2is the second via hole, H3is the third via hole, H4is the fourth via hole, H5is the fifth via hole, H6is the sixth via hole, H7is the seventh via hole, H8is the eighth via hole, H9is the ninth via hole, H10is the tenth via hole, H11is the eleventh via hole, H12is the twelfth via hole, H13is the thirteenth via hole, H14is the fourteenth via hole, and H15is the fifteenth via hole. As shown inFIG.9, the pattern of the first source and drain metal layer includes the display data line D01, the second voltage line, the first conductive connection portion L1, the second conductive connection portion L2, the third conductive connection portion L3, the fourth conductive connection portion L4, the fifth conductive connection portion L5, and the sixth conductive connection portion L6. InFIG.9, V21is the first voltage line portion included in the second voltage line. When the pixel circuit adopts the structure shown inFIG.4, the second voltage line includes multiple voltage line portions extending in the second direction, and each column of pixel circuits is electrically connected to the corresponding voltage line portion; a second voltage wire is provided outside the active area, the second voltage wire is used to provide the second voltage signal to each of the voltage line portions included in the second voltage line, and the voltage line portions included in the second voltage line are electrically connected to each other. As shown inFIGS.4to9, S2is electrically connected to the first conductive connection portion L1through the fourth via hole H4, and L1is electrically connected to I11through the first via hole H1, so that S2and I11are electrically connected, that is, S2is electrically connected to the initialization voltage line;S1is electrically connected to L6through H14, and L6is electrically connected to D022through H11, so that S1is electrically connected to D022, that is, S1is electrically connected to the initial data line;D2is electrically connected to L3through H7, and L3is electrically connected to G7through H0;S5is electrically connected to D01through H3;S4is electrically connected to V21through H8;D1is electrically connected to L4through H9, and L4is electrically connected to the anode layer through a via hole. When manufacturing the display substrate, after fabricating the first source and drain metal layer, the first planarization layer and the anode layer are manufactured in sequence. The anode layer includes multiple mutually independent anodes. L4may be electrically connected to the anode through a via hole penetrating through the first planarization layer. After the anode layer is fabricated, a PDL layer (pixel definition layer), an organic light-emitting function layer, and a cathode layer may be fabricated in sequence. In at least one embodiment of the present disclosure, the cathode layer may cover the entire active area, and the cathode layer may bond, in the non-display area of the display substrate, with the first voltage line through the anode layer, so that the cathode of the light-emitting element is electrically connected to the first voltage line; and the present disclosure is not limited thereto. Optionally, the first voltage line may be arranged around the active area, and the present disclosure is not limited thereto.
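The via-hole hookups listed above can be checked mechanically by treating each conductor as a node and each via as an edge, then merging nodes into electrical nets with a union-find structure. In the Python sketch below, the edge list is transcribed from the FIG.4 to FIG.9 description; the union-find code itself is only an illustrative verification aid, not part of the patent.

parent = {}

def find(x):
    # Union-find lookup with path halving.
    parent.setdefault(x, x)
    while parent[x] != x:
        parent[x] = parent[parent[x]]
        x = parent[x]
    return x

def union(a, b):
    parent[find(a)] = find(b)

vias = [
    ("S2", "L1"),    # fourth via hole H4
    ("L1", "I11"),   # first via hole H1
    ("S1", "L6"),    # via hole H14
    ("L6", "D022"),  # via hole H11
    ("D2", "L3"),    # via hole H7
    ("L3", "G7"),    # connecting hole H0
    ("S5", "D01"),   # via hole H3
    ("S4", "V21"),   # via hole H8
    ("D1", "L4"),    # via hole H9
]
for a, b in vias:
    union(a, b)

# S2 lands on the initialization voltage line, S1 on the initial data line,
# and D2 reaches the driving transistor gate G7, as the description states.
assert find("S2") == find("I11")
assert find("S1") == find("D022")
assert find("D2") == find("G7")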
In at least one embodiment of the present disclosure, a first gate insulating layer may be provided between the active layer and the first gate metal layer, a second gate insulating layer may be provided between the first gate metal layer and the second gate metal layer, and an interlayer dielectric layer may be provided between the second gate metal layer and the first source and drain metal layer; and the present disclosure is not limited thereto. In at least one embodiment shown inFIG.4, both the initial data line and the initialization voltage line are formed in the second gate metal layer, and the present disclosure is not limited thereto. When the display substrate includes at least one embodiment of the pixel circuit as shown inFIG.4, both the initial data line and the initialization voltage line extend in the first direction. For example, the first direction may be a horizontal direction. Then the multiple rows of initial data line portions included in the initial data line also extend in the first direction, and the multiple rows of initialization voltage line portions included in the initialization voltage line also extend in the first direction. Outside the active area of the display substrate, the initial data wire providing the initial data voltage and the initialization voltage wire providing the initialization voltage may be provided, and the initial data wire and the initialization voltage wire may be arranged at the first side and/or the second side (the first side may be the left side, and the second side may be the right side) of the display substrate. At least part of wires included in the initial data wire and at least part of wires included in the initialization voltage wire may extend in the second direction (the second direction may be, for example, the vertical direction), each row of initial data line portion may extend in the first direction until it is electrically connected to the initial data wire, and each row of initialization voltage line portion may extend in the first direction until it is electrically connected to the initialization voltage wire. In at least one embodiment shown inFIG.4, the second voltage line extends in the second direction, and the multiple voltage line portions included in the second voltage line extend in the second direction, so the second voltage wire used for providing the second voltage signal may be arranged at a side of the active area of the display substrate that is close to the driving chip. For example, the second voltage wire may be arranged at the lower side of the display substrate; and the present disclosure is not limited thereto. Optionally, the second voltage wire may include a second voltage wire portion extending in the first direction and a first voltage wire portion extending in the second direction. The second voltage wire portion is used for electrically connecting the multiple voltage line portions included in the second voltage line (for example, when the second voltage wire is disposed at the lower side of the display substrate, each voltage line portion included in the second voltage line may extend downward, in the second direction, so as to be electrically connected to the second voltage wire portion), the first terminal of the first voltage wire portion is electrically connected to the second voltage wire portion, and the second terminal of the first voltage wire portion is directly electrically connected to the driving chip to receive the second voltage signal provided by the driving chip. 
In at least one embodiment shown inFIG.4, R1and R2are different initialization control lines. In actual operations, R1and R2may be the same initialization control line. The difference between at least one embodiment of the pixel circuit shown inFIG.10and at least one embodiment of the pixel circuit shown inFIG.4is that: a second source and drain metal layer is further provided on a side of the first source and drain metal layer away from the base substrate, and the initial data line extends in the second direction, that is, the extension direction of the initial data line is the same as the extension direction of the display data line, and a patterning process is performed on the second source and drain metal layer to form each initial data line portion included in the initial data line. InFIG.10, D021is the first initial data line portion included in the initial data line. The structure diagram of the active layer inFIG.10is shown inFIG.5, the structure diagram of the first gate metal layer inFIG.10is shown inFIG.6, the structure diagram of the second gate metal layer inFIG.10is shown inFIG.11, the schematic diagram of the via holes inFIG.10is shown inFIG.12, the structure diagram of the first source and drain metal layer inFIG.10is shown inFIG.9, and the structure diagram of the second source and drain metal layer inFIG.10is shown inFIG.13. As shown inFIG.9andFIGS.10-13, L6is electrically connected to D021through H11. As shown inFIG.10, the extension direction of D021is the same as the extension direction of D01. The connection relationship of other components inFIG.10is the same as the connection relationship of components inFIG.4. When the display substrate includes multiple rows and multiple columns of pixel circuits according to at least one embodiment shown inFIG.10, the same column of the pixel circuits may be electrically connected to the same initial data line portion, and the same row of the pixel circuits may be electrically connected to the same initialization voltage line portion. In at least one embodiment shown inFIG.10, R1and R2are different initialization control lines. In actual operations, R1and R2may be the same initialization control line. A display device according to the embodiments of the present disclosure includes the above-mentioned display substrate. In specific implementations, the display device described in at least one embodiment of the present disclosure may further include a driving chip and an initial data wire, a first voltage line and a second voltage wire, where the initial data wire, the first voltage line and the second voltage wire are outside an active area of the base substrate;the initial data wire includes a first initial data wire portion directly electrically connected to the driving chip;the first voltage line includes a first voltage line portion directly electrically connected to the driving chip, and the second voltage wire includes a first voltage wire portion directly electrically connected to the driving chip;the first initial data wire portion is between the first voltage line portion and the first voltage wire portion.
In at least one embodiment of the present disclosure, the first initial data wire portion is disposed between the first voltage line portion and the first voltage wire portion, the first voltage line portion is used to provide the first voltage signal, the first voltage wire portion is used to provide the second voltage signal, and the first voltage signal and the second voltage signal are both direct-current voltage signals, so that no interference is caused to the initial data voltage on the first initial data wire portion. Optionally, the driving chip may be arranged on a COF (chip on film) or directly bound to the base substrate, and the COF may be attached to a side of the display substrate; and the present disclosure is not limited thereto. The driving chip may be used to provide the first voltage signal, the second voltage signal, the initialization voltage, and the initial data voltage. In at least one embodiment of the present disclosure, the base substrate may be a flexible substrate or a rigid substrate, and the driving chip may use COP (Chip On PI, COP is a technology in which the chip is bound on a flexible substrate) technology or COG (Chip On Glass, COG is a technology in which the chip is directly bound on the glass surface) technology so as to be bound on the base substrate. Optionally, the first initial data wire portion, the first voltage line portion, and the first voltage wire portion all extend in the second direction, and the present disclosure is not limited thereto. The second direction is a direction in which the display data line extends. In at least one embodiment of the present disclosure, the display device may further include a driving chip, an initial data wire, an initialization voltage wire, a first voltage line, and a second voltage wire, where the initial data wire, the initialization voltage wire, the first voltage line, and the second voltage wire are outside an active area of the base substrate;the initial data wire includes a first initial data wire portion directly electrically connected to the driving chip;the initialization voltage wire includes a first initialization voltage wire portion directly electrically connected to the driving chip;the first voltage line includes a first voltage line portion directly electrically connected to the driving chip, and the second voltage wire includes a first voltage wire portion directly electrically connected to the driving chip;the first initialization voltage wire portion, the first initial data wire portion, the first voltage line portion, and the first voltage wire portion are sequentially arranged in a direction toward the active area. In specific implementations, if the space is limited, the first initial data wire portion may be arranged between the first initialization voltage wire portion and the first voltage line portion. As shown inFIG.14, when the extension direction of the initialization voltage line and the extension direction of the initial data line are both the same as the extension direction of the gate line (the extension direction of the gate line is the first direction), the display substrate according to at least one embodiment of the present disclosure further includes the initial data wire, the initialization voltage wire, the gate driving circuit140, the first voltage line, and the second voltage wire that are arranged outside the active area A0.
The second voltage wire includes the second voltage wire portion L22extending in the first direction and the first voltage wire portion L21extending in the second direction. The second voltage wire portion L22is used to electrically connect the multiple voltage line portions included in the second voltage line (when the second voltage wire is disposed at the lower side of the display substrate, each voltage line portion included in the second voltage line may extend downward, in the second direction, so as to be electrically connected to the second voltage wire portion). The first terminal of the first voltage wire portion L21is electrically connected to the second voltage wire portion L22, and the second terminal of the first voltage wire portion L21is directly electrically connected to the driving chip141to receive the second voltage signal provided by the driving chip141. The initial data wire includes the first initial data wire portion L31extending in the second direction, the second initial data wire portion L32disposed on the left side of the active area A0, and the third initial data wire portion L33used for electrically connecting L31and L32; L32may extend in the second direction; each initial data line portion included in the initial data line is directly electrically connected to the second initial data wire portion L32; L31, L32and L33are an integrated structure; L31is directly electrically connected to the driving chip141to receive the initial data voltage provided by the driving chip141. The initialization voltage wire includes the first initialization voltage wire portion L41extending in the second direction, and the second initialization voltage wire portion L42arranged on the left side of the active area A0, where L41is directly electrically connected to L42; L41and L42are an integrated structure; L41is directly electrically connected to the driving chip141, and the driving chip141is used to provide the initialization voltage to L41; L42also extends in the second direction. The gate driving circuit140is arranged on a side of L42away from the active area A0; the gate driving circuit140may be electrically connected to multiple rows of gate lines. The first voltage line includes the first voltage line portion L51directly electrically connected to the driving chip141, the second voltage line portion L52provided on the left side of the active area A0, and the third voltage line portion L53used for electrically connecting L51and L52; L51and L52extend in the second direction, and L53extends in the first direction; the driving chip141provides the first voltage signal to L51. In actual operations, the first voltage line may be arranged on each of sides of the display substrate where the driving chip is not provided, and the first voltage line is electrically connected to the driving chip for receiving the first voltage signal provided by the driving chip; and the present disclosure is not limited thereto. In addition, the initial data wire, the initialization voltage wire, and the second voltage wire may also be provided on the right side of the active area, and the present disclosure is not limited thereto.
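The FIG.14 peripheral routing described above can be summarized as a small table of wire portions, their extension directions, and their roles. The Python sketch below transcribes that data from the text; the dictionary layout itself is only an illustrative model, and "first"/"second" denote the gate-line (row) and display-data-line (column) directions respectively.

peripheral_wires = {
    "second_voltage_wire": [
        ("L22", "first",  "connects the voltage line portions of the second voltage line"),
        ("L21", "second", "first terminal to L22, second terminal to driving chip 141"),
    ],
    "initial_data_wire": [
        ("L31", "second", "directly connected to driving chip 141"),
        ("L32", "second", "left of active area A0; connected to each initial data line portion"),
        ("L33", None,     "joins L31 and L32 (integrated structure)"),
    ],
    "initialization_voltage_wire": [
        ("L41", "second", "directly connected to driving chip 141"),
        ("L42", "second", "left of active area A0; integrated with L41"),
    ],
    "first_voltage_line": [
        ("L51", "second", "directly connected to driving chip 141"),
        ("L52", "second", "left of active area A0"),
        ("L53", "first",  "joins L51 and L52"),
    ],
}

for wire, portions in peripheral_wires.items():
    for name, direction, role in portions:
        print(f"{wire}: {name} ({direction or 'unspecified'} direction) - {role}")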
In at least one embodiment of the present disclosure, the initialization voltage wire and the initialization voltage line may be arranged in a same layer and made of a same material, the initial data wire and the initial data line may be arranged in a same layer and made of a same material, and the second voltage wire and the second voltage line may be arranged in a same layer and made of a same material; and the present disclosure is not limited thereto. In at least one embodiment of the present disclosure, the first voltage line may be made of the first source and drain metal layer or the second source and drain metal layer, and the present disclosure is not limited thereto. In at least one embodiment shown inFIG.14, L42and L32may be arranged in a same layer and made of a same material, and L42is electrically connected to the initialization voltage line in the active area through a jumper wire, to prevent a connection line between L42and the initialization voltage line in the active area from bonding with L32, so as to avoid a short circuit. As shown inFIG.15, when the extension direction of the initialization voltage line is the same as the extension direction of the gate line, and the extension direction of the initial data line is the same as the extension direction of the display data line (the extension direction of the gate line is the first direction, and the extension direction of the display data line is the second direction), the display substrate according to at least one embodiment of the present disclosure further includes the initial data wire, the initialization voltage wire, the gate driving circuit140, the first voltage line and the second voltage wire that are arranged outside the active area A0. The initial data wire is arranged below the active area A0(that is, the initial data wire is arranged on the lower side of the display substrate). The initial data wire includes the first initial data wire portion L31extending along the second direction, the second initial data wire portion L32extending along the first direction, and the third initial data wire portion L33for electrically connecting L31and L32; L33extends in the second direction; L31is directly electrically connected to the driving chip141to receive the initial data voltage provided by the driving chip141. The second voltage wire is arranged below the active area A0(that is, the second voltage wire is arranged on the lower side of the display substrate), the second voltage wire includes the second voltage wire portion L22extending along the first direction and the first voltage wire portion L21extending in the second direction, the second voltage wire portion L22is used to electrically connect the multiple voltage line portions included in the second voltage line, the first terminal of the first voltage wire portion L21is electrically connected to the second voltage wire portion L22, and the second terminal of the first voltage wire portion L21is directly electrically connected to the driving chip141to receive the second voltage signal provided by the driving chip141; the second voltage wire portion L22may be electrically connected to the voltage line portions included in the second voltage line in the active area A0through jumper wires, so as to avoid short circuit caused by bonding with L32.
The initialization voltage wire includes the first initialization voltage wire portion L41extending in the second direction, the second initialization voltage wire portion L42provided on the left side of the active area A0, and the third initialization voltage wire portion L43that is used for electrically connecting L41and L42and extends in the first direction; L41is directly electrically connected to the driving chip141, and the driving chip141is used to provide the initialization voltage to L41; L42also extends in the second direction. The gate driving circuit140is arranged on a side of L41away from the active area A0; the gate driving circuit140may be electrically connected to multiple rows of gate lines. The first voltage line includes the first voltage line portion L51directly electrically connected to the driving chip141, the second voltage line portion L52provided on the left side of the active area A0, and the third voltage line portion L53for electrically connecting L51and L52; L51and L52extend in the second direction, and L53extends in the first direction; the driving chip141provides the first voltage signal to L51. In actual operations, the first voltage line may be arranged on each of sides of the display substrate where the driving chip is not provided, and the first voltage line is electrically connected to the driving chip for receiving the first voltage signal provided by the driving chip; and the present disclosure is not limited thereto. In addition, the initialization voltage wire and the second voltage wire may also be provided on the right side of the active area, and the present disclosure is not limited thereto. The display device provided by the embodiments of the present disclosure may be any product or component with a display touch function, such as a mobile phone, a tablet computer, a television, a monitor, a notebook computer, a digital photo frame, a navigator, or the like. The above descriptions illustrate preferred implementations of the present disclosure. It should be noted that for those skilled in the art, without departing from the principles of the present disclosure, various improvements and refinements may be made. These improvements and refinements shall fall within the protection scope of the present disclosure. | 60,734 |
11862085 | DETAILED DESCRIPTION Technical solutions in some embodiments of the present disclosure will be described clearly and completely below with reference to the accompanying drawings. However, the described embodiments are merely some but not all embodiments of the present disclosure. All other embodiments obtained on a basis of the embodiments of the present disclosure by a person of ordinary skill in the art shall be included in the protection scope of the present disclosure. Unless the context requires otherwise, throughout the specification and the claims, the term “comprise” and other forms thereof such as the third-person singular form “comprises” and the present participle form “comprising” are construed as an open and inclusive meaning, i.e., “including, but not limited to.” In the description of the specification, the terms such as “one embodiment”, “some embodiments”, “exemplary embodiments”, “example”, “specific example” or “some examples” are intended to indicate that specific features, structures, materials or characteristics related to the embodiment(s) or example(s) are included in at least one embodiment or example of the present disclosure. Schematic representations of the above terms do not necessarily refer to the same embodiment(s) or example(s). In addition, the specific features, structures, materials, or characteristics may be included in any one or more embodiments or examples in any suitable manner. Hereinafter, terms “first” and “second” are only used for descriptive purposes, and are not to be construed as indicating or implying the relative importance or implicitly indicating the number of indicated technical features. Thus, a feature defined with “first” or “second” may explicitly or implicitly include one or more of the features. In the description of the embodiments of the present disclosure, the term “a plurality of/the plurality of” means two or more unless otherwise specified. In the description of some embodiments, the terms “coupled” and “connected” and their extensions may be used. For example, the term “connected” may be used in the description of some embodiments to indicate that two or more components are in direct physical contact or electrical contact with each other. For another example, the term “coupled” may be used in the description of some embodiments to indicate that two or more components are in direct physical or electrical contact. However, the term “coupled” or “communicatively coupled” may also mean that two or more components are not in direct contact with each other, but still cooperate or interact with each other. The embodiments disclosed herein are not necessarily limited to the contents herein. The phrase “A and/or B” includes the following three combinations: only A, only B, and a combination of A and B. The use of the phrase “applicable to” or “configured to” as used herein indicates an open and inclusive expression, which does not exclude devices that are applicable to or configured to perform additional tasks or steps. Exemplary embodiments are described herein with reference to cross-sectional views and/or plan views as idealized exemplary drawings. In the accompanying drawings, thickness of layers and regions are enlarged for clarity. Therefore, variations in shape with respect to the drawings due to, for example, manufacturing technologies and/or tolerances may be envisaged. 
Therefore, the exemplary embodiments should not be construed as being limited to the shapes of the regions shown herein, but including shape deviations due to, for example, manufacturing. For example, an etched region shown in a rectangular shape generally has a curved feature. Therefore, the regions shown in the accompanying drawings are schematic in nature, and their shapes are not intended to show actual shapes of the region in a device, and are not intended to limit the scope of the exemplary embodiments. In the related art, as shown inFIG.1A, an array substrate2includes a plurality of sub-pixels P. The sub-pixels P may be arranged in an array of n rows and m columns. A sub-pixel circuit of each sub-pixel P is used to drive a light-emitting device to emit light, and the sub-pixel circuit may be, for example, a 7T1C-type sub-pixel circuit. On this basis, as shown inFIG.1A, the array substrate2further includes: a plurality of pairs of scanning signal lines S11and S21, a plurality of enable signal lines EM (1) to EM (n), a plurality of data signal lines Data (1) to Data (m), and a plurality of first power supply voltage signal lines VDD (1) to VDD (m). In some embodiments, the array substrate may further include a plurality of initial voltage signal lines and a plurality of second power supply voltage signal lines. In this case, a row of sub-pixel circuits are electrically connected to a pair of scanning signal lines, an enable signal line, an initial voltage signal line, and a second power supply voltage signal line. One scanning signal line of the pair of scanning signal lines is used for providing scanning signals for scanning signal terminals S11to S1n, and the other scanning signal line of the pair of scanning signal lines is used for providing scanning signals for scanning signal terminals S21to S2n(n is an integer greater than or equal to 1). The plurality of enable signal lines EM are used for providing enable signals for enable terminals EM; the plurality of initial voltage signal lines are used for providing an initial voltage signal for initial signal terminals Vinit; the plurality of second power supply voltage signal lines are used for providing a power supply voltage signal to second power supply voltage terminals VSS, thereby providing a scanning signal, an enable signal, the initial voltage signal and the power supply voltage signal to each sub-pixel circuit. It will be noted that the enable signal line may be understood as a light-emitting signal line, and the enable terminal may be understood as a light-emitting control terminal; in this way, the light-emitting signal line provides a light-emitting signal to the light-emitting control terminal. A column of sub-pixel circuits are electrically connected to a data signal line and a first power supply voltage signal line. The data signal lines are used for providing data signals for data terminals Data, and the plurality of first power supply voltage signal lines are used for providing a power supply voltage signal for the power supply voltage terminals VDD, so as to provide a data signal and the power supply voltage signal for each sub-pixel circuit. Since the wiring space leaves an insufficient distance between the data signal line and the first power supply voltage signal line, the two signal lines are easily short-circuited, so that an X-Bright line defect occurs, which affects the yield.
There is a significant linear correlation between the occurrence rate of the X-Bright line defect and the distance between the data signal line and the first power supply voltage signal line. For example, as shown inFIG.1B, considering “Cupid” and “Panda” products as examples, in the source-drain mask (SD Mask) process, in the detection of critical dimension (CD) after development exposure and the detection of final critical dimension, it may be clearly seen that as the critical dimension becomes larger, that is, as the distance between the data signal line and the first power supply voltage signal line becomes smaller, the occurrence rate of the X-Bright line defect becomes larger. In order to solve the above problems, as shown inFIG.2, some embodiments of the present disclosure provide a display apparatus1. The display apparatus1includes an array substrate2. The array substrate2includes a plurality of pixels P, and a pixel circuit10, a plurality of data signal lines Data, a plurality of first power supply voltage signal lines VDD, and a plurality of enable signal lines EM that are disposed on a substrate3. In some embodiments, the display apparatus may be configured to display an image (i.e., a picture). In this case, the display apparatus may include a display or a product including a display. The display may be a flat panel display (FPD), or a micro display, etc. If classified according to whether users can see scenes on the back of the display, the display may be a transparent display or non-transparent display. If classified according to whether the display may be bent or curled, the display may be a flexible display or a common display (which may be referred to as a rigid display). For example, the product including the display may be a computer display, a television, a billboard, a laser printer with a display function, a telephone, a mobile phone, a personal digital assistant (PDA), a laptop computer, a digital camera, a camcorder, a viewfinder, a vehicle, a large-area wall, a screen in a theater, or a sign in a stadium. As shown inFIG.3A, some embodiments of the present disclosure provide a pixel circuit10. The pixel circuit10includes a plurality of sub-pixel circuits. The plurality of sub-pixel circuits include a first sub-pixel circuit100and a second sub-pixel circuit200. The first sub-pixel circuit100and the second sub-pixel circuit200are located in two adjacent columns, and the first sub-pixel circuit100and the second sub-pixel circuit200are connected to a same data terminal Data. The first sub-pixel circuit100is disposed in a first sub-pixel, and the second sub-pixel circuit200is disposed in a second sub-pixel. In the embodiments, each of the plurality of sub-pixel circuits includes a reset sub-circuit and a driving sub-circuit. For example, as shown inFIG.3A, both the first sub-pixel circuit100and the second sub-pixel circuit200include: a reset sub-circuit101and a driving sub-circuit103. It will be noted that the circuit structures of the first sub-pixel circuit100and the second sub-pixel circuit200are completely the same. The reset sub-circuit101is electrically connected to a first reset control terminal Rst1, an initial voltage terminal Vinit and the driving sub-circuit103. The reset sub-circuit101is configured to input a voltage provided by the initial voltage terminal Vinit to the driving sub-circuit103under control of the first reset control terminal Rst1.
That is, the reset sub-circuit101is configured to input the voltage provided by the initial voltage terminal Vinit to the driving sub-circuit103in response to a reset control signal provided by the first reset control terminal Rst1. The driving sub-circuit103is configured to control a driving current flowing through a light-emitting device according to a received data signal output by a data terminal Data. It may be understood that for the driving sub-circuit103in each sub-pixel circuit, the signal output by the data terminal Data may be the same or different. In some embodiments, each sub-pixel circuit further includes a writing compensation sub-circuit, a light-emitting control sub-circuit, and the light-emitting device. For example, as shown inFIG.3A, both the first sub-pixel circuit100and the second sub-pixel circuit200include a writing compensation sub-circuit102, a light-emitting control sub-circuit104, and a light-emitting device L. In some embodiments, the light-emitting devices L may be current-driven light-emitting devices, such as light-emitting diodes (LEDs), micro light-emitting diodes (Micro LEDs), mini light-emitting diodes (Mini LEDs), or organic light-emitting diodes (OLEDs). The light-emitting devices may also be voltage-driven light-emitting devices, which are not limited in the embodiments. The writing compensation sub-circuit102is electrically connected to a writing control terminal Input, the data terminal Data and the driving sub-circuit103. The writing compensation sub-circuit102is configured to write the data signal output by the data terminal Data into the driving sub-circuit103under control of the writing control terminal Input, so as to compensate for a threshold voltage of the driving sub-circuit103. That is, the writing compensation sub-circuit102is configured to write the data signal output by the data terminal Data into the driving sub-circuit103in response to a writing control signal provided by the writing control terminal Input, so as to compensate for the threshold voltage of the driving sub-circuit103. The light-emitting control sub-circuit104is electrically connected to an enable terminal EM, a first power supply voltage terminal VDD, the driving sub-circuit103, and the light-emitting device L. The light-emitting device L is further electrically connected to a second power supply voltage terminal VSS. The light-emitting control sub-circuit104is configured to close a current path between the first power supply voltage terminal VDD and the second power supply voltage terminal VSS under control of the enable terminal EM, so that the driving current is transmitted to the light-emitting device L. That is, the light-emitting control sub-circuit104is configured to close the current path between the first power supply voltage terminal VDD and the second power supply voltage terminal VSS in response to an enable signal provided by the enable terminal EM, so that the driving current is transmitted to the light-emitting device L. It will be noted that the light-emitting control sub-circuit104is connected to an anode (positive electrode) of the light-emitting device L, and a cathode (negative electrode) of the light-emitting device L is electrically connected to the second power supply voltage terminal VSS. 
In this way, when the light-emitting control sub-circuit104closes the current path between the first power supply voltage terminal VDD and the second power supply voltage terminal VSS under the control of the enable terminal EM, the driving current is transmitted to the light-emitting device L to drive the light-emitting device L to emit light. The first power supply voltage terminal VDD may be a high-level terminal and output a constant high voltage; while the second power supply voltage terminal VSS may be a low-level terminal and output a constant low voltage. The terms “high” and “low” here merely indicate a relative magnitude relationship between input voltages. The second power supply voltage terminal VSS may also be grounded. In the embodiments, as shown inFIG.3A, a first reset control terminal Rst1and a writing control terminal Input of the first sub-pixel circuit100are connected to a first scanning signal terminal S1and a second scanning signal terminal S2, respectively. A first reset control terminal Rst1and a writing control terminal Input of the second sub-pixel circuit200are connected to the second scanning signal terminal S2and a third scanning signal terminal S3, respectively. It may be understood that, since the first reset control terminals Rst1and the writing control terminals Input of the first sub-pixel circuit100and the second sub-pixel circuit200are sequentially connected to different scanning signal terminals, and in a case where the scanning signal terminals sequentially output scanning signals, the first sub-pixel circuit100and the second sub-pixel circuit200are both in different states under triggering of any scanning signal. For example, in a case where the first scanning signal terminal S1, the second scanning signal terminal S2and the third scanning signal terminal S3sequentially output scanning signals, the corresponding states of the first sub-pixel circuit100and the second sub-pixel circuit200are as follows. When the first scanning signal terminal S1outputs a scanning signal, the first reset control terminal Rst1of the first sub-pixel circuit100receives the scanning signal of the first scanning signal terminal S1, and inputs the voltage provided by the initial voltage terminal Vinit to the driving sub-circuit103, so as to reset the driving sub-circuit103of the first sub-pixel circuit100. At this time, the second sub-pixel circuit200is not operating. When the second scanning signal terminal S2outputs a scanning signal, the first sub-pixel circuit100and the second sub-pixel circuit200receive the scanning signal of the second scanning signal terminal S2, simultaneously. Although the first sub-pixel circuit100and the second sub-pixel circuit200operate, simultaneously, since the second scanning signal terminal S2is electrically connected to the writing control terminal Input of the first sub-pixel circuit100and the first reset control terminal Rst1of the second sub-pixel circuit200, the writing control terminal Input of the first sub-pixel circuit100receives the scanning signal provided by the second scanning signal terminal S2and writes a data signal output by the data terminal Data into the driving sub-circuit103, so as to compensate for the threshold voltage of the driving sub-circuit103; while the driving sub-circuit103of the second sub-pixel circuit200receives the voltage provided by the initial voltage terminal Vinit to reset the driving sub-circuit103of the second sub-pixel circuit200. 
When the third scanning signal terminal S3outputs a scanning signal, the writing control terminal Input of the second sub-pixel circuit200receives the scanning signal provided by the third scanning signal terminal S3, and writes a data signal output by the data terminal Data into the driving sub-circuit103, so as to compensate for the threshold voltage of the driving sub-circuit103of the second sub-pixel circuit200. Therefore, the sub-pixel circuits located in two adjacent columns are controlled through different scanning signals, so that the signals output by the data terminal are written into the two sub-pixel circuits in different time periods, and the threshold voltages are compensated. Since the writing state occurs at different times, two adjacent sub-pixels may share a single data signal line. On the basis of the above, sub-pixel circuits located in two adjacent columns are connected to the same data terminal, and the sub-pixel circuits in the two adjacent columns are controlled through different scanning signals, so that the threshold voltages of the sub-pixel circuits in the two adjacent columns may be compensated in different time periods. Some embodiments of the present disclosure provide the pixel circuit including the first sub-pixel circuit100disposed in the first sub-pixel and the second sub-pixel circuit200disposed in the second sub-pixel. The first sub-pixel circuit100and the second sub-pixel circuit200are located in two adjacent columns, and the first sub-pixel circuit100and the second sub-pixel circuit200are of the same structure. Based on this, the first reset control terminal Rst1and the writing control terminal Input of the first sub-pixel circuit100are connected to the first scanning signal terminal S1and the second scanning signal terminal S2, respectively, and the first reset control terminal Rst1and the writing control terminal Input of the second sub-pixel circuit200are connected to the second scanning signal terminal S2and the third scanning signal terminal S3, respectively, so that the first sub-pixel circuit100and the second sub-pixel circuit200may be turned on in a staggered manner. Signals output by the same data terminal are written in different time periods, so that two adjacent sub-pixels may share a single data signal line while the threshold voltage compensation is achieved. Since two adjacent columns of sub-pixels may share the single data signal line, the number of data signal lines is reduced, and a purpose of reducing the wiring density of the data signal lines and the first power supply voltage signal lines is achieved. As a result, the risk of the X-Bright line defect is reduced. On the basis that the wiring density is reduced, the critical dimension of the data signal line may be increased appropriately to improve the transmission of the data signal and display effect. In some embodiments, as shown inFIG.4A, the pixel circuit10further includes a third sub-pixel circuit300. The third sub-pixel circuit300and the first sub-pixel circuit100are located in two adjacent rows, respectively, and the third sub-pixel circuit300and the first sub-pixel circuit100are located in a same column and connected to the same data terminal Data. As shown inFIG.4A, the third sub-pixel circuit300includes: a reset sub-circuit101, a writing compensation sub-circuit102, a driving sub-circuit103, a light-emitting control sub-circuit104, and a light-emitting device L.
Here, the first sub-pixel circuit100, the second sub-pixel circuit200, and the third sub-pixel circuit300are of the same circuit structure. A first reset control terminal Rst1and a writing control terminal Input of the third sub-pixel circuit300are connected to the third scanning signal terminal S3and a fourth scanning signal terminal S4, respectively. It may be understood that, since the first reset control terminals Rst1and the writing control terminals Input of the first sub-pixel circuit100, the second sub-pixel circuit200and the third sub-pixel circuit300are sequentially connected to different scanning signal terminals, and in a case where the scanning signal terminals sequentially output scanning signals, the first sub-pixel circuit100and each of the second sub-pixel circuit200and the third sub-pixel circuit300are in different states under the triggering of any scanning signal. For example, when the first scanning signal terminal S1, the second scanning signal terminal S2, the third scanning signal terminal S3and the fourth scanning signal terminal S4sequentially output scanning signals, the corresponding states of the first sub-pixel circuit100, the second sub-pixel circuit200and the third sub-pixel circuit300are as follows. When the first scanning signal terminal S1outputs a scanning signal, the first reset control terminal Rst1of the first sub-pixel circuit100receives the scanning signal of the first scanning signal terminal S1, and inputs the voltage provided by the initial voltage terminal Vinit to the driving sub-circuit103, so as to reset the driving sub-circuit103of the first sub-pixel circuit100. At this time, the second sub-pixel circuit200and the third sub-pixel circuit300are not operating. When the second scanning signal terminal S2outputs a scanning signal, the first sub-pixel circuit100and the second sub-pixel circuit200receive the scanning signal of the second scanning signal terminal S2, simultaneously. Although the first sub-pixel circuit100and the second sub-pixel circuit200operate, simultaneously, since the second scanning signal terminal S2is electrically connected to the writing control terminal Input of the first sub-pixel circuit100and the first reset control terminal Rst1of the second sub-pixel circuit200, the writing control terminal Input of the first sub-pixel circuit100receives the scanning signal provided by the second scanning signal terminal S2and writes a data signal provided by the data terminal Data into the driving sub-circuit103, so as to compensate for the threshold voltage of the driving sub-circuit103; while the driving sub-circuit103of the second sub-pixel circuit200receives the voltage provided by the initial voltage terminal Vinit to reset the driving sub-circuit103of the second sub-pixel circuit200. At this time, the third sub-pixel circuit300is not operating. When the third scanning signal terminal S3outputs a scanning signal, the second sub-pixel circuit200and the third sub-pixel circuit300receive the scanning signal of the third scanning signal terminal S3, simultaneously. 
Although the second sub-pixel circuit200and the third sub-pixel circuit300operate, simultaneously, since the third scanning signal terminal S3is electrically connected to the writing control terminal Input of the second sub-pixel circuit200and the first reset control terminal Rst1of the third sub-pixel circuit300, the writing control terminal Input of the second sub-pixel circuit200receives the scanning signal of the third scanning signal terminal S3and writes a data signal output by the data terminal Data into the driving sub-circuit103, so as to compensate for the threshold voltage of the driving sub-circuit103of the second sub-pixel circuit200; while the driving sub-circuit103of the third sub-pixel circuit300receives the voltage provided by the initial voltage terminal Vinit to reset the driving sub-circuit of the third sub-pixel circuit300. When the fourth scanning signal terminal S4outputs a scanning signal, the writing control terminal Input of the third sub-pixel circuit300receives the scanning signal provided by the fourth scanning signal terminal S4, and writes a data signal provided by the data terminal Data into the driving sub-circuit103, so as to compensate for the threshold voltage of the driving sub-circuit103. Therefore, three sub-pixel circuits that are arranged in a 2×2 array may be controlled through different scanning signals, so that the threshold voltages of the three sub-pixel circuits may be compensated at different time periods. Since the writing state occurs at different times, the three sub-pixel circuits may share a single data signal line. Based on the above, as shown inFIG.5A, the pixel circuit may further include a fourth sub-pixel circuit400. The fourth sub-pixel circuit400and the first sub-pixel circuit100are located in two adjacent rows, respectively. The fourth sub-pixel circuit400and the first sub-pixel circuit100are located in different columns and connected to the same data terminal. For example, as shown inFIG.5A, the pixel circuit includes sub-pixel circuits that are arranged in a 2×2 array, and the fourth sub-pixel circuit400is of the same structure as other sub-pixel circuits. A first reset control terminal Rst1and a writing control terminal Input of the fourth sub-pixel circuit400are connected to the fourth scanning signal terminal S4and a fifth scanning signal terminal S5, respectively. In this case, when the first scanning signal terminal S1, the second scanning signal terminal S2, the third scanning signal terminal S3, the fourth scanning signal terminal S4and the fifth scanning signal terminal S5sequentially output scanning signals, the corresponding states of the first sub-pixel circuit100, the second sub-pixel circuit200, the third sub-pixel circuit300and the fourth sub-pixel circuit400are as follows. When the first scanning signal terminal S1outputs a scanning signal, the first reset control terminal Rst1of the first sub-pixel circuit100receives the scanning signal of the first scanning signal terminal S1, and inputs the voltage provided by the initial voltage terminal Vinit to the driving sub-circuit103, so as to reset the driving sub-circuit103of the first sub-pixel circuit100. At this time, the second sub-pixel circuit200, the third sub-pixel circuit300and the fourth sub-pixel circuit400are not operating.
When the second scanning signal terminal S2outputs a scanning signal, the writing control terminal Input of the first sub-pixel circuit100receives the scanning signal output by the second scanning signal terminal S2, and writes a data signal output by the data terminal Data into the driving sub-circuit103, so as to compensate for the threshold voltage of the driving sub-circuit103of the first sub-pixel circuit100; while the first reset control terminal Rst1of the second sub-pixel circuit200receives the scanning signal of the second scanning signal terminal S2, and inputs the voltage provided by the initial voltage terminal Vinit to the driving sub-circuit103, so as to reset the driving sub-circuit103of the second sub-pixel circuit200. At this time, the third sub-pixel circuit300and the fourth sub-pixel circuit400are not operating. When the third scanning signal terminal S3outputs a scanning signal, the second sub-pixel circuit200and the third sub-pixel circuit300receive the scanning signal of the third scanning signal terminal S3, simultaneously. Although the second sub-pixel circuit200and the third sub-pixel circuit300operate, simultaneously, since the third scanning signal terminal S3is electrically connected to the writing control terminal Input of the second sub-pixel circuit200and the first reset control terminal Rst1of the third sub-pixel circuit300, the writing control terminal Input of the second sub-pixel circuit200receives the scanning signal output by the third scanning signal terminal S3and writes a data signal provided by the data terminal Data into the driving sub-circuit103, so as to compensate for the threshold voltage of the driving sub-circuit103of the second sub-pixel circuit200; while the driving sub-circuit103of the third sub-pixel circuit300receives the voltage provided by the initial voltage terminal Vinit to reset the driving sub-circuit103of the third sub-pixel circuit300. When the fourth scanning signal terminal S4outputs a scanning signal, the third sub-pixel circuit300and the fourth sub-pixel circuit400receive the scanning signal of the fourth scanning signal terminal S4, simultaneously. Although the third sub-pixel circuit300and the fourth sub-pixel circuit400operate, simultaneously, since the fourth scanning signal terminal S4is electrically connected to the writing control terminal Input of the third sub-pixel circuit300and the first reset control terminal Rst1of the fourth sub-pixel circuit400, the writing control terminal Input of the third sub-pixel circuit300receives the scanning signal output by the fourth scanning signal terminal S4and writes a data signal output by the data terminal Data into the driving sub-circuit103, so as to compensate for the threshold voltage of the driving sub-circuit103of the third sub-pixel circuit300; while the driving sub-circuit103of the fourth sub-pixel circuit400receives the voltage provided by the initial voltage terminal Vinit to reset the driving sub-circuit103of the fourth sub-pixel circuit400. When the fifth scanning signal terminal S5outputs a scanning signal, the writing control terminal Input of the fourth sub-pixel circuit400receives the scanning signal output by the fifth scanning signal terminal S5, and writes a data signal output by the data terminal Data into the driving sub-circuit103, so as to compensate for the threshold voltage of the driving sub-circuit103of the fourth sub-pixel circuit400. 
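The staggered control of the 2×2 arrangement above follows a simple rule: sub-pixel circuit i has its first reset control terminal on scanning signal terminal S(i) and its writing control terminal on S(i+1), so each scanning signal resets one circuit while writing its predecessor. The following Python sketch enumerates the resulting schedule and also evaluates the per-row scan time for the 2348-row, 60 Hz example discussed with FIG.7 below; the function name is illustrative, not from the patent.

def staggered_schedule(num_circuits):
    # Yield (scanning terminal, circuit being reset, circuit being written)
    # as terminals S1 .. S(n+1) output scanning signals in sequence.
    for k in range(1, num_circuits + 2):
        resets = k if k <= num_circuits else None
        writes = k - 1 if k - 1 >= 1 else None
        yield f"S{k}", resets, writes

for terminal, resets, writes in staggered_schedule(4):
    print(terminal,
          f"resets circuit {resets}" if resets else "resets nothing",
          f"writes circuit {writes}" if writes else "writes nothing")
# S1 resets 1; S2 resets 2, writes 1; S3 resets 3, writes 2;
# S4 resets 4, writes 3; S5 writes 4. No two circuits are written in the
# same period, which is why all four can share a single data signal line.

rows, refresh_hz = 2348, 60  # figures from the FIG.7 timing example below
print(f"per-row scan time: {1 / (rows * refresh_hz) * 1e6:.1f} microseconds")  # ~7.1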
Therefore, the sub-pixel circuits that are arranged in a 2×2 array may be controlled through different scanning signals, so that the signals output by the data terminal are written into the four sub-pixel circuits in different time periods to compensate for the threshold voltages. Since the writing state occurs at different times, the four sub-pixel circuits may share a single data signal line. By analogy, a first reset control terminal Rst1and a writing control terminal Input of every two adjacent columns of sub-pixel circuits are sequentially connected to two adjacent scanning signal terminals in a staggered manner, and the two adjacent columns of sub-pixels may be controlled through different scanning signals, so that the threshold voltages of the two adjacent columns of sub-pixels may be compensated at different time periods, and thus the two adjacent columns of sub-pixels may share a single data signal line. In some embodiments, as shown inFIG.6, the pixel circuit includes sub-pixel circuits in three rows and two columns. That is, the pixel circuit includes a first sub-pixel circuit100, a second sub-pixel circuit200, a third sub-pixel circuit300, a fourth sub-pixel circuit400, a fifth sub-pixel circuit500, and a sixth sub-pixel circuit600. As shown inFIG.7, if the array substrate includes2348rows of pixels and the scanning frequency is 60 Hz, the scanning time of each row is 1/(2348×60) s, i.e., approximately 7.1 μs. For a single sub-pixel circuit, its writing compensation time is half of the scanning time, i.e., approximately 3.55 μs. On this basis, an operating principle of the pixel circuit as shown inFIG.6will be illustrated in detail in combination with the signal timing diagram as shown inFIG.7. The operating principle of the pixel circuit may be divided into eight periods, i.e., a first scanning period P1to an eighth scanning period P8. Each period will be described below. In the first scanning period P1, as shown inFIG.7, since the first scanning signal terminal S1outputs a low-level signal, a first transistor T1of the first sub-pixel circuit100is turned on, and a third transistor T3, a fourth transistor T4, a fifth transistor T5, a sixth transistor T6and a driving transistor Td of the first sub-pixel circuit100are all turned off. At this time, the second sub-pixel circuit200, the third sub-pixel circuit300, the fourth sub-pixel circuit400, the fifth sub-pixel circuit500, and the sixth sub-pixel circuit600are not operating. The first transistor T1of the first sub-pixel circuit100is turned on, so that the voltage (denoted as V0) provided by the initial voltage terminal Vinit is input to a gate of the driving transistor Td to reset the gate of the driving transistor Td. In the second scanning period P2, as shown inFIG.7, since the second scanning signal terminal S2outputs a low-level signal, the third transistor T3and the fourth transistor T4of the first sub-pixel circuit100are turned on, the driving transistor Td of the first sub-pixel circuit100is turned on, and the first transistor T1, the fifth transistor T5, and the sixth transistor T6of the first sub-pixel circuit100are all turned off. A first transistor T1of the second sub-pixel circuit200is turned on, and a third transistor T3, a fourth transistor T4, a fifth transistor T5, a sixth transistor T6, and a driving transistor Td of the second sub-pixel circuit200are all turned off.
Since the third transistor T3 and the fourth transistor T4 of the first sub-pixel circuit 100 are turned on, the data signal (denoted as Vdata1) provided by the data terminal Data may be input to a first electrode of the driving transistor Td, so that Vgs of the driving transistor Td is equal to (V0−Vdata1) (Vgs=V0−Vdata1). In a case where V0 is −3 V, Vgs is less than 0 V, and the driving transistor Td is in an on state. In this case, a gate voltage of the driving transistor Td gradually increases until the gate voltage reaches (Vdata1+Vth) (Vth is the threshold voltage of the driving transistor), thereby achieving the threshold voltage compensation on the driving transistor. Therefore, Vgs of the driving transistor Td is equal to (Vdata1+Vth−Vdata1), i.e., equal to Vth (Vgs=Vdata1+Vth−Vdata1=Vth), so that the driving transistor Td is in an off state. Since the first transistor T1 of the second sub-pixel circuit 200 is turned on, the voltage (denoted as V0) provided by the initial voltage terminal Vinit may be input to the gate of the driving transistor Td to reset the gate of the driving transistor Td. In the third scanning period P3, as shown in FIG. 7, since the third scanning signal terminal S3 outputs a low-level signal, the third transistor T3 and the fourth transistor T4 of the second sub-pixel circuit 200 are turned on, the driving transistor Td of the second sub-pixel circuit 200 is turned on, and the first transistor T1, the fifth transistor T5, and the sixth transistor T6 of the second sub-pixel circuit 200 are all turned off. The first transistor T1 of the third sub-pixel circuit 300 is turned on, and the third transistor T3, the fourth transistor T4, the fifth transistor T5, the sixth transistor T6, and the driving transistor Td of the third sub-pixel circuit 300 are all turned off. Since the third transistor T3 and the fourth transistor T4 of the second sub-pixel circuit 200 are turned on, the data signal provided by the data terminal Data may be input to a first electrode of the driving transistor Td to compensate for the threshold voltage of the driving transistor Td. Moreover, since the first transistor T1 of the third sub-pixel circuit 300 is turned on, the voltage provided by the initial voltage terminal Vinit may be input to a gate of the driving transistor Td to reset the gate of the driving transistor Td. In the fourth scanning period P4, as shown in FIG. 7, since the fourth scanning signal terminal S4 outputs a low-level signal, the third transistor T3 and the fourth transistor T4 of the third sub-pixel circuit 300 are turned on, the driving transistor Td of the third sub-pixel circuit 300 is turned on, and the first transistor T1, the fifth transistor T5, and the sixth transistor T6 of the third sub-pixel circuit 300 are all turned off. The first transistor T1 of the fourth sub-pixel circuit 400 is turned on, and the third transistor T3, the fourth transistor T4, the fifth transistor T5, the sixth transistor T6, and the driving transistor Td of the fourth sub-pixel circuit 400 are all turned off. Since the third transistor T3 and the fourth transistor T4 of the third sub-pixel circuit 300 are turned on, the data signal provided by the data terminal Data may be input to a first electrode of the driving transistor Td to compensate for the threshold voltage of the driving transistor Td. Moreover, since the first transistor T1 of the fourth sub-pixel circuit 400 is turned on, the voltage provided by the initial voltage terminal Vinit may be input to a gate of the driving transistor Td to reset the gate of the driving transistor Td.
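The compensation mechanism repeated in these periods can be condensed into a short worked equation. The following LaTeX fragment is a sketch restating only the relations already given above (V0, Vdata1, and Vth are as defined in the text):

    \begin{aligned}
    V_{gs} &= V_0 - V_{data1} < 0 \quad (\text{e.g., } V_0 = -3\ \mathrm{V}), \\
    V_{gate} &\ \text{rises until}\ V_{gate} = V_{data1} + V_{th}, \\
    V_{gs} &= (V_{data1} + V_{th}) - V_{data1} = V_{th},
    \end{aligned}

at which point the driving transistor turns off, leaving its own threshold voltage stored at its gate for the light-emitting phase.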
In the fifth scanning period P5, as shown in FIG. 7, since the fifth scanning signal terminal S5 outputs a low-level signal, the third transistor T3 and the fourth transistor T4 of the fourth sub-pixel circuit 400 are turned on, the driving transistor Td of the fourth sub-pixel circuit 400 is turned on, and the first transistor T1, the fifth transistor T5, and the sixth transistor T6 of the fourth sub-pixel circuit 400 are all turned off. The first transistor T1 of the fifth sub-pixel circuit 500 is turned on, and the third transistor T3, the fourth transistor T4, the fifth transistor T5, the sixth transistor T6, and the driving transistor Td of the fifth sub-pixel circuit 500 are all turned off. Since the third transistor T3 and the fourth transistor T4 of the fourth sub-pixel circuit 400 are turned on, the data signal output by the data terminal Data may be input to a first electrode of the driving transistor Td to compensate for the threshold voltage of the driving transistor Td. Moreover, since the first transistor T1 of the fifth sub-pixel circuit 500 is turned on, the voltage provided by the initial voltage terminal Vinit may be input to a gate of the driving transistor Td to reset the gate of the driving transistor Td. In the sixth scanning period P6, as shown in FIG. 7, since the sixth scanning signal terminal S6 outputs a low-level signal, the third transistor T3 and the fourth transistor T4 of the fifth sub-pixel circuit 500 are turned on, the driving transistor Td of the fifth sub-pixel circuit 500 is turned on, and the first transistor T1, the fifth transistor T5, and the sixth transistor T6 of the fifth sub-pixel circuit 500 are all turned off. The first transistor T1 of the sixth sub-pixel circuit 600 is turned on, and the third transistor T3, the fourth transistor T4, the fifth transistor T5, the sixth transistor T6, and the driving transistor Td of the sixth sub-pixel circuit 600 are all turned off. Since the third transistor T3 and the fourth transistor T4 of the fifth sub-pixel circuit 500 are turned on, the data signal provided by the data terminal Data may be input to a first electrode of the driving transistor Td to compensate for the threshold voltage of the driving transistor Td. Moreover, since the first transistor T1 of the sixth sub-pixel circuit 600 is turned on, the voltage provided by the initial voltage terminal Vinit may be input to a gate of the driving transistor Td to reset the gate of the driving transistor Td. In the seventh scanning period P7, as shown in FIG. 7, since the seventh scanning signal terminal S7 outputs a low-level signal, the third transistor T3 and the fourth transistor T4 of the sixth sub-pixel circuit 600 are turned on, the driving transistor Td of the sixth sub-pixel circuit 600 is turned on, and the first transistor T1, the fifth transistor T5, and the sixth transistor T6 of the sixth sub-pixel circuit 600 are all turned off. Since the third transistor T3 and the fourth transistor T4 of the sixth sub-pixel circuit 600 are turned on, the data signal provided by the data terminal Data may be input to a first electrode of the driving transistor Td to compensate for the threshold voltage of the driving transistor Td.
In the eighth scanning period P8 (light-emitting phase), as shown in FIG. 7, since the enable terminal EM (E1) outputs a low-level signal, the first sub-pixel circuit 100, the second sub-pixel circuit 200, the third sub-pixel circuit 300, and the fourth sub-pixel circuit 400 each may close the current path between the first power supply voltage terminal and the second power supply voltage terminal in response to the low-level signal (i.e., the enable signal) output by the enable terminal, so that the driving current is transmitted to the light-emitting device. Since the enable terminal EM (E2) outputs a low-level signal, the fifth sub-pixel circuit 500 and the sixth sub-pixel circuit 600 each may close the current path between the first power supply voltage terminal and the second power supply voltage terminal in response to the low-level signal (i.e., the enable signal) output by the enable terminal, so that the driving current is transmitted to the light-emitting device. It will be noted that a starting time of the light-emitting phase is not limited in this embodiment. For example, as shown in FIG. 7, after the seventh scanning period is completed, that is, after the data signal provided by the data terminal Data is input to the first electrode of the driving transistor Td of the sixth sub-pixel circuit 600, the light-emitting device in each of the first sub-pixel circuit 100, the second sub-pixel circuit 200, the third sub-pixel circuit 300, and the fourth sub-pixel circuit 400 emits light. In this way, a situation in which subsequent timings are staggered may be avoided. In some embodiments, as shown in FIGS. 3B, 4B and 5B, the reset sub-circuit 101 is electrically connected to a second reset control terminal Rst2 and the light-emitting device L. The reset sub-circuit 101 is further configured to input the voltage provided by the initial voltage terminal Vinit to the light-emitting device L under control of the second reset control terminal Rst2. That is, the reset sub-circuit 101 is further configured to input the voltage provided by the initial voltage terminal Vinit to the light-emitting device L in response to a reset control signal output by the second reset control terminal Rst2. A second reset control terminal Rst2 of the first sub-pixel circuit 100 is connected to the third scanning signal terminal S3, and a second reset control terminal Rst2 of the second sub-pixel circuit 200 is connected to the fourth scanning signal terminal S4. It may be understood that, since the second reset control terminals Rst2 of the first sub-pixel circuit 100 and the second sub-pixel circuit 200 are connected to different scanning signal terminals, the first sub-pixel circuit 100 and the second sub-pixel circuit 200 are in different states under the triggering of different scanning signals. For example, in a case where the third scanning signal terminal S3 and the fourth scanning signal terminal S4 output scanning signals at different times, the corresponding states of the first sub-pixel circuit 100, the second sub-pixel circuit 200, and the third sub-pixel circuit 300 are as follows. When the third scanning signal terminal S3 outputs a scanning signal, the writing control terminal Input of the second sub-pixel circuit 200 receives the scanning signal of the third scanning signal terminal S3 and writes the data signal provided by the data terminal Data into the driving sub-circuit 103 to compensate for the threshold voltage of the driving sub-circuit 103.
At this time, the second reset control terminal Rst2 of the first sub-pixel circuit 100 receives the scanning signal of the third scanning signal terminal S3 and inputs the voltage provided by the initial voltage terminal Vinit to the light-emitting device L, so as to reset the anode of the light-emitting device L, thereby forcing a black picture and mitigating afterimage. When the fourth scanning signal terminal S4 outputs a scanning signal, the writing control terminal Input of the third sub-pixel circuit 300 receives the scanning signal output by the fourth scanning signal terminal S4 and writes the data signal output by the data terminal Data into the driving sub-circuit 103 to compensate for the threshold voltage of the driving sub-circuit 103. At this time, the second reset control terminal Rst2 of the second sub-pixel circuit 200 receives the scanning signal of the fourth scanning signal terminal S4 and inputs the voltage provided by the initial voltage terminal Vinit to the light-emitting device L, so as to reset the anode of the light-emitting device L, thereby forcing a black picture and mitigating afterimage. In some embodiments, as shown in FIG. 5B, in a case where the pixel circuit 10 includes the third sub-pixel circuit 300, a second reset control terminal Rst2 of the third sub-pixel circuit 300 is connected to the fifth scanning signal terminal S5. When the fifth scanning signal terminal S5 outputs a scanning signal, the writing control terminal Input of the fourth sub-pixel circuit 400 receives the scanning signal of the fifth scanning signal terminal S5 and writes the data signal provided by the data terminal Data into the driving sub-circuit 103 to compensate for the threshold voltage of the driving sub-circuit 103. At this time, the second reset control terminal Rst2 of the third sub-pixel circuit 300 receives the scanning signal of the fifth scanning signal terminal S5 and inputs the voltage provided by the initial voltage terminal Vinit to the light-emitting device L, so as to reset the anode of the light-emitting device L, thereby forcing a black picture and mitigating afterimage. Based on the above, as shown in FIG. 5B, in a case where the pixel circuit further includes the fourth sub-pixel circuit 400, a second reset control terminal Rst2 of the fourth sub-pixel circuit 400 is connected to the sixth scanning signal terminal S6. When the sixth scanning signal terminal S6 outputs a scanning signal, the second reset control terminal Rst2 of the fourth sub-pixel circuit 400 receives the scanning signal of the sixth scanning signal terminal S6 and inputs the voltage provided by the initial voltage terminal Vinit to the light-emitting device L, so as to reset the anode of the light-emitting device L, thereby forcing a black picture and mitigating afterimage. By analogy, the first reset control terminal Rst1 and the second reset control terminal Rst2 of every two adjacent columns of sub-pixel circuits are sequentially connected to two adjacent scanning signal terminals, and two adjacent columns of sub-pixels are controlled through different scanning signals, so that the two adjacent columns of sub-pixel circuits may input the voltage provided by the initial voltage terminal Vinit to the light-emitting devices L at different time periods, so as to force a black picture and mitigate afterimage.
In some embodiments, as shown in FIGS. 3C, 4C and 5C, the driving sub-circuit 103 includes a driving transistor Td, and a gate of the driving transistor Td is electrically connected to the reset sub-circuit 101. A first electrode of the driving transistor Td is electrically connected to the writing compensation sub-circuit 102, and a second electrode of the driving transistor Td is electrically connected to the light-emitting control sub-circuit 104. In some embodiments, as shown in FIGS. 3C, 4C and 5C, in addition to the driving transistor Td, the driving sub-circuit 103 further includes a capacitor C. A first end of the capacitor C is electrically connected to the gate of the driving transistor Td, and a second end of the capacitor C is electrically connected to the first power supply voltage terminal VDD. In some embodiments, as shown in FIGS. 3C, 4C and 5C, the reset sub-circuit 101 includes a first transistor T1 and a second transistor T2. A gate of the first transistor T1 is electrically connected to the first reset control terminal Rst1, a first electrode of the first transistor T1 is electrically connected to the initial voltage terminal Vinit, and a second electrode of the first transistor T1 is electrically connected to the gate of the driving transistor Td. A gate of the second transistor T2 is electrically connected to the second reset control terminal Rst2, a first electrode of the second transistor T2 is electrically connected to the initial voltage terminal Vinit, and a second electrode of the second transistor T2 is electrically connected to the light-emitting device L. In a case where the first reset control terminal Rst1 and the second reset control terminal Rst2 are electrically connected to different scanning signal terminals, the first transistor T1 is capable of being turned on or turned off under the control of the first reset control terminal Rst1, and the second transistor T2 is capable of being turned on or turned off under the control of the second reset control terminal Rst2. That is, the first and second transistors both function as switches. It will be noted that the reset sub-circuit 101 may further include a plurality of switching transistors connected in parallel with the first transistor T1, and/or a plurality of switching transistors connected in parallel with the second transistor T2. The above is merely an example of the reset sub-circuit 101; other structures with the same function as the reset sub-circuit 101 will not be repeated herein, but all shall be included in the protection scope of the present disclosure. In some embodiments, as shown in FIGS. 3C, 4C and 5C, the writing compensation sub-circuit 102 includes a third transistor T3 and a fourth transistor T4. A gate of the third transistor T3 is electrically connected to the writing control terminal Input, a first electrode of the third transistor T3 is electrically connected to the gate of the driving transistor Td, and a second electrode of the third transistor T3 is electrically connected to the second electrode of the driving transistor Td. A gate of the fourth transistor T4 is electrically connected to the writing control terminal Input, a first electrode of the fourth transistor T4 is electrically connected to the data terminal Data, and a second electrode of the fourth transistor T4 is electrically connected to the first electrode of the driving transistor.
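For readers who prefer a structural summary, the connections recited in this and the preceding paragraphs can be collected into a small netlist. The following Python sketch is purely illustrative (the net names are hypothetical, not taken from the figures); each transistor maps to a (gate, first electrode, second electrode) triple:

    # Hypothetical netlist of one sub-pixel circuit, as recited above.
    SUB_PIXEL_NETLIST = {
        "T1": ("Rst1",    "Vinit",   "Td_gate"),   # reset sub-circuit 101
        "T2": ("Rst2",    "Vinit",   "EL_anode"),  # reset sub-circuit 101
        "T3": ("Input",   "Td_gate", "Td_e2"),     # writing compensation sub-circuit 102
        "T4": ("Input",   "Data",    "Td_e1"),     # writing compensation sub-circuit 102
        "T5": ("EM",      "Td_e2",   "EL_anode"),  # light-emitting control sub-circuit 104
        "T6": ("EM",      "VDD",     "Td_e1"),     # light-emitting control sub-circuit 104
        "Td": ("Td_gate", "Td_e1",   "Td_e2"),     # driving transistor
    }
    # Capacitor C: first end at the Td gate, second end at VDD.
    STORAGE_CAPACITOR = ("Td_gate", "VDD")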
In a case where the writing control terminal Input is electrically connected to a corresponding scanning signal terminal, the third transistor T3 and the fourth transistor T4 are both capable of being turned on or turned off under the control of the writing control terminal Input, and function as switches. It will be noted that the writing compensation sub-circuit 102 may further include a plurality of switching transistors connected in parallel with the third transistor T3, and/or a plurality of switching transistors connected in parallel with the fourth transistor T4. The above is merely an example of the writing compensation sub-circuit 102; other structures with the same function as the writing compensation sub-circuit 102 will not be repeated herein, but all shall be included in the protection scope of the present disclosure. In some embodiments, as shown in FIGS. 3C, 4C and 5C, the light-emitting control sub-circuit 104 includes a fifth transistor T5 and a sixth transistor T6. A gate of the fifth transistor T5 is electrically connected to the enable terminal EM, a first electrode of the fifth transistor T5 is electrically connected to the second electrode of the driving transistor Td, and a second electrode of the fifth transistor T5 is electrically connected to the light-emitting device L. A gate of the sixth transistor T6 is electrically connected to the enable terminal EM, a first electrode of the sixth transistor T6 is electrically connected to the first power supply voltage terminal VDD, and a second electrode of the sixth transistor T6 is electrically connected to the first electrode of the driving transistor Td. It will be noted that the light-emitting control sub-circuit 104 may further include a plurality of switching transistors connected in parallel with the fifth transistor T5, and/or a plurality of switching transistors connected in parallel with the sixth transistor T6. The above is merely an example of the light-emitting control sub-circuit 104. Other structures having the same function as the light-emitting control sub-circuit 104 will not be repeated herein, but all shall be included in the protection scope of the present disclosure. Based on the above description of each sub-pixel circuit, a specific driving process of the above pixel circuit will be described in detail with reference to FIG. 5C. The first transistors T1, second transistors T2, third transistors T3, fourth transistors T4, fifth transistors T5, sixth transistors T6, and driving transistors Td of the first sub-pixel circuit 100, the second sub-pixel circuit 200, the third sub-pixel circuit 300, and the fourth sub-pixel circuit 400 are all P-type transistors. In the first scanning period P1, the first scanning signal terminal S1 outputs a low-level signal; the second scanning signal terminal S2, the third scanning signal terminal S3, the fourth scanning signal terminal S4, the fifth scanning signal terminal S5, and the sixth scanning signal terminal S6 all output high-level signals; and the enable terminal EM outputs a high-level signal. Based on this, an equivalent circuit diagram of the pixel circuit as shown in FIG. 5C is shown in FIG. 8A. The first transistor T1 of the first sub-pixel circuit 100 is turned on, and the second transistor T2, the third transistor T3, the fourth transistor T4, the fifth transistor T5, the sixth transistor T6, and the driving transistor Td of the first sub-pixel circuit 100 are all turned off.
The first transistor T1 of the first sub-pixel circuit 100 is turned on, so that the voltage (denoted as V0) of the initial voltage terminal Vinit is input to a gate of the driving transistor Td to reset the gate of the driving transistor Td. In the second scanning period P2, the second scanning signal terminal S2 outputs a low-level signal; the first scanning signal terminal S1, the third scanning signal terminal S3, the fourth scanning signal terminal S4, the fifth scanning signal terminal S5, and the sixth scanning signal terminal S6 all output high-level signals; and the enable terminal EM outputs the high-level signal. Based on this, an equivalent circuit diagram of the pixel circuit shown in FIG. 5C is shown in FIG. 8B. The third transistor T3 and the fourth transistor T4 of the first sub-pixel circuit 100 are turned on, the driving transistor Td of the first sub-pixel circuit 100 is turned on, and the first transistor T1, the second transistor T2, the fifth transistor T5, and the sixth transistor T6 of the first sub-pixel circuit 100 are all turned off. Since the third transistor T3 and the fourth transistor T4 of the first sub-pixel circuit 100 are turned on, the data signal (denoted as Vdata1) output by the data terminal Data is written into a first electrode of the driving transistor Td, so that Vgs of the driving transistor Td is equal to (V0−Vdata1) (Vgs=V0−Vdata1). In a case where V0 is −3 V, Vgs is less than 0 V, and the driving transistor Td is in an on state. In this case, a gate voltage of the driving transistor Td gradually increases until the gate voltage reaches (Vdata1+Vth) (Vth is the threshold voltage of the driving transistor), so as to achieve the threshold voltage compensation on the driving transistor. Therefore, Vgs of the driving transistor Td is equal to (Vdata1+Vth−Vdata1), i.e., equal to Vth (Vgs=Vdata1+Vth−Vdata1=Vth), so that the driving transistor Td is in an off state. The first transistor T1 of the second sub-pixel circuit 200 is turned on, and the second transistor T2, the third transistor T3, the fourth transistor T4, the fifth transistor T5, the sixth transistor T6, and the driving transistor Td of the second sub-pixel circuit 200 are all turned off. The first transistor T1 of the second sub-pixel circuit 200 is turned on, so that the voltage (denoted as V0) of the initial voltage terminal Vinit is input to a gate of the driving transistor Td to reset the gate of the driving transistor Td. In the third scanning period P3, the third scanning signal terminal S3 outputs a low-level signal; the first scanning signal terminal S1, the second scanning signal terminal S2, the fourth scanning signal terminal S4, the fifth scanning signal terminal S5, and the sixth scanning signal terminal S6 all output high-level signals; and the enable terminal EM outputs the high-level signal. Based on this, an equivalent circuit diagram of the pixel circuit shown in FIG. 5C is shown in FIG. 8C. The second transistor T2 of the first sub-pixel circuit 100 is turned on, and the first transistor T1, the third transistor T3, the fourth transistor T4, the fifth transistor T5, and the sixth transistor T6 of the first sub-pixel circuit 100 are all turned off. A capacitor of the first sub-pixel circuit 100 keeps the gate voltage of the driving transistor Td at (Vdata1+Vth), and the second transistor T2 of the first sub-pixel circuit 100 is turned on, so that the voltage provided by the initial voltage terminal Vinit is input to an anode of a light-emitting device L to force a black picture and mitigate afterimage.
The third transistor T3 and the fourth transistor T4 of the second sub-pixel circuit 200 are turned on, the driving transistor Td of the second sub-pixel circuit 200 is turned on, and the first transistor T1, the second transistor T2, the fifth transistor T5, and the sixth transistor T6 of the second sub-pixel circuit 200 are all turned off. Since the third transistor T3 and the fourth transistor T4 of the second sub-pixel circuit 200 are turned on, the data signal (denoted as Vdata2) output by the data terminal Data is written into a first electrode of the driving transistor Td, so that Vgs of the driving transistor Td is equal to (V0−Vdata2) (Vgs=V0−Vdata2). In a case where V0 is −3 V, Vgs is less than 0 V, and the driving transistor Td is in an on state. In this case, a gate voltage of the driving transistor Td gradually increases until the gate voltage reaches (Vdata2+Vth), so as to achieve the threshold voltage compensation on the driving transistor. Therefore, Vgs of the driving transistor Td is equal to (Vdata2+Vth−Vdata2), i.e., equal to Vth (Vgs=Vdata2+Vth−Vdata2=Vth), so that the driving transistor Td is in an off state. The first transistor T1 of the third sub-pixel circuit 300 is turned on, and the second transistor T2, the third transistor T3, the fourth transistor T4, the fifth transistor T5, the sixth transistor T6, and the driving transistor Td of the third sub-pixel circuit 300 are all turned off. The first transistor T1 of the third sub-pixel circuit 300 is turned on, so that the voltage (denoted as V0) of the initial voltage terminal Vinit is input to a gate of the driving transistor Td to reset the gate of the driving transistor Td. In the fourth scanning period P4, the fourth scanning signal terminal S4 outputs a low-level signal; the first scanning signal terminal S1, the second scanning signal terminal S2, the third scanning signal terminal S3, the fifth scanning signal terminal S5, and the sixth scanning signal terminal S6 all output high-level signals; and the enable terminal EM outputs the high-level signal. Based on this, an equivalent circuit diagram of the pixel circuit as shown in FIG. 5C is shown in FIG. 8D. The second transistor T2 of the second sub-pixel circuit 200 is turned on, and the first transistor T1, the third transistor T3, the fourth transistor T4, the fifth transistor T5, and the sixth transistor T6 of the second sub-pixel circuit 200 are all turned off. A capacitor of the second sub-pixel circuit 200 keeps the gate voltage of the driving transistor Td at (Vdata2+Vth), and the second transistor T2 of the second sub-pixel circuit 200 is turned on, so that the voltage provided by the initial voltage terminal Vinit is input to an anode of the light-emitting device L to force a black picture and mitigate afterimage. The third transistor T3 and the fourth transistor T4 of the third sub-pixel circuit 300 are turned on, the driving transistor Td of the third sub-pixel circuit 300 is turned on, and the first transistor T1, the second transistor T2, the fifth transistor T5, and the sixth transistor T6 of the third sub-pixel circuit 300 are all turned off. Since the third transistor T3 and the fourth transistor T4 of the third sub-pixel circuit 300 are turned on, the data signal (denoted as Vdata3) output by the data terminal Data is written into a first electrode of the driving transistor Td, so that Vgs of the driving transistor Td is equal to (V0−Vdata3) (Vgs=V0−Vdata3). In a case where V0 is −3 V, Vgs is less than 0 V, and the driving transistor Td is in an on state.
In this case, a gate voltage of the driving transistor Td gradually increases until the gate voltage reaches (Vdata3+Vth), so as to achieve the threshold voltage compensation on the driving transistor. Therefore, Vgs of the driving transistor Td is equal to (Vdata3+Vth−Vdata3), i.e., equal to Vth (Vgs=Vdata3+Vth−Vdata3=Vth), so that the driving transistor Td is in an off state. The first transistor T1 of the fourth sub-pixel circuit 400 is turned on, and the second transistor T2, the third transistor T3, the fourth transistor T4, the fifth transistor T5, the sixth transistor T6, and the driving transistor Td of the fourth sub-pixel circuit 400 are all turned off. The first transistor T1 of the fourth sub-pixel circuit 400 is turned on, so that the voltage (denoted as V0) of the initial voltage terminal Vinit is input to a gate of the driving transistor Td to reset the gate of the driving transistor. In the fifth scanning period P5, the fifth scanning signal terminal S5 outputs a low-level signal; the first scanning signal terminal S1, the second scanning signal terminal S2, the third scanning signal terminal S3, the fourth scanning signal terminal S4, and the sixth scanning signal terminal S6 all output high-level signals; and the enable terminal EM outputs the high-level signal. Based on this, an equivalent circuit diagram of the pixel circuit as shown in FIG. 5C is shown in FIG. 8E. The second transistor T2 of the third sub-pixel circuit 300 is turned on, and the first transistor T1, the third transistor T3, the fourth transistor T4, the fifth transistor T5, and the sixth transistor T6 of the third sub-pixel circuit 300 are all turned off. A capacitor of the third sub-pixel circuit 300 keeps the gate voltage of the driving transistor Td at (Vdata3+Vth), and the second transistor T2 of the third sub-pixel circuit 300 is turned on, so that the voltage provided by the initial voltage terminal Vinit is input to an anode of the light-emitting device L to force a black picture and mitigate afterimage. The third transistor T3 and the fourth transistor T4 of the fourth sub-pixel circuit 400 are turned on, the driving transistor Td of the fourth sub-pixel circuit 400 is turned on, and the first transistor T1, the second transistor T2, the fifth transistor T5, and the sixth transistor T6 of the fourth sub-pixel circuit 400 are all turned off. Since the third transistor T3 and the fourth transistor T4 of the fourth sub-pixel circuit 400 are turned on, the data signal (denoted as Vdata4) output by the data terminal Data is written into a first electrode of the driving transistor Td, so that Vgs of the driving transistor Td is equal to (V0−Vdata4) (Vgs=V0−Vdata4). In a case where V0 is −3 V, Vgs is less than 0 V, and the driving transistor Td is in an on state. In this case, a gate voltage of the driving transistor Td gradually increases until the gate voltage reaches (Vdata4+Vth), so as to achieve the threshold voltage compensation on the driving transistor. Therefore, Vgs of the driving transistor Td is equal to (Vdata4+Vth−Vdata4), i.e., equal to Vth (Vgs=Vdata4+Vth−Vdata4=Vth), so that the driving transistor Td is in an off state. In the sixth scanning period P6, the sixth scanning signal terminal S6 outputs a low-level signal; the first scanning signal terminal S1, the second scanning signal terminal S2, the third scanning signal terminal S3, the fourth scanning signal terminal S4, and the fifth scanning signal terminal S5 all output high-level signals; and the enable terminal EM outputs the high-level signal.
Based on this, an equivalent circuit diagram of the pixel circuit as shown in FIG. 5C is shown in FIG. 8F. The second transistor T2 of the fourth sub-pixel circuit 400 is turned on, and the first transistor T1, the third transistor T3, the fourth transistor T4, the fifth transistor T5, and the sixth transistor T6 of the fourth sub-pixel circuit 400 are all turned off. A capacitor of the fourth sub-pixel circuit 400 keeps the gate voltage of the driving transistor Td at (Vdata4+Vth), and the second transistor T2 of the fourth sub-pixel circuit 400 is turned on, so that the voltage provided by the initial voltage terminal Vinit is input to an anode of the light-emitting device L to force a black picture and mitigate afterimage. Based on the above, the operation continues by analogy until the light-emitting phase. In the light-emitting phase, the first scanning signal terminal S1, the second scanning signal terminal S2, the third scanning signal terminal S3, the fourth scanning signal terminal S4, the fifth scanning signal terminal S5, and the sixth scanning signal terminal S6 all output high-level signals, and the enable terminal EM (E1) outputs a low-level signal. Based on this, an equivalent circuit diagram of the pixel circuit as shown in FIG. 5C is shown in FIG. 8G. The fifth transistors T5 and the sixth transistors T6 of the first sub-pixel circuit 100, the second sub-pixel circuit 200, the third sub-pixel circuit 300, and the fourth sub-pixel circuit 400 are turned on, and, since all the scanning signal terminals output high-level signals, the first transistors T1, the second transistors T2, the third transistors T3, and the fourth transistors T4 of the first sub-pixel circuit 100, the second sub-pixel circuit 200, the third sub-pixel circuit 300, and the fourth sub-pixel circuit 400 are all turned off. The first electrode of the driving transistor Td of the first sub-pixel circuit 100 and the first power supply voltage signal terminal VDD are connected, and the second electrode of the driving transistor Td of the first sub-pixel circuit 100 and the light-emitting device L are connected. On this basis, in a case where a difference between the gate voltage of the driving transistor Td and the power supply voltage signal Vdd provided by the first power supply voltage signal terminal VDD is less than the threshold voltage Vth thereof, the driving transistor Td is turned on. That is, in a case where ((Vdata1+Vth)−Vdd)<Vth, the driving current is capable of being transmitted into the light-emitting device L to drive the light-emitting device L to emit light. The first electrode of the driving transistor Td of the second sub-pixel circuit 200 and the first power supply voltage signal terminal VDD are connected, and the second electrode of the driving transistor Td of the second sub-pixel circuit 200 and the light-emitting device L are connected. On this basis, in the case where the difference between the gate voltage of the driving transistor Td and the power supply voltage signal Vdd provided by the first power supply voltage signal terminal VDD is less than the threshold voltage Vth thereof, the driving transistor Td is turned on. That is, in a case where ((Vdata2+Vth)−Vdd)<Vth, the driving current is capable of being transmitted into the light-emitting device L to drive the light-emitting device L to emit light.
The first electrode of the driving transistor Td of the third sub-pixel circuit 300 and the first power supply voltage signal terminal VDD are connected, and the second electrode of the driving transistor Td of the third sub-pixel circuit 300 and the light-emitting device L are connected. On this basis, in the case where the difference between the gate voltage of the driving transistor Td and the power supply voltage signal Vdd provided by the first power supply voltage signal terminal VDD is less than the threshold voltage Vth thereof, the driving transistor Td is turned on. That is, in a case where ((Vdata3+Vth)−Vdd)<Vth, the driving current is capable of being transmitted into the light-emitting device L to drive the light-emitting device L to emit light. The first electrode of the driving transistor Td of the fourth sub-pixel circuit 400 and the first power supply voltage signal terminal VDD are connected, and the second electrode of the driving transistor Td of the fourth sub-pixel circuit 400 and the light-emitting device L are connected. On this basis, in the case where the difference between the gate voltage of the driving transistor Td and the power supply voltage signal Vdd provided by the first power supply voltage signal terminal VDD is less than the threshold voltage Vth thereof, the driving transistor Td is turned on. That is, in a case where ((Vdata4+Vth)−Vdd)<Vth, the driving current is capable of being transmitted into the light-emitting device L to drive the light-emitting device L to emit light. It will be understood by a person skilled in the art that the current I for driving the light-emitting device L to emit light is I = K·(VG − VS − Vth)², where K = (1/2)·μ·Cox·(W/L), μ is the carrier mobility, Cox is the gate oxide capacitance per unit area, W/L is the width-to-length ratio of the driving transistor Td, and Vth is the threshold voltage. By analogy, the current flowing through the driving transistor Td in each sub-pixel circuit is related only to the data voltage provided by the data terminal Data for achieving display and to the first power supply voltage input by the first power supply voltage terminal VDD, and is not related to the threshold voltage Vth of the driving transistor Td, thereby eliminating the effect of the threshold voltage Vth of the driving transistor Td on the light-emitting brightness of the light-emitting device L. On this basis, it may be understood that in a case where different sub-pixel circuits receive different signals of the data terminal, the included driving sub-circuits may output different currents, so that the brightness of each light-emitting device is different. As shown in FIG. 9, a data voltage of the data terminal received by a sub-pixel circuit corresponding to d1 is 4 V, a data voltage of the data terminal received by a sub-pixel circuit corresponding to d2 is 3.5 V, a data voltage of the data terminal received by a sub-pixel circuit corresponding to d3 is 3 V, a data voltage of the data terminal received by a sub-pixel circuit corresponding to d4 is less than the data voltage received by the sub-pixel circuit corresponding to d3, and a data voltage of the data terminal received by a sub-pixel circuit corresponding to d5 is less than the data voltage received by the sub-pixel circuit corresponding to d4.
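Substituting the stored gate voltage (Vdata+Vth) and the source voltage Vdd into this expression makes the threshold independence explicit. The short derivation below is a sketch in LaTeX using only the quantities defined above:

    \[
    I = K\,(V_{G} - V_{S} - V_{th})^{2}
      = K\,\bigl[(V_{data} + V_{th}) - V_{dd} - V_{th}\bigr]^{2}
      = K\,(V_{data} - V_{dd})^{2},
    \]

so the emission current depends only on the data voltage and the first power supply voltage, not on Vth, which is the compensation effect described above.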
It will be understood by a person skilled in the art that, for an electroluminescent display panel, the smaller the Vdata voltage on the data line Data, the larger the current output to the light-emitting device L, and the higher the brightness of the light emitted by the light-emitting device L. In the above embodiments, all the transistors may alternatively be N-type transistors. In a case where the transistors are all N-type transistors, a corresponding scanning signal is required to be in a high-level state for a transistor to be turned on. It will be noted that a scanning direction is not limited in the embodiments. For example, the scanning direction may be row-by-row scanning from top to bottom. That is, sub-pixel circuits in a first row are scanned first, then sub-pixel circuits in a second row are scanned, and so on, until sub-pixel circuits in a last row are scanned. For another example, the scanning direction may be row-by-row scanning from bottom to top. That is, sub-pixel circuits in a last row are scanned first, then sub-pixel circuits in a previous row are scanned, and so on, until sub-pixel circuits in a first row are scanned. In some embodiments, as shown in FIG. 10, the pixel circuit includes n rows and m columns of sub-pixel circuits SPC, and the description is made by considering an example where the scanning direction is from bottom to top. In a first scanning period, the first scanning signal terminal S1 outputs a low-level signal, so as to reset odd-numbered sub-pixel circuits in an n-th row (i.e., a last row, for example, a first row from bottom to top) of sub-pixel circuits. That is, in the first scanning period, the driving sub-circuits of the first sub-pixel circuit, the third sub-pixel circuit, the fifth sub-pixel circuit, etc. in the last row of sub-pixel circuits are reset. In a second scanning period, the second scanning signal terminal S2 outputs a low-level signal, so as to compensate the threshold voltages of the driving sub-circuits of the odd-numbered sub-pixel circuits in the last row of sub-pixel circuits, and to reset the driving sub-circuits of even-numbered sub-pixel circuits in the last row of sub-pixel circuits. That is, in the second scanning period, the threshold voltages of the driving sub-circuits of the first sub-pixel circuit, the third sub-pixel circuit, the fifth sub-pixel circuit, etc. in the last row of sub-pixel circuits are compensated, and the driving sub-circuits of the second sub-pixel circuit, the fourth sub-pixel circuit, the sixth sub-pixel circuit, etc. in the last row of sub-pixel circuits are reset. In a third scanning period, the third scanning signal terminal S3 outputs a low-level signal; anodes of light-emitting devices of the odd-numbered sub-pixel circuits in the last row of sub-pixel circuits are reset; the threshold voltages of the driving sub-circuits of the even-numbered sub-pixel circuits in the last row of sub-pixel circuits are compensated; and driving sub-circuits of odd-numbered sub-pixel circuits in a second-to-last row (e.g., a second row from bottom to top) of sub-pixel circuits are reset.
In a fourth scanning period, the fourth scanning signal terminal S4 outputs a low-level signal; anodes of light-emitting devices of even-numbered sub-pixel circuits in the last row of sub-pixel circuits are reset; the threshold voltages of the driving sub-circuits of the odd-numbered sub-pixel circuits in the second-to-last row of sub-pixel circuits are compensated; and driving sub-circuits of even-numbered sub-pixel circuits in the second-to-last row of sub-pixel circuits are reset. In a fifth scanning period, the fifth scanning signal terminal S5 outputs a low-level signal; anodes of light-emitting devices of odd-numbered sub-pixel circuits in the second-to-last row of sub-pixel circuits are reset; the threshold voltages of the driving sub-circuits of the even-numbered sub-pixel circuits in the second-to-last row of sub-pixel circuits are compensated; and driving sub-circuits of odd-numbered sub-pixel circuits in a third-to-last row (e.g., a third row from bottom to top) of sub-pixel circuits are reset. In a sixth scanning period, the sixth scanning signal terminal S6 outputs a low-level signal; anodes of light-emitting devices of even-numbered sub-pixel circuits in the second-to-last row of sub-pixel circuits are reset; the threshold voltages of the driving sub-circuits of the odd-numbered sub-pixel circuits in the third-to-last row of sub-pixel circuits are compensated; and driving sub-circuits of even-numbered sub-pixel circuits in the third-to-last row of sub-pixel circuits are reset. In a seventh scanning period, the seventh scanning signal terminal S7 outputs a low-level signal; anodes of light-emitting devices of odd-numbered sub-pixel circuits in the third-to-last row of sub-pixel circuits are reset; and the threshold voltages of the driving sub-circuits of the even-numbered sub-pixel circuits in the third-to-last row of sub-pixel circuits are compensated. In the embodiments, in a case where the reset sub-circuit includes the first reset control terminal Rst1 and the second reset control terminal Rst2, the first scanning signal terminal S1, the second scanning signal terminal S2, the third scanning signal terminal S3, and the fourth scanning signal terminal S4 control the operation of the sub-pixel circuits in the first row; the third scanning signal terminal S3, the fourth scanning signal terminal S4, the fifth scanning signal terminal S5, and the sixth scanning signal terminal S6 control the operation of sub-pixel circuits in the second row; and the fifth scanning signal terminal S5, the sixth scanning signal terminal S6, the seventh scanning signal terminal S7, and the eighth scanning signal terminal S8 control the operation of sub-pixel circuits in the third row. Thus, in a case where the pixel circuit includes n rows of sub-pixel circuits, a total of (2n+2) scanning signal terminals are required. Some embodiments of the present disclosure provide an array substrate 2, as shown in FIG. 2, including a substrate 3, the pixel circuit 10, and a plurality of data signal lines disposed on the substrate 3. Each of the plurality of data signal lines is connected to a data terminal, the data signal line is configured to provide the data signal to the data terminal, and every two adjacent columns of sub-pixel circuits share a single data signal line. The pixel circuit 10 includes a plurality of sub-pixel circuits.
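The terminal-count rule and the four-terminal window per row can be sanity-checked with a short script (a hedged sketch; the helper names and the row indexing convention are illustrative):

    def scan_terminals_required(n_rows: int) -> int:
        """Each row uses four consecutive scan terminals, overlapping the next
        row by two, so n rows need 2n + 2 terminals in total."""
        return 2 * n_rows + 2

    def terminals_for_row(r: int) -> list[str]:
        """Row r (1-indexed) is controlled by terminals S(2r-1) .. S(2r+2)."""
        return [f"S{i}" for i in range(2 * r - 1, 2 * r + 3)]

    assert scan_terminals_required(3) == 8
    print(terminals_for_row(1))  # ['S1', 'S2', 'S3', 'S4']
    print(terminals_for_row(2))  # ['S3', 'S4', 'S5', 'S6']
    print(terminals_for_row(3))  # ['S5', 'S6', 'S7', 'S8']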
In some embodiments, the array substrate 2 further includes a plurality of first power supply voltage signal lines, and the plurality of data signal lines and the plurality of first power supply voltage signal lines are disposed in a same layer and in parallel. The array substrate further includes a plurality of scanning signal lines, a plurality of initial signal lines, and a plurality of enable signal lines. The plurality of scanning signal lines are disposed in a same layer, and the plurality of initial signal lines and the plurality of enable signal lines are disposed in a same layer. It will be noted that, in a case where the pixel circuit includes the capacitor, the plurality of scanning signal lines and a first electrode of the capacitor of the pixel circuit are disposed in a same layer, and the plurality of initial signal lines, the plurality of enable signal lines, and a second electrode of the capacitor are disposed in a same layer. Based on this, for example, as shown in FIG. 11, in the first sub-pixel circuit 100, the first transistor T1 includes a first active layer, a first insulating layer, a first gate, a first source, and a first drain. The first insulating layer is disposed between the first active layer and the first source and first drain; the first gate is connected to a first scanning signal line S1; the first source is electrically connected to an initial signal line Vinit; and the first drain is electrically connected to the third transistor T3. The second transistor T2 includes a second active layer, a second insulating layer, a second gate, a second source, and a second drain, and the second insulating layer is disposed between the second active layer and the second source and second drain; the second gate is electrically connected to a third scanning signal line S3; the second source is electrically connected to an initial signal line Vinit; and the second drain is electrically connected to the anode of the light-emitting device L. The third transistor T3 includes a third active layer, a third insulating layer, a third gate, a third source, and a third drain, and the third insulating layer is disposed between the third active layer and the third source and third drain; the third gate is electrically connected to a second scanning signal line S2, the third source is electrically connected to the gate of the driving transistor, and the third drain is electrically connected to the drain of the driving transistor. The fourth transistor T4 includes a fourth active layer, a fourth insulating layer, a fourth gate, a fourth source, and a fourth drain, and the fourth insulating layer is disposed between the fourth active layer and the fourth source and fourth drain; the fourth source passes through a via hole Q1 in the fourth insulating layer and is electrically connected to the fourth active layer; the fourth drain passes through a via hole Q2 in the fourth insulating layer and is electrically connected to the fourth active layer; the fourth gate is electrically connected to the second scanning signal line S2; and the fourth source is electrically connected to a data line Data.
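The same-layer groupings recited above can be tabulated. The following Python sketch is only a mnemonic (the layer labels are assumptions for illustration; the description above does not name the metal layers):

    # Hypothetical layer grouping, summarizing the same-layer statements above.
    LAYER_GROUPS = {
        "layer A (assumed gate metal)": [
            "scanning signal lines", "first electrode of capacitor C"],
        "layer B (assumed second gate metal)": [
            "initial signal lines", "enable signal lines", "second electrode of capacitor C"],
        "layer C (assumed source-drain metal)": [
            "data signal lines", "first power supply voltage signal lines (in parallel)"],
        "active layer": [
            "first to sixth active layers (same layer, same material)"],
    }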
The fifth transistor T5 includes a fifth active layer, a fifth insulating layer, a fifth gate, a fifth source, and a fifth drain, and the fifth insulating layer is disposed between the fifth active layer and the fifth source and fifth drain; the fifth source passes through a via hole Q3 in the fifth insulating layer and is electrically connected to the fifth active layer, and the fifth drain passes through a via hole Q4 in the fifth insulating layer and is electrically connected to the fifth active layer; the fifth gate is electrically connected to an enable signal line EM; and the fifth source is electrically connected to the drain of the driving transistor, and the fifth drain is electrically connected to the anode of the light-emitting device L. The sixth transistor T6 includes a sixth active layer, a sixth insulating layer, a sixth gate, a sixth source, and a sixth drain, and the sixth insulating layer is disposed between the sixth active layer and the sixth source and sixth drain; the sixth source passes through a via hole Q5 in the sixth insulating layer and is electrically connected to the sixth active layer, and the sixth drain passes through a via hole Q6 in the sixth insulating layer and is electrically connected to the sixth active layer; the sixth gate is electrically connected to the enable signal line EM; and the sixth source is electrically connected to a first power supply voltage signal line VDD, and the sixth drain is electrically connected to the fourth drain. Referring to FIG. 11, the fourth drain and the sixth drain are one and the same in a case where the via hole Q2 and the via hole Q6 are one and the same. In the embodiments of the present disclosure, the first active layer, the second active layer, the third active layer, the fourth active layer, the fifth active layer, and the sixth active layer are in a same layer and are made of a same material. By analogy, the scanning signal lines connected to gates of the transistors of the other sub-pixel circuits are sequentially staggered by a single scanning signal line, and the other connection manners are similar to those described above and will not be repeated herein. In some embodiments, FIG. 12 is a diagram showing a film layer structure defined by the dashed frame X in FIG. 10. For the specific explanation of the film layers, reference may be made to the above explanation of FIG. 11, which will not be repeated herein. The embodiments of the present disclosure further provide a method for driving the pixel circuit as described above. As shown in FIG. 13, the method includes S10 to S30. In S10, in a first scanning period P1, the reset sub-circuit 101 of the first sub-pixel circuit 100 inputs a voltage provided by the initial voltage terminal to the driving sub-circuit in response to a scanning signal provided by the first scanning signal terminal. In S20, in a second scanning period P2, the first sub-pixel circuit 100 inputs a data signal provided by the data terminal to the driving sub-circuit in response to a scanning signal provided by the second scanning signal terminal, and the second sub-pixel circuit 200 inputs the voltage provided by the initial voltage terminal Vinit to the driving sub-circuit 103 in response to the scanning signal provided by the second scanning signal terminal. In S30, in a third scanning period P3, the second sub-pixel circuit 200 writes a data signal output by the data terminal into the driving sub-circuit 103 in response to a scanning signal provided by the third scanning signal terminal.
In some embodiments, in a case where the pixel circuit further includes a third sub-pixel circuit, after S30, the method for driving the pixel circuit further includes the following steps. In the third scanning period P3, the third sub-pixel circuit 300 inputs the voltage provided by the initial voltage terminal Vinit to the driving sub-circuit 103 in response to the scanning signal provided by the third scanning signal terminal. In the fourth scanning period P4, the third sub-pixel circuit 300 writes a data signal output by the data terminal into the driving sub-circuit 103 in response to a scanning signal output by a fourth scanning signal terminal to compensate for a threshold voltage of the driving sub-circuit 103. In some embodiments, the method for driving the pixel circuit further includes the following steps. In the third scanning period P3, the reset sub-circuit 101 of the first sub-pixel circuit inputs the voltage provided by the initial voltage terminal Vinit to the light-emitting device L in response to the scanning signal provided by the third scanning signal terminal. In the fourth scanning period P4, the reset sub-circuit 101 of the second sub-pixel circuit 200 inputs the voltage provided by the initial voltage terminal Vinit to the light-emitting device L in response to the scanning signal provided by the fourth scanning signal terminal. In the case where the pixel circuit further includes the third sub-pixel circuit 300 disposed in a third sub-pixel, in a fifth scanning period P5, a reset sub-circuit 101 of the third sub-pixel circuit 300 inputs the voltage provided by the initial voltage terminal Vinit to the light-emitting device L in response to a scanning signal provided by a fifth scanning signal terminal. In some embodiments, the method for driving the pixel circuit further includes the following step: in a light-emitting phase, a light-emitting control sub-circuit of the sub-pixel circuit closes a current path between a first power supply voltage terminal and a second power supply voltage terminal in response to an enable signal provided by an enable terminal, so that a driving current is transmitted to the light-emitting device. It will be noted that, in a case where the pixel circuit further includes a fourth sub-pixel circuit 400 disposed in a fourth sub-pixel, the methods for driving the second sub-pixel circuit 200, the third sub-pixel circuit 300, and the fourth sub-pixel circuit 400 are the same as the methods for driving the first sub-pixel circuit 100, the second sub-pixel circuit 200, and the third sub-pixel circuit 300. The method for driving any subsequent sub-pixel circuit follows by analogy and will not be repeated herein. In the embodiments of the present disclosure, for example, since two columns of sub-pixels share a single data line and every two rows of sub-pixels share a single enable signal line EM, the enable signal is controlled and output by a gate driver on array (GOA). In order to ensure that every two rows of sub-pixels emit light normally and simultaneously, generally, the sub-pixels of a first row and a second row emit light after the signals of the data terminal have been written into the sub-pixels of a third row and while the signals of the data terminal are being written into the sub-pixels of a fourth row, and so on. The method for driving the pixel circuit provided by the embodiments of the present disclosure has the same beneficial effects as those of the above-described pixel circuits, which will not be repeated herein.
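Steps S10 to S30 and their continuation by analogy form a pipelined schedule: in each period, one circuit is reset, the previous one is written and compensated, and the one before that has its anode reset. The Python sketch below (a hypothetical helper, not part of the claimed method) prints that schedule for four sub-pixel circuits:

    # Hedged model of the pipelined driving method (S10-S30 and onward).
    def drive_sequence(num_circuits: int):
        steps = []
        for period in range(1, num_circuits + 3):
            actions = []
            if period <= num_circuits:
                actions.append(f"reset driving sub-circuit of circuit {period} (Vinit via Rst1)")
            if 2 <= period <= num_circuits + 1:
                actions.append(f"write Data and compensate Vth of circuit {period - 1}")
            if 3 <= period <= num_circuits + 2:
                actions.append(f"reset anode of circuit {period - 2} (Vinit via Rst2)")
            steps.append((f"P{period}", actions))
        return steps

    for period, actions in drive_sequence(4):
        print(period, "->", "; ".join(actions))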
The foregoing descriptions are merely specific implementations of the present disclosure, but the protection scope of the present disclosure is not limited thereto. Any changes or replacements that a person skilled in the art could conceive of within the technical scope of the present disclosure shall be included in the protection scope of the present disclosure. Therefore, the protection scope of the present disclosure shall be subject to the protection scope of the claims. | 88,554 |
11862086 | DETAILED DESCRIPTION The advantages and features of the present disclosure and methods for accomplishing the same will be more clearly understood from the embodiments described below with reference to the accompanying drawings. However, the present disclosure is not limited to the following embodiments but may be implemented in various different forms. Rather, the present embodiments are provided to make the disclosure complete and to allow those skilled in the art to fully comprehend the scope of the present disclosure. The shapes, sizes, ratios, angles, numbers, and the like illustrated in the accompanying drawings for describing the embodiments of the present disclosure are merely examples, and the present disclosure is not limited thereto. Like reference numerals generally denote like elements throughout the present specification. Further, in describing the present disclosure, detailed descriptions of known related technologies may be omitted to avoid unnecessarily obscuring the subject matter of the present disclosure. The terms such as "comprising," "including," "having," and "consisting of" used herein are generally intended to allow other components to be added unless the terms are used with the term "only." Any reference to the singular may include the plural unless expressly stated otherwise. Components are interpreted to include an ordinary error range even if not expressly stated. When the position relation between two components is described using terms such as "on," "above," "below," and "next," one or more components may be positioned between the two components unless the terms are used with the term "immediately" or "directly." The terms "first," "second," and the like may be used to distinguish components from each other, but the functions or structures of the components are not limited by the ordinal numbers or component names placed in front of the components. The same reference numerals may refer to substantially the same elements throughout the present disclosure. The following embodiments can be partially or entirely combined with each other and can be linked and operated together in technically various ways. The embodiments can be carried out independently of or in association with each other. Each of the pixels may include a plurality of sub-pixels having different colors in order to reproduce the color of an image on a screen of the display panel. Each of the sub-pixels includes a transistor used as a switch element or a driving element. Such a transistor may be implemented as a TFT (Thin Film Transistor). A driving circuit of the display device writes pixel data of an input image to the pixels on the display panel. To this end, the driving circuit of the display device may include a data driving circuit configured to supply data signals to the data lines, a gate driving circuit configured to supply gate signals to the gate lines, and the like. In a display device of the present disclosure, the pixel circuit and the gate driving circuit may include a plurality of transistors. The transistors may be implemented as oxide thin film transistors (oxide TFTs) including an oxide semiconductor, low temperature polysilicon (LTPS) TFTs including low temperature polysilicon, or the like. In the embodiments, descriptions will be given based on an example in which the transistors of the pixel circuit and the gate driving circuit are implemented as n-channel oxide TFTs, but the present disclosure is not limited thereto.
Generally, a transistor is a three-electrode element including a gate, a source, and a drain. The source is the electrode that supplies carriers to the transistor; carriers start to flow from the source. The drain is the electrode through which carriers exit the transistor; in a transistor, carriers flow from the source to the drain. In the case of an n-channel transistor, since the carriers are electrons, the source voltage is lower than the drain voltage so that electrons can flow from the source to the drain, and the current therefore flows from the drain to the source. In the case of a p-channel transistor (p-channel metal-oxide semiconductor (PMOS)), since the carriers are holes, the source voltage is higher than the drain voltage so that holes can flow from the source to the drain; because holes flow from the source to the drain, the current also flows from the source to the drain. It should be noted that the source and the drain of a transistor are not fixed. For example, the source and the drain may be interchanged according to the applied voltage. Therefore, the disclosure is not limited by which electrode serves as the source or the drain; in the following description, the source and the drain of a transistor are referred to as a first electrode and a second electrode. A gate signal swings between a gate-on voltage and a gate-off voltage. The gate-on voltage is set to a voltage higher than the threshold voltage of the transistor, and the gate-off voltage is set to a voltage lower than the threshold voltage of the transistor. The transistor is turned on in response to the gate-on voltage and is turned off in response to the gate-off voltage. In the case of an n-channel transistor, the gate-on voltage may be a gate high voltage VGH or VEH, and the gate-off voltage may be a gate low voltage VGL or VEL. The gate-on voltages VGH and VEH may be the same as or different from each other, and the gate-off voltages VGL and VEL may be the same as or different from each other. Hereinafter, various embodiments of the present disclosure will be described in detail with reference to the accompanying drawings. In the following embodiments, the display device will be described focusing on an organic light-emitting display device, but the present disclosure is not limited thereto. Referring to FIGS. 1A and 1B, a display device according to an embodiment of the present disclosure includes a display panel 100, a display panel driver for writing pixel data to the pixels of the display panel 100, and a power supply 140 for generating power beneficial for driving the pixels and the display panel driver. The display panel 100 may be a display panel of a rectangular structure having a length in the X-axis direction, a width in the Y-axis direction, and a thickness in the Z-axis direction. The display panel 100 includes a pixel array that displays an input image on a screen. The pixel array includes a plurality of data lines 102, a plurality of gate lines 103 intersecting the data lines 102, and pixels arranged in a matrix form. The display panel 100 may further include power lines commonly connected to the pixels. In FIG. 5, the power lines may include a first power line VDDL to which a pixel driving voltage VDD is applied, a second power line INL to which an initialization voltage Vinit is applied, and a third power line REFL to which a reference voltage Vref is applied. The display panel 100 may further include a fourth power line to which a low-potential power supply voltage VSS is applied. 
The cross-sectional structure of the display panel 100 may include a circuit layer 12, a light emitting element layer 14, and an encapsulation layer 16 stacked on a substrate 10, as shown in FIG. 1B. The circuit layer 12 may include a TFT array including a pixel circuit connected to wirings such as a data line, a gate line, and a power line, a de-multiplexer array 112, a gate driver 120, and the like. The wirings and circuit elements of the circuit layer 12 may include a plurality of insulating layers, two or more metal layers separated with the insulating layers therebetween, and an active layer having a semiconductor material. All transistors formed in the circuit layer 12 may be implemented as n-channel oxide TFTs. The light emitting element layer 14 may include a light emitting element EL driven by a pixel circuit. The light emitting element EL may include a red (R) light emitting element, a green (G) light emitting element, and a blue (B) light emitting element. In another embodiment, the light emitting element layer 14 may include a white light emitting element and a color filter. The light emitting elements EL of the light emitting element layer 14 may be covered by a protective layer including an organic film and a passivation film. The encapsulation layer 16 covers the light emitting element layer 14 to seal the circuit layer 12 and the light emitting element layer 14. The encapsulation layer 16 may have a multilayered insulating structure in which an organic film and an inorganic film are alternately stacked. The inorganic film blocks the penetration of moisture and oxygen. The organic film planarizes the surface of the inorganic film. When the organic film and the inorganic film are stacked in multiple layers, the movement path of moisture or oxygen becomes longer compared to a single layer, so that penetration of moisture and oxygen affecting the light emitting element layer 14 can be effectively blocked. A touch sensor layer (not shown) may be formed on the encapsulation layer 16, and a polarizing plate or a color filter layer may be disposed thereon. The touch sensor layer may include capacitive type touch sensors that sense a touch input based on a change in capacitance before and after the touch input. The touch sensor layer may include metal wiring patterns and insulating layers forming the capacitance of the touch sensors. The insulating layers may insulate the portions where the metal wiring patterns intersect and may planarize the surface of the touch sensor layer. The polarizing plate may improve visibility and contrast ratio by converting the polarization of external light reflected by the metal of the touch sensor layer and the circuit layer. The polarizing plate may be implemented as a polarizing plate in which a linear polarizing plate and a phase delay film are bonded, or as a circular polarizing plate. A cover glass may be adhered to the polarizing plate. The color filter layer may include red, green, and blue color filters. The color filter layer may further include a black matrix pattern. The color filter layer may replace the polarizing plate by absorbing a part of the wavelength of light reflected from the circuit layer and the touch sensor layer, and may increase the color purity of an image reproduced in the pixel array. The pixel array includes a plurality of pixel lines L1 to Ln. Each of the pixel lines L1 to Ln includes one line of pixels arranged along the line direction X in the pixel array of the display panel 100. Pixels arranged in one pixel line share the gate lines 103. 
Sub-pixels arranged in the column direction Y along the data line direction share the same data line 102. One horizontal period 1H is the time obtained by dividing one frame period by the total number of pixel lines L1 to Ln. The display panel 100 may be implemented as a non-transmissive display panel or a transmissive display panel. The transmissive display panel may be applied to a transparent display device in which an image is displayed on a screen and the actual background is visible. The display panel may be manufactured as a flexible display panel. The flexible display panel may be implemented as an OLED panel using a plastic substrate. A pixel array and a light emitting device of the plastic OLED panel may be disposed on an organic thin film adhered to a back plate. The organic thin film may be disposed on the back plate of the plastic OLED panel. A pixel circuit and a light emitting device may be stacked on the organic thin film, and a touch sensor array may be formed thereon. The back plate blocks moisture permeation towards the organic thin film so that the pixel array is not exposed to humidity. The organic thin film may be a thin polyimide (PI) film substrate. A multi-layered buffer film may be formed of an insulating material (not shown) on the organic thin film. Lines of the pixel array may be formed on the organic thin film so as to supply power or signals applied to the pixel circuit and the touch sensor array. Each of the pixels 101 may be divided into a red sub-pixel, a green sub-pixel, and a blue sub-pixel to implement color. Each of the pixels may further include a white sub-pixel. Each of the sub-pixels includes a pixel circuit. Hereinafter, a pixel may be interpreted as having the same meaning as a sub-pixel. Each pixel circuit is connected to the data lines, the gate lines, and the power lines. The pixels may be arranged as real color pixels and pentile pixels. The pentile pixel may realize a higher resolution than the real color pixel by driving two sub-pixels having different colors as one pixel 101 through a preset or selected pixel rendering algorithm. The pixel rendering algorithm may compensate for insufficient color representation in each pixel with the color of light emitted from an adjacent pixel. Touch sensors may be disposed on the display panel 100. A touch input may be sensed using separate touch sensors or may be sensed through the pixels. The touch sensors may be disposed as an on-cell type or an add-on type on the screen of the display panel or implemented as in-cell type touch sensors embedded in the pixel array. The power supply 140 generates direct current (DC) power beneficial for driving the pixel array and the display panel driver of the display panel 100 by using a DC-DC converter. The DC-DC converter may include a charge pump, a regulator, a buck converter, a boost converter, and the like. The power supply 140 may adjust the level of a DC input voltage applied from a host system (not shown) and thereby generate DC voltages such as a gamma reference voltage VGMA, gate-on voltages VGH and VEH, gate-off voltages VGL and VEL, a pixel driving voltage VDD, a low-potential power supply voltage VSS, a reference voltage Vref, and an initialization voltage Vinit. The gamma reference voltage VGMA is supplied to a data driver 110. The gate-on voltages VGH and VEH and the gate-off voltages VGL and VEL are supplied to a gate driver 120. 
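As a worked example of the horizontal-period relation above (illustrative numbers only; the patent does not specify a line count or frame rate):

```latex
% One horizontal period is the frame period divided by the number of pixel lines.
% Assuming a hypothetical panel with n = 2400 pixel lines refreshed at f = 60 Hz:
1H = \frac{1}{f \cdot n} = \frac{1}{60 \times 2400} \approx 6.9\ \mu\text{s}
```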
The pixel driving voltage VDD, the low-potential power supply voltage VSS, the reference voltage Vref, and the initialization voltage Vinit are commonly supplied to the pixels. The reference voltage Vref and the initialization voltage Vinit may be generated from the data driver 110. The display panel driver writes pixel data of an input image to the pixels of the display panel 100 under the control of a timing controller (TCON) 130. The display panel driver includes the data driver 110 and the gate driver 120. The display panel driver may further include a de-multiplexer array 112 disposed between the data driver 110 and the data lines 102. The de-multiplexer array 112 sequentially connects the channels of the data driver 110 to the data lines 102 by using a plurality of de-multiplexers (DEMUX) to transfer the data voltages output from the data driver 110 to the data lines 102. The de-multiplexer array 112 may include a plurality of switch elements disposed on the display panel 100. When the de-multiplexer array 112 is disposed between the output terminals of the data driver 110 and the data lines 102, the number of channels of the data driver 110 may be reduced. The de-multiplexer array 112 may be omitted. The display panel driver may further include a touch sensor driver for driving the touch sensors. The touch sensor driver is omitted from FIG. 1A. The data driver and the touch sensor driver may be integrated into one drive integrated circuit (IC). In a mobile device or a wearable device, the timing controller 130, the power supply 140, the data driver 110, and the like may be integrated into one drive IC. The display panel driver may operate in a low-speed driving mode under the control of the timing controller 130. The low-speed driving mode may be set to reduce the power consumption of the display device when analysis of the input image shows that the input image has not changed for a preset or selected number of frames. In the low-speed driving mode, the power consumption of the display panel driver and the display panel 100 may be reduced by lowering the refresh rate of the pixels when still images are inputted for a predetermined or selected time or longer. The low-speed driving mode is not limited to the case where still images are inputted. For example, when the display device operates in a standby mode, or when a user command or an input image is not inputted to a display panel driving circuit for a predetermined or selected time or longer, the display panel driving circuit may operate in the low-speed driving mode. The data driver 110 generates a data voltage by converting the pixel data of an input image, received as a digital signal from the timing controller 130, with a gamma compensation voltage every frame period by using a digital-to-analog converter (DAC). The gamma reference voltage VGMA is divided into gamma compensation voltages for the respective gray scales through a voltage divider circuit. The gamma compensation voltage for each gray scale is provided to the DAC of the data driver 110. The data voltage is outputted through an output buffer in each of the channels of the data driver 110. The gate driver 120 may be implemented as a gate-in-panel (GIP) circuit formed directly on the display panel 100 together with the TFT array and wirings of the pixel array. The GIP circuit may be disposed in a bezel (BZ) area, which is a non-display area, of the display panel 100, or may be dispersedly disposed in the pixel array in which the input image is reproduced. 
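The gray-scale-to-voltage path described above (VGMA divided into per-gray-scale gamma compensation voltages, then selected by the DAC) can be sketched as follows. This is a simplified illustration, not the patent's implementation; the 2.2 gamma exponent, voltage values, and function names are assumptions:

```python
# Sketch of one data-driver channel: a divider derives a gamma compensation
# voltage for each gray scale from VGMA, and the DAC selects the tap that
# corresponds to the pixel-data value.

def gamma_taps(vgma, v_low, levels=256, gamma=2.2):
    """Divide VGMA into one compensation voltage per gray scale (assumed 2.2 curve)."""
    return [v_low + (vgma - v_low) * (g / (levels - 1)) ** (1.0 / gamma)
            for g in range(levels)]

def dac_output(gray, taps):
    """DAC: map an 8-bit pixel-data value to its gamma compensation voltage."""
    return taps[gray]

taps = gamma_taps(vgma=4.5, v_low=0.5)   # assumed voltage endpoints
vdata = dac_output(128, taps)            # data voltage for mid-gray
```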
The gate driver 120 sequentially outputs gate signals to the gate lines 103 under the control of the timing controller 130. The gate driver 120 may sequentially supply the gate signals to the gate lines 103 by shifting the gate signals using a shift register. In the organic light-emitting diode display, the gate signal may include a scan signal and a light emission control signal (hereinafter referred to as an “EM signal”). The scan signal includes a scan pulse swinging between the gate-on voltage VGH and the gate-off voltage VGL. The EM signal may include an emission control (EM) pulse swinging between the gate-on voltage VEH and the gate-off voltage VEL. The scan pulse is synchronized with the data voltage to select the pixels of the line to which data is to be written. The EM signal defines or controls the emission time of the pixels. For example, the pixels may begin emitting light at a rising edge of the EM signal and may cease emitting light at a falling edge of the EM signal. The gate driver 120 may include a first gate driver 121 and a second gate driver 122. The first gate driver 121 outputs the scan pulse in response to a start pulse and a shift clock from the timing controller 130 and shifts the scan pulse according to the shift clock timing. The second gate driver 122 outputs the EM pulse in response to the start pulse and the shift clock from the timing controller 130 and sequentially shifts the EM pulse according to the shift clock. The timing controller 130 receives digital video data DATA of an input image, and a timing signal synchronized therewith, from the host system. The timing signal may include a vertical synchronization signal Vsync, a horizontal synchronization signal Hsync, a clock CLK, a data enable signal DE, and the like. Because the vertical period and the horizontal period can be known by counting the data enable signal DE, the vertical synchronization signal Vsync and the horizontal synchronization signal Hsync may be omitted. The data enable signal DE has a cycle of one horizontal period (1H). The host system may be one of a television (TV) system, a tablet computer, a notebook computer, a navigation system, a personal computer (PC), a home theater system, a mobile device, a wearable device, and a vehicle system. The host system may scale an image signal from a video source to fit the resolution of the display panel 100 and transmit it to the timing controller 130 together with the timing signal. The timing controller 130 lowers the frame rate (or frequency) at which pixel data is written to the pixels in the low-speed driving mode compared to the normal driving mode. For example, a data refresh frame in which pixel data is written to the pixels in the normal driving mode may occur at a frequency of 60 Hz or higher, for example, at any one refresh rate of 60 Hz, 120 Hz, and 144 Hz, and the data refresh frame (DRF) in the low-speed driving mode may occur at a refresh rate of a lower frequency than that of the normal driving mode. In order to lower the refresh rate of the pixels in the low-speed driving mode, the timing controller 130 may lower the frame frequency to a frequency between 1 Hz and 30 Hz and thereby lower the driving frequency of the display panel driver. 
Based on the timing signals Vsync, Hsync, and DE received from the host system, the timing controller 130 generates a data timing control signal for controlling the operation timing of the data driver 110, MUX signals MUX1 and MUX2 for controlling the operation timing of the de-multiplexer array 112, and a gate timing control signal for controlling the operation timing of the gate driver 120. By controlling the operation timing of the display panel driver, the timing controller 130 synchronizes the data driver 110, the de-multiplexer array 112, the touch sensor driver, and the gate driver 120. The gate timing control signal outputted from the timing controller 130 may be inputted to the gate driver 120 through a level shifter (not shown). The level shifter may receive the gate timing control signal, generate a start signal and a shift clock swinging between the gate-on voltages VGH and VEH and the gate-off voltages VGL and VEL, and supply them to the gate driver 120. The timing controller 130 may control the power supply 140 to vary the output voltage of the power supply 140 according to the accumulated driving time of the pixels 101. For example, based on the result of measuring a reliability characteristic of positive bias temperature stress (PBTS) for the transistors constituting the pixel circuit before product shipment, the shift amount of the threshold voltage Vth according to the accumulated driving time of the pixels may be derived. The timing controller 130 may have a look-up table (LUT) in which the shift amount of the threshold voltage according to the accumulated driving time of the switch element and the corresponding voltage compensation values are preset or selected. The timing controller 130 may provide a voltage compensation value for compensating for the shift amount of the threshold voltage according to the accumulated driving time of the pixels to the power supply 140, based on the data stored in the look-up table. In this case, the power supply 140 may change at least one of the gamma reference voltage VGMA, the gate-on voltages VGH and VEH, and the gate-off voltages VGL and VEL according to the voltage compensation value from the timing controller 130. The data voltage Vdata outputted from the data driver 110 may be changed according to the gamma reference voltage VGMA. The voltages of the scan pulse and the EM pulse outputted from the gate driver 120 may be changed according to the gate-on voltages VGH and VEH and the gate-off voltages VGL and VEL. Due to device characteristic deviations and process deviations caused in the manufacturing process of the display panel 100, there may be differences in the electrical characteristics of the driving element among pixels, and such differences may increase as the driving time of the pixels elapses. In order to compensate for differences in the electrical characteristics of the driving element among pixels, an internal compensation technique or an external compensation technique may be applied to the organic light-emitting diode display. The internal compensation technique samples the threshold voltage of the driving element for each sub-pixel by using an internal compensation circuit implemented in each pixel circuit and compensates the gate-source voltage (Vgs) of the driving element by the threshold voltage. The external compensation technique senses in real time a current or voltage of the driving element that varies according to the electrical characteristics of the driving element by using an external compensation circuit. 
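A minimal sketch of the look-up-table compensation described above, assuming linear interpolation between table entries (the table values, units, and names below are hypothetical; the patent states only that compensation values per accumulated driving time are stored in the timing controller):

```python
import bisect

# Hypothetical LUT: accumulated driving time (hours) -> compensation value (V),
# derived in the patent from PBTS reliability measurements before shipment.
LUT_HOURS = [0, 1000, 5000, 10000, 20000]
LUT_COMP_V = [0.00, 0.05, 0.12, 0.20, 0.30]

def compensation_voltage(accum_hours):
    """Interpolate the voltage compensation value for the accumulated driving time."""
    if accum_hours <= LUT_HOURS[0]:
        return LUT_COMP_V[0]
    if accum_hours >= LUT_HOURS[-1]:
        return LUT_COMP_V[-1]
    i = bisect.bisect_right(LUT_HOURS, accum_hours)
    t0, t1 = LUT_HOURS[i - 1], LUT_HOURS[i]
    v0, v1 = LUT_COMP_V[i - 1], LUT_COMP_V[i]
    return v0 + (v1 - v0) * (accum_hours - t0) / (t1 - t0)

# The timing controller would hand this value to the power supply, which may then
# shift VGMA, VGH/VEH, or VGL/VEL accordingly (7.0 V is an assumed nominal VGH).
vgh_adjusted = 7.0 + compensation_voltage(6200)
```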
The external compensation technique compensates for the deviation (or variation) of the electrical characteristics of the driving element in each pixel in real time by modulating the pixel data (digital data) of the input image by the electrical characteristic deviation (or variation) of the driving element sensed for each pixel. The display panel driver may drive the pixels using the external compensation technique and/or the internal compensation technique. The pixel circuit may be implemented as a circuit to which the internal compensation circuit is applied, for example, the circuits shown in FIGS. 5 to 10. FIG. 2 is a circuit diagram illustrating a pixel circuit according to one embodiment of the present disclosure. Referring to FIG. 2, the pixel circuit includes a light emitting element EL, a driving element DT for driving the light emitting element EL, a first switch element T1 connected between a first gate electrode G1 and a first electrode D of the driving element DT, and a second switch element T2 connected between a second electrode S of the driving element DT and the light emitting element EL. The driving element DT and the switch elements T1 and T2 may be implemented as n-channel oxide TFTs. The light emitting element EL may be implemented as an OLED. The OLED includes an organic compound layer formed between an anode electrode and a cathode electrode. The organic compound layer may include, but is not limited to, a hole injection layer (HIL), a hole transport layer (HTL), an emission layer (EML), an electron transport layer (ETL), and an electron injection layer (EIL). When a voltage is applied to the anode and cathode electrodes of the OLED, holes passing through the hole transport layer (HTL) and electrons passing through the electron transport layer (ETL) move to the emission layer (EML) to form excitons, and thus visible light is emitted from the emission layer (EML). The OLED used as the light emitting element EL may have a tandem structure in which a plurality of emission layers are stacked. The OLED of the tandem structure can improve the luminance and lifespan of the pixels. The driving element DT may be a MOSFET with a double gate structure including a first gate electrode G1 and a second gate electrode G2. The second gate electrode G2 may be a body electrode or a bottom gate electrode. The first gate electrode G1 and the second gate electrode G2 may overlap each other with a semiconductor active pattern therebetween. A predetermined or selected voltage, for example, an initialization voltage Vinit to be described later, may be applied to the second gate electrode G2. A voltage Vbs between the second gate electrode G2 of the driving element DT and the second electrode of the driving element may shift the threshold voltage of the driving element DT to a desired voltage. The first electrode may be a drain electrode, and the second electrode may be a source electrode. Hereinafter, the voltage between the second gate electrode G2 of the driving element DT and the second electrode of the driving element is abbreviated as “Vbs.” The first switch element T1 includes a first electrode connected to the first electrode D of the driving element DT, a second electrode connected to the first gate electrode G1 of the driving element DT, and a gate electrode to which a scan pulse is applied. The first switch element T1 is turned on in response to the gate-on voltage VGH of the scan pulse and is turned off according to the gate-off voltage VGL of the scan pulse. 
When the first switch element T1 is turned on, the driving element DT operates as a diode because the first gate electrode G1 and the first electrode are connected. When the first switch element T1 is turned off, the first gate electrode G1 and the first electrode D of the driving element DT are separated. The second switch element T2 includes a first electrode connected to the second electrode S of the driving element DT, a second electrode connected to the anode electrode of the light emitting element EL, and a gate electrode to which an EM pulse is applied. The second switch element T2 is turned on in response to the gate-on voltage VEH of the EM pulse and is turned off according to the gate-off voltage VEL of the EM pulse. When the second switch element T2 is turned on, a current path is formed between the driving element DT and the light emitting element EL to supply current to the light emitting element EL. When the second switch element T2 is turned off, the current path between the driving element DT and the light emitting element EL is cut off. The pixel circuit may further include a first capacitor C1, a second capacitor C2, a third switch element T3, and a fourth switch element T4. The first capacitor C1 includes a first electrode connected to the data line and a second electrode connected to the first gate electrode G1 of the driving element DT and may supply the data voltage Vdata of the pixel data to the first gate electrode G1 of the driving element DT. The second capacitor C2 includes a first electrode connected to the power line to which the initialization voltage Vinit is applied, and a second electrode connected to the second electrode of the driving element DT. The first electrode of the first capacitor C1 and the first electrode of the second capacitor C2 are connected. The third switch element T3 supplies the initialization voltage Vinit to the first and second capacitors C1 and C2. The fourth switch element T4 supplies the data voltage Vdata to the first and second capacitors C1 and C2. The first, third, and fourth switch elements T1, T3, and T4 are turned on in response to the gate-on voltage VGH of the scan pulse and turned off in response to the gate-off voltage VGL of the scan pulse; the second switch element T2 is controlled by the EM pulse as described above. In FIG. 3, the horizontal axis represents the gate-source voltage (Vgs [V]) of the driving element DT, and the vertical axis represents the drain-source current Ids [A] of the driving element DT. When sensing the threshold voltage of the driving element DT, Vbs may shift the threshold voltage of the driving element DT into the range in which it can be sensed, as shown in FIG. 3. Therefore, it is possible to accurately sense the threshold voltage of the driving element DT even if the shift of the threshold voltage of the driving element DT exceeds that range. For example, if the threshold voltage of the driving element DT is shifted to a voltage of 0 V or less, the threshold voltage of the driving element DT cannot be sensed. However, because the threshold voltage of the driving element DT can be shifted to a positive voltage greater than 0 V by applying Vbs to the driving element DT, the threshold voltage of the driving element DT can be sensed. The degree of the threshold voltage shift of the driving element DT depends on Vbs, a parasitic capacitance (Cgi in FIG. 4) connected to the first gate electrode G1, and a parasitic capacitance (Cbuf in FIG. 4) connected to the second gate electrode G2, so that it is possible to shift the threshold voltage of the driving element to a desired voltage. 
When the reference voltage Vref is applied to the first gate electrode G1 of the driving element DT and the initialization voltage Vinit is applied to the second gate electrode G2, the voltage of the first gate electrode G1 may be Vref+Vth in FIG. 2. Vref is the reference voltage, and Vth is the threshold voltage of the driving element DT shifted by Vbs. In this case, if Vref>Vinit, the threshold voltage of the driving element DT may be shifted to a positive voltage. FIG. 4 is a cross-sectional diagram schematically illustrating a cross-sectional structure of the driving element DT in the display panel 100. Referring to FIG. 4, a first metal pattern may be formed on a substrate of the display panel 100. The first metal pattern may include the second gate electrode G2 of the driving element DT. A first insulating layer BUF may be formed on the substrate to cover the first metal pattern. A semiconductor layer may be formed on the first insulating layer BUF. The semiconductor layer includes the semiconductor active pattern ACT of the driving element DT. A second insulating layer GI may be formed on the first insulating layer BUF to cover the semiconductor pattern. A second metal pattern may be formed on the second insulating layer GI. The second metal pattern may include the first gate electrode G1 of the driving element DT. In FIG. 4, “Cgi” is the capacitance formed between the first gate electrode G1 and the semiconductor active pattern ACT in the driving element DT, and “Cbuf” is the capacitance connected between the second gate electrode G2 and the semiconductor active pattern ACT in the driving element DT. In order to increase the effect of Vbs applied to the driving element DT, the capacitance of Cbuf may be made greater than the capacitance of Cgi by setting the thickness tbuf of the first insulating layer BUF to be smaller than the thickness tgi of the second insulating layer GI. FIG. 5 is a circuit diagram illustrating a pixel circuit according to another embodiment of the present disclosure. The pixel circuit illustrated in FIG. 5 includes an internal compensation circuit that samples the threshold voltage of the driving element DT and compensates for a variation in the threshold voltage of the driving element DT. FIG. 6 is a waveform diagram illustrating a method of driving a pixel circuit according to one embodiment of the present disclosure. Referring to FIGS. 5 and 6, the pixel circuit includes a light emitting element EL, a driving element DT, first and second capacitors C1 and C2, and first, second, third, fourth, fifth, sixth, and seventh switch elements T1 to T7. The driving element DT and the switch elements T1 to T7 may be implemented as n-channel oxide TFTs. To this pixel circuit are supplied direct-current voltages such as a pixel driving voltage VDD, a low-potential power supply voltage VSS, a reference voltage Vref, and an initialization voltage Vinit; a data voltage Vdata that varies according to the gray scale of the pixel data; scan pulses SC1, SC2, and SC3; and EM pulses EM1 and EM2. The voltages of the scan pulses SC1, SC2, and SC3 and the EM pulses EM1 and EM2 swing between the gate-on voltages VGH and VEH and the gate-off voltages VGL and VEL, respectively. A voltage relationship commonly applied to the pixels may be set as VDD>Vref>Vinit>VSS. The data voltage Vdata may be generated as a gamma compensation voltage selected according to the gray scale of the pixel data by the data driver 110, in a voltage range lower than the pixel driving voltage VDD and higher than the low-potential power supply voltage VSS. 
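Collecting the back-bias quantities that appear in this passage and in the step-by-step description below (a restatement of relations already given in the text, not new results):

```latex
% Gate voltage of the diode-connected driving element and the shifted threshold:
V_{G1} = V_{ref} + V_{th}, \qquad V_{th} = V_{th0} + \alpha,
\qquad \alpha = \beta\,(V_{ref} - V_{init}), \qquad \beta = \frac{C_{buf}}{C_{gi}}
% Setting t_{buf} < t_{gi} makes C_{buf} > C_{gi} (beta > 1), and Vref > Vinit
% makes alpha positive, so the threshold is shifted to a positive voltage.
```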
The initialization voltage Vinit may be set to a voltage equal to or less than the threshold voltage of the light emitting element EL. The reference voltage Vref may be set to a voltage higher than the initialization voltage Vinit so that a negative back-bias is applied to the driving element DT in the sampling step SMPL. The gate-on voltages VGH and VEH may be set to be higher than the pixel driving voltage VDD. The gate-off voltages VGL and VEL may be set to be lower than the low-potential power supply voltage VSS. The scan pulses SC1, SC2, and SC3 may include a first scan pulse SC1 applied to a first gate line GL1, a second scan pulse SC2 applied to a second gate line GL2, and a third scan pulse SC3 applied to a third gate line GL3. The EM pulses EM1 and EM2 may include a first EM pulse EM1 applied to a fourth gate line GL4 and a second EM pulse EM2 applied to a fifth gate line GL5. The driving period of the pixel circuit may be divided into, or include, an initialization step INIT in which the pixel circuit is initialized, a sampling step SMPL in which the threshold voltage Vth of the driving element DT is sampled, an addressing step ADDR in which the data voltage Vdata is charged and the pixel data is written, and a light emission step EMIS in which the light emitting element EL emits light with a brightness corresponding to the gray scale of the pixel data. In FIG. 6, “(N−1)th FR.” denotes an (N−1)th frame period, and “Nth FR.” denotes an Nth frame period. The first scan pulse SC1 may be the gate-on voltage VGH in the addressing step ADDR. The first scan pulse SC1 may be the gate-off voltage VGL in the initialization step INIT, the sampling step SMPL, and the light emission step EMIS. The first scan pulse SC1 may be generated as a pulse equal to or shorter than one horizontal period 1H, synchronized with the data voltage Vdata of the pixel data. The data voltage Vdata is supplied to the pixel circuit through the data line DL in the addressing step ADDR in synchronization with the first scan pulse SC1. The second scan pulse SC2 may rise to the gate-on voltage VGH prior to the third scan pulse SC3 and fall to the gate-off voltage VGL prior to a falling edge of the third scan pulse SC3. The second scan pulse SC2 may be the gate-on voltage VGH in the initialization step INIT and the sampling step SMPL. The second scan pulse SC2 may be the gate-off voltage VGL in the addressing step ADDR and the light emission step EMIS. The third scan pulse SC3 may be generated as the gate-on voltage VGH in the sampling step SMPL and the addressing step ADDR. In the addressing step ADDR, a gate-on voltage section of the third scan pulse SC3 may overlap with a gate-on voltage section of the first scan pulse SC1. The third scan pulse SC3 may rise to the gate-on voltage VGH after a rising edge of the second scan pulse SC2 and then fall to the gate-off voltage VGL after a falling edge of the second scan pulse SC2. The third scan pulse SC3 may be the gate-off voltage VGL in the initialization step INIT and the light emission step EMIS. The first EM pulse EM1 may be generated as the gate-on voltage VEH in the initialization step INIT and generated as the gate-on voltage VEH during at least a partial period of the light emission step EMIS. The first EM pulse EM1 may be the gate-off voltage VEL in the sampling step SMPL and the addressing step ADDR. The first EM pulse EM1 may fall to the gate-off voltage VEL after a falling edge of the second EM pulse EM2 and rise to the gate-on voltage VEH before a rising edge of the second EM pulse EM2. 
The second EM pulse EM2 may be generated as the gate-on voltage VEH during at least a partial period of the light emission step EMIS. The second EM pulse EM2 may be the gate-off voltage VEL in the initialization step INIT, the sampling step SMPL, and the addressing step ADDR. The light emitting element EL may be implemented as an OLED. The anode electrode of the light emitting element EL may be connected to a fourth node n4, and the low-potential power supply voltage VSS may be applied to the cathode electrode of the light emitting element EL. The first capacitor C1 may be connected between the second node n2 and the fifth node n5. The first capacitor C1 stores the threshold voltage Vth of the driving element DT in the sampling step SMPL. In the addressing step ADDR, the data voltage Vdata is transferred to the first gate electrode G1 of the driving element DT through the first capacitor C1. The second capacitor C2 is connected between the third node n3 and the fifth node n5. The second capacitor C2 stores the second electrode voltage, e.g., the source voltage, of the driving element DT at the beginning of the light emission step EMIS and maintains the gate-source voltage Vgs of the driving element during the light emission step EMIS. The driving element DT may be a MOSFET having a double gate structure. The driving element DT includes a first gate electrode connected to the second node n2, a second gate electrode connected to the fourth node n4, a first electrode connected to the first node n1, and a second electrode connected to the third node n3. As shown in FIG. 4, the first and second gate electrodes of the driving element DT may overlap each other with the semiconductor active pattern therebetween. The first switch element T1 includes a first electrode connected to the first node n1, a second electrode connected to the second node n2, and a gate electrode to which the second scan pulse SC2 is applied. The first switch element T1 is turned on in the initialization step INIT and the sampling step SMPL in response to the gate-on voltage VGH of the second scan pulse SC2 and connects the first node n1 and the second node n2. When the first switch element T1 is turned on, the driving element DT operates as a diode because the first gate electrode G1 and the first electrode are connected. The second switch element T2 includes a first electrode connected to the third node n3, a second electrode connected to the fourth node n4, and a gate electrode to which the second EM pulse EM2 is applied. The second switch element T2 is turned on during at least a partial period of the light emission step EMIS in response to the gate-on voltage VEH of the second EM pulse EM2 and forms a current path between the driving element DT and the light emitting element EL. In the initialization step INIT, the sampling step SMPL, and the addressing step ADDR, in which the second switch element T2 is in an off state, the current path between the driving element DT and the light emitting element EL is cut off, and thus the light emitting element EL does not emit light. The third switch element T3 includes a first electrode connected to the second power line INL to which the initialization voltage Vinit is applied, a second electrode connected to the fifth node n5, and a gate electrode to which the second scan pulse SC2 is applied. 
The third switch element T3 is turned on in the initialization step INIT and the sampling step SMPL in response to the gate-on voltage VGH of the second scan pulse SC2 and supplies the initialization voltage Vinit to the fifth node n5. In the addressing step ADDR and the light emission step EMIS, in which the third switch element T3 is turned off, the current path between the second power line INL and the fifth node n5 is cut off. The fourth switch element T4 includes a first electrode connected to the data line DL to which the data voltage Vdata is applied, a second electrode connected to the fifth node n5, and a gate electrode to which the first scan pulse SC1 is applied. The fourth switch element T4 is turned on in the addressing step ADDR in response to the gate-on voltage VGH of the first scan pulse SC1 and supplies the data voltage Vdata to the fifth node n5. In the initialization step INIT, the sampling step SMPL, and the light emission step EMIS, in which the fourth switch element T4 is turned off, the current path between the data line DL and the fifth node n5 is cut off. The fifth switch element T5 includes a first electrode connected to the first power line VDDL to which the pixel driving voltage VDD is applied, a second electrode connected to the first node n1, and a gate electrode to which the first EM pulse EM1 is applied. The fifth switch element T5 is turned on in the initialization step INIT and the light emission step EMIS in response to the gate-on voltage VEH of the first EM pulse EM1 and supplies the pixel driving voltage VDD to the first node n1. In the sampling step SMPL and the addressing step ADDR, in which the fifth switch element T5 is turned off, the current path between the first power line VDDL and the first node n1 is cut off. The sixth switch element T6 includes a first electrode connected to the third node n3, a second electrode connected to the third power line REFL to which the reference voltage Vref is applied, and a gate electrode to which the third scan pulse SC3 is applied. The sixth switch element T6 is turned on in the sampling step SMPL and the addressing step ADDR in response to the gate-on voltage VGH of the third scan pulse SC3 and supplies the reference voltage Vref to the third node n3. In the initialization step INIT and the light emission step EMIS, in which the sixth switch element T6 is turned off, the current path between the third power line REFL and the third node n3 is cut off. The seventh switch element T7 includes a first electrode connected to the second power line INL to which the initialization voltage Vinit is applied, a second electrode connected to the fourth node n4, and a gate electrode to which the third scan pulse SC3 is applied. The seventh switch element T7 is turned on in the sampling step SMPL and the addressing step ADDR in response to the gate-on voltage VGH of the third scan pulse SC3 and supplies the initialization voltage Vinit to the fourth node n4. In the initialization step INIT and the light emission step EMIS, in which the seventh switch element T7 is turned off, the current path between the second power line INL and the fourth node n4 is cut off. In the present disclosure, the sampling step SMPL and the addressing step ADDR may be separated by applying the reference voltage Vref to the third node n3 to sample the threshold voltage Vth of the driving element DT in the sampling step SMPL and applying the data voltage Vdata in the addressing step ADDR. 
As a result, according to the present disclosure, the threshold voltage Vth of the driving element DT can be accurately sensed by ensuring a sufficiently long time, for example, two or more horizontal periods, for the sampling step SMPL, and thereby the shift of the threshold voltage Vth can be compensated. Hereinafter, a step-by-step driving method of the pixel circuit will be described in detail with reference to FIGS. 7 to 10. FIG. 7 is a circuit diagram illustrating the initialization step INIT of the pixel circuit shown in FIG. 5. Referring to FIG. 7, in the initialization step INIT, the second scan pulse SC2 and the first EM pulse EM1 are generated as the gate-on voltages VGH and VEH, and the other gate signals SC1, SC3, and EM2 are the gate-off voltages VGL and VEL. In the initialization step INIT, the second, fourth, sixth, and seventh switch elements T2, T4, T6, and T7 are turned off. Therefore, in the initialization step INIT, the first, third, and fifth switch elements T1, T3, and T5 and the driving element DT are turned on. In this case, the first gate electrode and the first electrode of the driving element DT are connected in a diode connection. In the initialization step INIT, the voltages of the first and second nodes n1 and n2 are initialized to the pixel driving voltage VDD, and the voltage of the third node n3 is changed to VDD−Vth0. Here, Vth0 is the initial threshold voltage when Vbs is not applied to the driving element DT. The voltage of the fifth node n5 is the initialization voltage Vinit. The voltage of the fourth node n4 is maintained at the initialization voltage Vinit applied in the previous frame. FIG. 8 is a circuit diagram illustrating the sampling step SMPL of the pixel circuit shown in FIG. 5. Referring to FIG. 8, in the sampling step SMPL, the third scan pulse SC3 is inverted to the gate-on voltage VGH, and the first EM pulse EM1 is inverted to the gate-off voltage VEL. In the sampling step SMPL, the second scan pulse SC2 maintains the gate-on voltage VGH. In the sampling step SMPL, the second and third scan pulses SC2 and SC3 are the gate-on voltage VGH, and the other gate signals SC1, EM1, and EM2 are the gate-off voltages VGL and VEL. Therefore, in the sampling step SMPL, the first, third, sixth, and seventh switch elements T1, T3, T6, and T7 and the driving element DT are turned on. In the sampling step SMPL, the initialization voltage Vinit is applied to the second gate electrode G2 of the driving element DT through the turned-on seventh switch element T7, and the reference voltage Vref, higher than the initialization voltage Vinit, is applied to the second electrode of the driving element DT through the turned-on sixth switch element T6. Therefore, Vbs is applied to the driving element DT, so that the threshold voltage of the driving element DT can be shifted to a positive voltage higher than zero. In the sampling step SMPL, the voltages of the first and second nodes n1 and n2 are changed to Vref+Vth0+α. Here, α is β(Vref−Vinit), and β is Cbuf/Cgi. The voltage of the third node n3 is the reference voltage Vref, and the voltages of the fourth and fifth nodes n4 and n5 are maintained at the initialization voltage Vinit. FIG. 9 is a circuit diagram illustrating the addressing step ADDR of the pixel circuit shown in FIG. 5. Referring to FIG. 9, in the addressing step ADDR, the first scan pulse SC1 synchronized with the data voltage Vdata of the pixel data is generated as the gate-on voltage VGH. 
In the addressing step ADDR, the third scan pulse SC3 maintains the gate-on voltage VGH and is then inverted to the gate-off voltage VGL. In the addressing step ADDR, the first EM pulse EM1 maintains the gate-off voltage VEL and is then inverted to the gate-on voltage VEH after the falling edge of the first scan pulse SC1. In the addressing step ADDR, the second scan pulse SC2 is inverted to the gate-off voltage VGL. In the addressing step ADDR, the voltages of the first and second EM pulses EM1 and EM2 may be the gate-off voltage VEL. Therefore, in the addressing step ADDR, the fourth, sixth, and seventh switch elements T4, T6, and T7 and the driving element DT are turned on, while the first switch element T1 is turned off by the gate-off voltage of the second scan pulse SC2. In the addressing step ADDR, the voltage of the first node n1 is maintained at Vref+Vth0+α, and the voltage of the second node n2 is changed to Vref+Vth0+α+C′(Vdata−Vinit). Here, C′ may be expressed as C1/(C1+Cpar). “Cpar” is a parasitic capacitance connected to the first gate electrode G1 of the driving element DT. When Cpar is 0, C′ becomes 1, so the data transfer rate is high. The higher the Cpar, the lower the data transfer rate. The voltage of the third node n3 is the reference voltage Vref, the voltage of the fourth node n4 is maintained at the initialization voltage Vinit, and the voltage of the fifth node n5 is changed to the data voltage Vdata supplied through the fourth switch element T4. FIG. 10 is a circuit diagram illustrating the light emission step EMIS of the pixel circuit shown in FIG. 5. Referring to FIG. 10, in the light emission step EMIS, the voltages of the scan pulses SC1, SC2, and SC3 are the gate-off voltage VGL. The first and second EM pulses EM1 and EM2 are generated as the gate-on voltage VEH during at least a partial period of the light emission step EMIS. Therefore, in the light emission step EMIS, the driving element DT and the second and fifth switch elements T2 and T5 are turned on, and the first, third, fourth, sixth, and seventh switch elements T1, T3, T4, T6, and T7 are turned off. At this time, Vbs is not applied to the driving element DT, and a current is supplied to the light emitting element EL according to the gate-source voltage Vgs of the driving element DT, so that the light emitting element EL can be turned on. In the light emission step EMIS, the current Ioled flowing through the light emitting element EL is k[(Vref−Vinit)+C′(Vdata−Vref)±(Vth0+α−Vth0)]². Here, k is a constant value determined according to the mobility and parasitic capacitance of the driving element DT. Assuming the condition C′=1 by ignoring the parasitic capacitance of the second node n2, Ioled may be k[(Vdata−Vinit)+α]². During the light emission step EMIS, the initialization voltage Vinit applied to the second gate electrode of the driving element DT is substantially the same as the source voltage of the driving element DT. For this reason, there is no shift in the threshold voltage of the driving element DT due to the voltage of the second gate electrode of the driving element DT in the light emission step EMIS. FIG. 11 is a diagram illustrating the refresh rate in the normal driving mode and the low-speed driving mode. FIG. 12 is a waveform diagram illustrating signals applied to a pixel circuit in the normal driving mode and the low-speed driving mode. In FIG. 11, “fx” indicates the x-th frame period. Referring to FIGS. 11 and 12, the frequency of the data refresh frame in which pixel data is written into the pixel circuit is set to be lower in the low-speed driving mode than in the normal driving mode. 
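The node-voltage expressions given for the sampling and addressing steps combine into the emission current as follows (rearranging formulas already stated above; the square-law form of the drain current is the standard assumption behind the constant k):

```latex
% Gate voltage after addressing:
V_{n2} = V_{ref} + V_{th0} + \alpha + C'\,(V_{data} - V_{init}),
\qquad C' = \frac{C_1}{C_1 + C_{par}}
% Emission current, as given in the text:
I_{oled} = k\left[(V_{ref}-V_{init}) + C'(V_{data}-V_{ref}) \pm (V_{th0}+\alpha-V_{th0})\right]^2
% With C' = 1 (Cpar = 0), the sampled threshold Vth0 cancels:
I_{oled} = k\left[(V_{data}-V_{init}) + \alpha\right]^2
```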
The driving time of the pixel circuit may be divided into the initialization step INIT, the sampling step SMPL, the addressing step ADDR, and the light emission step EMIS in every frame of the normal driving mode and in the data refresh frame of the low-speed driving mode. The low-speed driving mode may include one or more anode reset frames (ARFs) allocated after the data refresh frame. In the anode reset frame (ARF), the driving time of the pixel circuit may be divided into the sampling step SMPL and the light emission step EMIS without the initialization step INIT. At least one of the anode reset frames (ARFs) may further include the addressing step ADDR. The timing controller 130 lowers the frame rate (frequency) at which pixel data is written to the pixels in the low-speed driving mode compared to the normal driving mode. For example, the data refresh frame (DRF) in which pixel data is written to the pixels in the normal driving mode may occur at a frequency of 60 Hz or higher, for example, at any one refresh rate of 60 Hz, 120 Hz, and 144 Hz, and the data refresh frame (DRF) in the low-speed driving mode may occur at a refresh rate of a lower frequency than that of the normal driving mode. When the refresh rate of the low-speed driving mode is 1 Hz, one data refresh frame (DRF) is allocated per second, and the remaining frames of the 60 frames may be anode reset frames (ARFs). During the anode reset frame (ARF) of the low-speed driving mode, the source drive IC in which the data driver 110 is integrated does not output a data voltage and thus does not generate power consumption. During the anode reset frame (ARF), the reference voltage Vref is applied to the third node n3 of each of the sub-pixels and thereby resets the Vgs of the driving element DT stored in the previous data refresh frame (DRF). Therefore, in the low-speed driving mode, the luminance of the sub-pixels is not reduced during the anode reset frame (ARF), so that flicker is not recognized. The second scan pulse SC2 is not generated in the anode reset frame (ARF) of the low-speed driving mode, the second gate line GL2 maintains the gate-off voltage VGL, and the other gate pulses SC1, SC3, EM1, and EM2 may be generated substantially the same as in the normal driving mode. FIG. 13 is a waveform diagram illustrating a method of driving a pixel circuit according to another embodiment of the present disclosure. FIG. 14 is a circuit diagram illustrating a reset step of a pixel circuit. Referring to FIGS. 13 and 14, the reset step RST may be set prior to the initialization step INIT. In the reset step RST, the third scan pulse SC3 is generated as the gate-on voltage VGH, and the other gate signals SC1, SC2, EM1, and EM2 are the gate-off voltages VGL and VEL. Therefore, in the reset step RST, the sixth and seventh switch elements T6 and T7 are turned on, so that residual charges accumulated in the anode electrode of the light emitting element EL are discharged and the charges of the capacitors C1 and C2 are discharged. As a result, the present disclosure can reset the voltages charged in the capacitors C1 and C2 and in the capacitor of the OLED in the previous frame, thereby preventing voltage fluctuation due to the influence of the previous voltage before sampling starts. A hold step HOLD may be set between the reset step RST and the initialization step INIT. In the hold step HOLD, all the gate signals SC1, SC2, SC3, EM1, and EM2 are generated as the gate-off voltages, so that the main nodes of the pixel circuit can be floating. 
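Returning to the refresh-rate example above, the allocation of data refresh frames (DRF) and anode reset frames (ARF) in the low-speed driving mode can be sketched as follows (a simplified illustration; the 60-frame base rate follows the example in the text, and the even spacing is an assumption):

```python
def frame_schedule(refresh_hz, base_hz=60):
    """Label one second of frames: a DRF every base_hz/refresh_hz frames,
    anode reset frames (ARF) in between."""
    assert base_hz % refresh_hz == 0, "assume the base rate divides evenly"
    period = base_hz // refresh_hz
    return ["DRF" if i % period == 0 else "ARF" for i in range(base_hz)]

# At a 1 Hz refresh rate, one DRF is allocated per second and the remaining
# 59 frames are ARFs, during which the source drive IC outputs no data voltage.
print(frame_schedule(1).count("ARF"))  # -> 59
```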
In the pixel circuit, the first switch element T1 places the driving element DT in a diode connection in the sampling step SMPL in response to the second scan pulse SC2. At this time, the threshold voltage Vth of the driving element DT is sampled at the second node n2. When the first switch element T1 is turned off due to the change in the gate voltage at the falling edge of the second scan pulse SC2, a kickback voltage is generated, as shown in FIG. 15, in the voltage of the node n2 to which the first gate electrode of the driving element DT is connected. In FIG. 15, ‘Vn2’ is the voltage of the second node n2, and ‘Vn4’ is the voltage of the fourth node n4. A variation in the kickback voltage of the second node voltage Vn2 may cause a threshold voltage sampling error of the driving element DT. When the threshold voltage of the first switch element T1 shifts in the positive direction due to the positive bias temperature stress (PBTS) that increases as the accumulated driving time of the first switch element T1 increases, the kickback voltage may increase. Such a kickback voltage variation may cause a threshold voltage sampling error of the driving element DT, increasing the variation width of the current flowing through the light emitting element EL in the light emission step EMIS. As shown in FIGS. 16 to 18, the present disclosure adjusts the gate-on voltage VGH or the gate-off voltage VGL of at least the second scan pulse SC2 among the gate signals, or adjusts the data voltage Vdata, according to the accumulated driving time of the pixel circuit, and can thereby offset the kickback voltage that increases as the accumulated driving time increases. In the same manner as the voltage adjustment method of the second scan pulse SC2, the gate voltages of the other scan pulse SC1 and the EM pulses can be changed according to the accumulated driving time of the pixel circuit. Referring to FIG. 16, under the control of the timing controller 130, the power supply 140 may increase the gate-on voltage VGH as the accumulated driving time of the pixel circuit increases. As a result, as the kickback voltage increases, the voltage of the second node Vn2 may be lowered accordingly. At this time, the threshold voltage sampling rate of the driving element DT may be increased. Referring to FIG. 17, under the control of the timing controller 130, the power supply 140 may lower the gate-off voltage VGL as the accumulated driving time of the pixel circuit increases. As a result, as the kickback voltage increases, the voltage of the second node Vn2 may be lowered accordingly. The timing controller 130 may change the data voltage Vdata outputted from the data driver 110 by changing the pixel data value of the input image or by changing the gamma reference voltage VGMA outputted from the power supply 140. As shown in FIG. 18, as the accumulated driving time of the pixel circuit increases, the data voltage Vdata may be decreased, so that the increase in the kickback voltage may be offset. The embodiments shown in FIGS. 16 to 18 are also applicable to the pixel circuit shown in FIG. 19. The pixel circuit shown in FIG. 19 includes a light emitting element EL, six transistors DT and T1 to T5, and one capacitor Cst, and samples the threshold voltage Vth of the driving element DT using a diode connection circuit in the sampling step SMPL. Referring to FIG. 19, the driving element DT may be a MOSFET having a double gate structure to which a negative back-bias can be applied. 
The driving element DT includes a first gate electrode connected to a second node n2, a second gate electrode connected to a fourth node n4, a first electrode connected to a first node n1, and a second electrode connected to a third node n3. A first switch element T1 includes a first electrode connected to the first node n1, a second electrode connected to the second node n2, and a gate electrode to which the second scan pulse SC2 is applied. A second switch element T2 includes a first electrode connected to the third node n3, a second electrode connected to the fourth node n4, and a gate electrode to which the second EM pulse EM2 is applied. A third switch element T3 includes a first electrode to which the initialization voltage Vinit is applied, a second electrode connected to the fourth node n4, and a gate electrode to which the second scan pulse SC2 is applied. A fourth switch element T4 includes a first electrode to which the data voltage Vdata is applied, a second electrode connected to the third node n3, and a gate electrode to which the first scan pulse SC1 is applied. A fifth switch element T5 includes a first electrode to which the pixel driving voltage VDD is applied, a second electrode connected to the first node n1, and a gate electrode to which the first EM pulse EM1 is applied. The objects to be achieved by the present disclosure, the means for achieving the objects, and the effects of the present disclosure described above do not specify essential features of the claims, and thus the scope of the claims is not limited to the disclosure of the present disclosure. Although the embodiments of the present disclosure have been described in more detail with reference to the accompanying drawings, the present disclosure is not limited thereto and may be embodied in many different forms without departing from the technical concept of the present disclosure. Therefore, the embodiments disclosed in the present disclosure are provided for illustrative purposes only and are not intended to limit the technical concept of the present disclosure. The scope of the technical concept of the present disclosure is not limited thereto. Therefore, it should be understood that the above-described embodiments are illustrative in all aspects and do not limit the present disclosure. The protective scope of the present disclosure should be construed based on the following claims, and all technical concepts within the equivalent scope thereof should be construed as falling within the scope of the present disclosure. The various embodiments described above can be combined to provide further embodiments. All of the U.S. patents, U.S. patent application publications, U.S. patent applications, foreign patents, foreign patent applications, and non-patent publications referred to in this specification and/or listed in the Application Data Sheet are incorporated herein by reference, in their entirety. Aspects of the embodiments can be modified, if necessary, to employ concepts of the various patents, applications, and publications to provide yet further embodiments. These and other changes can be made to the embodiments in light of the above detailed description. In general, in the following claims, the terms used should not be construed to limit the claims to the specific embodiments disclosed in the specification and the claims, but should be construed to include all possible embodiments along with the full scope of equivalents to which such claims are entitled. Accordingly, the claims are not limited by the disclosure. 
| 62,481 |
11862087 | EMBODIMENTS Hereinafter, embodiments will be described with reference to the drawings. Elements common to the drawings are denoted by the same reference signs, and some elements in the drawings are exaggerated in size or shape for clear understanding of the description. Disclosed in the following are configurations of a circuit for generating and outputting control signals for the pixel circuits of an electro-luminescent display device. The electro-luminescent display device is a display device utilizing light-emitting elements that emit light in response to driving current, like an organic light-emitting diode (OLED) display device. The OLED display device may exhibit variations in brightness among the pixels included in the display region. The variations in brightness are caused by the characteristics of the threshold voltage compensation applied by the pixel circuits to the driving transistors. The inventors' research revealed that the length of the threshold compensation period that minimizes the variations in brightness in the display region differs depending on the value of the electric current that flows through the driving transistors (the brightness of the light-emitting elements). A display device in an embodiment of this specification determines the length of the threshold compensation period for the driving transistors in the pixel circuits based on the brightness of a plurality of pixels included in the display region that is specified in video data. This feature effectively reduces the variations in brightness that change with the brightness of the displayed image. Embodiment 1 An overall configuration of the display device in an embodiment of this specification is described with reference to FIG. 1. The elements in the drawings may be exaggerated in size or shape for clear understanding of the description. In the following, an organic light-emitting diode (OLED) display device is described as an example of the display device. FIG. 1 schematically illustrates a configuration example of an OLED display device 10. The OLED display device 10 includes a thin-film transistor (TFT) substrate 100 on which OLED elements (light-emitting elements) are fabricated and a structural encapsulation unit 150 for encapsulating the OLED elements. In the periphery of a cathode electrode region 114 outside the display region 125 of the TFT substrate 100, control circuits, specifically a scanning driver 131, an emission driver 132, an electrostatic discharge protection circuit 133, a driver IC 134, and a demultiplexer 136, are provided. The driver IC 134 is connected to external devices via flexible printed circuits (FPC) 135. The scanning driver 131 drives scanning lines on the TFT substrate 100. The emission driver 132 drives emission control lines to control light emission of pixels. The electrostatic discharge protection circuit 133 protects the elements on the TFT substrate 100 from electrostatic discharge damage. The driver IC 134 is mounted with an anisotropic conductive film (ACF), for example. The driver IC 134 provides power and control signals including a timing signal to the scanning driver 131 and the emission driver 132 and further provides power and a data signal to the demultiplexer 136. The demultiplexer 136 distributes the output of one pin of the driver IC 134 to d data lines (d is an integer greater than 1) in series. The demultiplexer 136 changes the data line to which it outputs the data signal from the driver IC 134 d times per scanning period, thereby driving d times as many data lines as there are output pins of the driver IC 134.
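The 1:d time-division driving just described can be illustrated with a short sketch. This is a hedged illustration rather than the disclosed circuit; the assumption that pin p drives the d consecutive data lines starting at line p*d is one plausible wiring, not something the text states.

    def demux_schedule(num_pins, d):
        # For each of the d sub-periods of one scanning period, list the
        # data line driven by every driver IC output pin.
        return [[pin * d + t for pin in range(num_pins)] for t in range(d)]

    # Example: 3 output pins, d = 2
    # sub-period 0 -> lines [0, 2, 4]; sub-period 1 -> lines [1, 3, 5]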
FIG. 2 illustrates a configuration example of a pixel circuit 107 in an embodiment of this specification. The pixel circuit 107 is included in the N-th pixel circuit row (N is an integer). The pixel circuit 107 includes six transistors (TFTs) M11 to M16, each having a gate, a source, and a drain. All transistors M11 to M16 in this example are p-type TFTs. The transistor M11 is a driving transistor for controlling the amount of electric current to an OLED element E1. The driving transistor M11 controls the amount of electric current to be supplied from an anode power supply for supplying a power supply potential PVDD to the OLED element E1 in accordance with a voltage stored in a storage capacitor C10. The storage capacitor C10 holds a written voltage throughout one frame period. The cathode of the OLED element E1 is connected to a power line 204 for transmitting a power supply potential PVEE from a cathode power supply. The power supply potentials PVDD and PVEE can be supplied from the driver IC 134. The storage capacitor C10 in the configuration example of FIG. 2 consists of capacitors C11 and C12 connected in series. One end of the storage capacitor C10 is supplied with the anode power supply potential PVDD, and another end is connected to the source/drain regions of the switching transistors M13 and M14. Still another end of the storage capacitor C10 is connected to the gate of the driving transistor M11. More specifically, an end of the capacitor C12 is connected to the anode power line 241; an end of the capacitor C11 is connected to the source/drain regions of the switching transistors M13 and M14; and the intermediate node between the capacitors C11 and C12 is connected to the gate of the driving transistor M11. The voltage of the storage capacitor C10 is the voltage between the gate of the driving transistor M11 and the anode power line 241. The source of the driving transistor M11 is connected to the anode power line 241; the source potential is the anode power supply potential PVDD. Accordingly, the storage capacitor C10 stores the voltage between the gate and the source of the driving transistor M11. In the configuration example of FIG. 2, the capacitor C12 stores the gate-source voltage of the driving transistor M11. The transistor M15 is a switching transistor for controlling ON/OFF of light emission of the OLED element E1. The source of the transistor M15 is connected to the drain of the driving transistor M11. The transistor M15 switches ON/OFF the current supply to the OLED element E1 connected to its drain. The gate of the transistor M15 is connected to an Em signal line (emission control line) 233, and the transistor M15 is controlled by the emission control signal Em input from the emission driver 132 to its gate. The transistor M16 works to supply a reset potential Vrst to the anode of the OLED element E1. One end of the source/drain of the transistor M16 is connected to a power line 242 for transmitting the reset potential Vrst, and the other end is connected to the anode of the OLED element E1. The reset potential Vrst can be supplied from the driver IC 134. The gate of the transistor M16 is connected to an S1 signal line 231, and the transistor M16 is controlled by the control signal S1. When the transistor M16 is turned ON by the control signal S1 input from the scanning driver 131 to its gate, the transistor M16 supplies the reset potential Vrst transmitted by the power line 242 to the anode of the OLED element E1.
The transistor M16 also has a function to prevent leak emission: while supplying the reset potential Vrst to the anode of the OLED element E1, it bypasses the current flowing from the power supply PVDD into the OLED element E1 via the transistors M11 and M15 during the reset period. The transistor M12 is a switching transistor for writing a voltage for applying threshold compensation to the driving transistor M11 to the storage capacitor C10 and for resetting the gate potential of the driving transistor M11. The source and the drain of the transistor M12 connect the gate and the drain of the driving transistor M11. Accordingly, when the transistor M12 is ON, the driving transistor M11 is diode-connected. The transistor M14 is a switching transistor for writing a voltage for applying threshold compensation to the driving transistor M11 to the storage capacitor C10. The transistor M14 controls whether to supply a reference potential Vref to the storage capacitor C10. One end of the source/drain of the transistor M14 is connected to a power line 202 for transmitting the reference potential Vref, and the other end is connected to an end of the capacitor C11. The gate of the transistor M14 is connected to the S1 signal line 231, and the transistor M14 is controlled by the control signal S1 input from the scanning driver 131 to its gate. The transistors M12, M16, and M14 are all controlled by the control signal S1. Accordingly, these transistors M12, M16, and M14 are turned ON/OFF simultaneously. During the period where the emission control transistor M15 is ON, these transistors are turned ON to reset the gate potential of the driving transistor M11 and the potential of the storage capacitor C10. After these potentials are reset, the emission control transistor M15 is turned OFF. When the transistors M12 and M14 are ON, the transistor M11 is a diode-connected transistor. A threshold compensation voltage between the power supply potential PVDD and the reference potential Vref is written to the storage capacitor C10. The transistor M13 is a switching transistor for selecting a pixel circuit to be supplied with a data signal and writing the data signal (data signal voltage) to the storage capacitor C10. One end of the source/drain of the transistor M13 is connected to a data line 237 for transmitting a data signal Vdata, and the other end is connected to the storage capacitor C10, more specifically, to an end of the capacitor C11. The gate of the transistor M13 is connected to an S2 signal line 232 for transmitting a control signal S2 for selecting a pixel circuit row to which a data signal is written. The transistor M13 is controlled by the control signal S2 supplied from the scanning driver 131. When the transistor M13 is ON, it supplies the data signal Vdata, supplied from the driver IC 134 through the data line 237, to the storage capacitor C10. FIG. 3 is an example of the timing chart of the signals for controlling the pixel circuit 107 in FIG. 2. FIG. 3 is a timing chart for writing a threshold compensation voltage for the driving transistor M11 and a data signal Vdata to a pixel circuit in the N-th pixel circuit row. Specifically, FIG. 3 illustrates the temporal variation, within one frame, of the selection signals S1_N and S2_N for selecting the N-th pixel circuit row to write a data signal Vdata, the emission control signal Em_N for the N-th pixel circuit row, and the data signal Vdata. FIG. 3 shows the variation in signal potential level of those signals. A selection signal is a kind of control signal and can also be referred to as a scanning signal.
The selection signal S1 is a first selection signal and the selection signal S2 is a second selection signal. The period of 1H in the timing chart of FIG. 3 is a period to write a data signal Vdata to a pixel circuit, and is the period where the selection signal S2 is Low. A threshold compensation period is not shorter than 1H; in the example of FIG. 3, it is 5H. At a time T1, the selection signal S1_N changes from High to Low. The transistors M12, M14, and M16 turn ON in response to the change of the selection signal S1_N. Since the emission control signal Em_N is Low at the time T1, the transistor M15 is ON. Since the transistors M12 and M14 to M16 are ON, the reset potential Vrst is supplied to the anode of the OLED element E1 and, in addition, to the gate of the driving transistor M11. At a time T2, the emission control signal Em_N changes from Low to High. The period from the time T1 to the time T2 is a period to reset the gate voltage of the driving transistor M11 and the storage capacitor C10. The potential levels of the signals S1_N, S2_N, and Em_N are maintained from the time T2 to a time T3. The transistors M12, M14, and M16 are ON and the other transistors, including the transistor M15, are OFF. A threshold compensation voltage is written to the storage capacitor C10 during this period from the time T2 to the time T3. The period from the time T2 to the time T3 is a threshold compensation period and has a length of 5H. At the time T3, the selection signal S2_N changes from High to Low. The selection signal S1_N changes from Low to High. The transistors M12, M14, and M16 turn OFF in response to the change of the selection signal S1_N. The selection signal S1_N is maintained High after the time T3. Further, the transistor M13 turns from OFF to ON in response to the change of the selection signal S2_N. As a result, a data signal Vdata starts being written to the storage capacitor C10. At a time T4, the selection signal S2_N changes from Low to High. In response, the transistor M13 turns from ON to OFF to end the data write to the N-th pixel circuit row. The period from the time T3 to the time T4 is a period to write data to the N-th pixel circuit row and has a length of 1H. The selection signal S2_N is maintained High after the time T4. At the time T4, the emission control signal Em_N changes from High to Low. In response, the transistor M15 turns from OFF to ON. As a result, driving current is supplied to the OLED element E1 and the OLED element E1 starts emitting light. FIG. 4 schematically illustrates the relation among the threshold compensation periods and data write periods of four consecutive pixel circuit rows. In each pixel circuit row, a data write period follows a threshold compensation period. The lengths of the data write periods and the threshold compensation periods are common to the pixel circuit rows. In the examples illustrated in FIGS. 3 and 4, the length of a data write period is 1H and the length of a threshold compensation period is (q−1)*H, where q is an integer greater than 1. For more appropriate threshold compensation, the value of q is determined to be an integer greater than 2. In the example described with reference to FIG. 3, the value of q is 6. The length of the threshold compensation period changes with the length of the selection signal S1_N. As described above, the length of the period where the selection signal S1_N is Low is qH, and the length of the threshold compensation period is (q−1)*H.
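The timing just walked through reduces to simple arithmetic. A minimal sketch (the 4.2 μs value for 1H is the example figure cited later for the FIG. 5 analysis; q = 6 reproduces the 5H compensation period of FIG. 3):

    def compensation_period_us(q, h_us=4.2):
        # S1_N is Low for q*H; the data write takes 1H of that window,
        # so the threshold compensation period is (q - 1)*H.
        assert q > 1, "q must be an integer greater than 1"
        return (q - 1) * h_us

    # Example: q = 6 -> 5H = 21.0 microseconds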
The OLED display device 10 can dynamically change the value of q to attain an appropriate threshold compensation period. As will be described later, the length of the threshold compensation period does not need to be an integral multiple of 1H. As illustrated in FIG. 4, data signals are serially written to the pixel circuit rows. The data write period for a pixel circuit row starts immediately after the data write period for the previous row has ended. The data write periods for different pixel circuit rows never overlap. The threshold compensation period for a pixel circuit row overlaps the threshold compensation period and the data write period for the previous pixel circuit row. A threshold compensation period can overlap the data write periods of several preceding pixel circuit rows, counting back from the immediately preceding pixel circuit row. Hereinafter, a method of dynamically changing the threshold compensation period for the pixels is described. The threshold compensation period that minimizes the variations in brightness of the display region 125 differs depending on the brightness of the display region 125. FIG. 5 illustrates relations between the average brightness of a video frame and the best threshold compensation period for different pixel circuits. FIG. 5 provides results of simulation by principal component analysis (PCA). In the graph of FIG. 5, the horizontal axis represents the average brightness of the pixels according to a video frame and the vertical axis represents the best length for the threshold compensation period. More specifically, the horizontal axis represents the average of the brightness expressed as a luminous intensity level and the vertical axis represents the threshold compensation period expressed as a multiple of the 1H period. The 1H period is 4.2 μs. One pixel can display a dot in one color at different brightness levels. Typically, each pixel displays a red, blue, or green dot; it can also be referred to as a subpixel. FIG. 5 provides analysis results for three different examples of pixel circuits. The 7T1C pixel circuit includes seven transistors and one capacitive element. The 6T2C_D pixel circuit and the 6T2C_S pixel circuit both include six transistors and two capacitive elements, but they differ in the connection of the elements therein. The pixel circuit 107 illustrated in FIG. 2 is a 6T2C_D pixel circuit. As illustrated in FIG. 5, the best length for the threshold compensation period changes with the brightness of the display region 125 in any of these pixel circuits. The inventors' research revealed that the best threshold compensation period differs depending on various statistics of the brightness of the display region 125, for example, not only the average of the brightness of the display region 125 but also the mode of the brightness and the average of the brightness of a specific color. The OLED display device 10 in an embodiment of this specification dynamically changes the length of the threshold compensation period in the display region 125 during a period where it is displaying a picture. The period where the display device is displaying a picture is a period where the display device is displaying a picture composed of successive video frames. For example, the OLED display device 10 updates the threshold compensation period for every video frame or every predetermined number of video frames.
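A brightness-to-period mapping of the kind plotted in FIG. 5 can be realized by interpolating a small calibration table. The sketch below is illustrative only: the calibration pairs are invented placeholders (the text publishes no numeric curve data), and piecewise-linear interpolation is merely one plausible realization of the lookup-table or predefined-function options mentioned below.

    import bisect

    # Hypothetical calibration: (average luminous level, best period in H units)
    CAL = [(0, 6.0), (64, 5.0), (128, 4.0), (192, 3.5), (255, 3.0)]

    def best_compensation_period_h(avg_level):
        # Piecewise-linear interpolation of a FIG. 5-style curve.
        xs = [x for x, _ in CAL]
        i = bisect.bisect_right(xs, avg_level)
        if i == 0:
            return CAL[0][1]
        if i == len(CAL):
            return CAL[-1][1]
        (x0, y0), (x1, y1) = CAL[i - 1], CAL[i]
        return y0 + (y1 - y0) * (avg_level - x0) / (x1 - x0)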
The OLED display device 10 calculates a predetermined statistic of the brightness of the pixels specified by the video frame and determines the threshold compensation period for that video frame or for subsequent video frames based on the statistic. As a result, the variations in brightness of every image displayed on the screen decrease, raising the picture quality. FIG. 6 illustrates an example of the functional configuration of an OLED display device 10 in an embodiment of this specification that dynamically changes the threshold compensation period. The OLED display device 10 includes a brightness data calculator 410 and a pulse width controller 400. The brightness data calculator 410 and the pulse width controller 400 can be included in the driver IC 134 or in an external circuit (not shown in FIG. 6). The brightness data calculator 410 receives video data from an external circuit. The brightness data calculator 410 includes a frame memory 411. The video data is a sequence of video frames; the brightness data calculator 410 successively stores received video frames in the frame memory 411. The brightness data calculator 410 calculates a statistic of the brightness specified by the video frames. The statistic can be calculated for each frame or calculated intermittently for some of the video frames. The statistic to be calculated can be, for example, the average brightness, the mode of the brightness, the highest brightness, or the lowest brightness. The average brightness can be the average of the brightness levels assigned to the pixels in the whole or a part of the display region 125, or to all or a part of the pixels of a specific color. The mode of the brightness can be the brightness level assigned by a video frame to the largest number of pixels among the brightness levels assigned to all pixels, pixels of a specific color, or pixels in a specific part of the display region 125. The highest brightness and the lowest brightness can be the highest level and the lowest level among the brightness levels assigned by a video frame to all pixels, pixels of a specific color, or pixels in a specific part of the display region 125. The pulse width controller 400 includes a compensation period pulse width calculator 401, an SRAM 402 as a volatile storage device, and a timing controller (TCON) 403. The compensation period pulse width calculator 401 receives the brightness statistic of a video frame from the brightness data calculator 410. The compensation period pulse width calculator 401 determines a threshold compensation period based on the received brightness statistic. More specifically, the compensation period pulse width calculator 401 determines the pulse width of the start pulse signal for the S1 selection signals, which determines the threshold compensation period. The threshold compensation period is defined by this pulse width. The pulse width calculator 401 can determine the threshold compensation period by consulting a lookup table or by calculating with a predefined function. The data indicating the start pulse width or the data indicating the threshold compensation period is stored in the SRAM 402. The timing controller 403 controls the scanning driver 131, the emission driver 132, and a data driver 421. The data driver 421 is included in the driver IC 134 and outputs data signals in accordance with video data (a video frame) to the individual data lines. The demultiplexer 136 and the electrostatic discharge protection circuit 133 are omitted from FIG. 6.
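Each statistic enumerated above is a cheap single-pass computation over a frame. A minimal sketch of what a brightness data calculator could compute, assuming for illustration that the frame is handed over as a flat list of luminous levels:

    from collections import Counter

    def frame_brightness_statistics(levels):
        # levels: iterable of per-(sub)pixel luminous levels (e.g. 0-255)
        # for the whole display region, a part of it, or a single color.
        levels = list(levels)
        return {
            "average": sum(levels) / len(levels),
            "mode": Counter(levels).most_common(1)[0][0],
            "highest": max(levels),
            "lowest": min(levels),
        }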
The timing controller 403 acquires video data from the frame memory 411 and further acquires the data indicating the threshold compensation period (start pulse width) from the SRAM 402. The timing controller 403 generates an internal clock signal and start pulse signals to control the scanning driver 131, the emission driver 132, and the data driver 421. The timing controller 403 further generates the video data to be sent to the data driver 421 in accordance with the video data from the external source. The timing controller 403 sends the video data (video frame), the clock signal, and a start pulse signal (STH signal) to the data driver 421. The data driver 421 operates in accordance with the clock signal. The data driver 421 outputs data signals specifying the brightness of the individual pixels in each pixel row according to the video data to the data lines, at the times and for the period determined by the STH signal. The timing controller 403 sends the clock signal and two start pulse signals (STV1 signal and STV3 signal) to the scanning driver 131. The scanning driver 131 includes two shift register circuits 431 and 432. For example, the shift register circuit 431 outputs S1 selection signals in accordance with the received clock signal and STV1 signal, and the shift register circuit 432 outputs S2 selection signals in accordance with the received clock signal and STV3 signal. The STV1 signal and the STV3 signal define the lengths of the Low-state periods of the S1 selection signals and the S2 selection signals, respectively. As will be described later, the pulse width controller 400 changes the length of the threshold compensation period by changing the length of the Low-state period of the S1 selection signals. The timing controller 403 sends the clock signal and a start pulse signal (STV2 signal) to the emission driver 132. The emission driver 132 includes a shift register circuit and outputs emission control signals (Em signals) in accordance with the received clock signal and STV2 signal. The STV2 signal defines the length of the High-state period of the Em signals. In an embodiment of this specification, the pulse width controller 400 changes the length of the threshold compensation period by changing the length of the Low-state period of the S1 selection signals. The length of the emission control signals Em is kept uniform. FIG. 7 provides timing charts of the S1 selection signals and the control signals for generating the S1 selection signals. The timing chart 601 illustrates the temporal variation of the signals for a video frame 1, and the timing chart 602 illustrates the temporal variation of the signals for a video frame 2 different from the video frame 1. FIG. 7 illustrates the temporal variation of two clock signals (CK signal and CKB signal), a start pulse signal (STV1 signal), and S1 selection signals. FIG. 7 includes the temporal variation of the S1 selection signals S1_1, S1_2, S1_3, and S1_4 for four consecutive pixel rows by way of example. An S1 selection signal is generated based on the two clock signals (the CK signal and the CKB signal) and the STV1 signal. The CK signal and the CKB signal are generated by the scanning driver 131 based on the CLK signal from the timing controller 403. The CK signal and the CKB signal have the same pulse width (the length of a Low-state period) and their phases are shifted by a half cycle. As illustrated in FIG. 7, the pulse width of the start pulse signal (STV1 signal) for the S1 selection signals, that is, the length of the Low-state period of the STV1 signal, defines the pulse width of the S1 selection signals, that is, the length of the Low state of the S1 selection signals.
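Per the FIG. 7 description that follows, the S1 pulse width spans the CKB pulses covered by the STV1 Low window (one pulse for frame 1, three for frame 2). A behavioral sketch under an assumed clock geometry of one CKB cycle per 2H; the text gives no numeric clock period, so this granularity is an assumption:

    def s1_pulse_width_h(stv1_low_h, h_per_ckb_cycle=2):
        # The S1 Low width covers the CKB pulses that fall inside the
        # STV1 Low window, so widening STV1 lengthens S1 in whole
        # CKB-pulse steps.
        n_ckb_pulses = max(1, int(stv1_low_h // h_per_ckb_cycle))
        return n_ckb_pulses * h_per_ckb_cycle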
When the pulse width of the S1 selection signals is shorter, the threshold compensation period is shorter; when the pulse width of the S1 selection signals is longer, the threshold compensation period is longer. In the timing chart 601 for the frame 1, the width from the first falling edge to the last rising edge of the CKB signal within the period where the STV1 signal is Low becomes the pulse width (the length of the Low-state period) of the S1 selection signals. The pulse width of the S1 selection signals for the frame 1 corresponds to one pulse of the CKB signal. The pulse of the S1 selection signal S1_1 for the first pixel row starts with the first rising edge of the CKB signal within the period where the STV1 signal is Low. The scanning driver 131 starts outputting the S1 selection signals for the second and subsequent pixel rows in response to the falling edges of either the CK signal or the CKB signal. The pulse widths of the S1 selection signals for all pixel rows are the same. As to the frame 2, the width from the first falling edge to the last rising edge of the CKB signal within the period where the STV1 signal is Low becomes the pulse width (the length of the Low-state period) of the S1 selection signals, as in the frame 1. In the timing chart 602 for the frame 2, the pulse width of the STV1 signal is longer than the pulse width of the STV1 signal in the timing chart 601 for the frame 1. Accordingly, the pulse width of the S1 selection signals for the frame 2 is longer than the pulse width of the S1 selection signals for the frame 1. In the example of FIG. 7, the pulse width of the S1 selection signals for the frame 2 corresponds to three pulses of the CKB signal. A pulse of the S1 selection signal S1_1 for the first pixel row starts in response to the first falling edge of the CKB signal within the period where the STV1 signal is Low. The scanning driver 131 starts outputting the S1 selection signals for the second and subsequent pixel rows in response to the falling edges of either the CK signal or the CKB signal. The pulse widths of the S1 selection signals for all pixel rows are the same. In the example of FIG. 7, the falling edge of the STV1 signal is synchronized with the frame cycle. The pulse width controller 400 shifts the rising edge of the STV1 signal depending on the brightness of the display region 125 specified by the video frame. The start of a threshold compensation period is synchronized with the frame cycle, and the end of the threshold compensation period is shifted back or forth. In the configuration example described with reference to FIGS. 6 and 7, the scanning driver 131 determines the pulse width of the S1 selection signals, that is, the threshold compensation period, in accordance with the pulse width of the start pulse signal sent from the pulse width controller 400. The pulse width controller 400 determines the pulse width of the start pulse signal based on a statistic of the pixels of the display region 125 specified by a video frame. This configuration enables more appropriate threshold compensation depending on the brightness of the pixels of the display region 125 and reduces the variations in brightness within the display region 125. The above-described example fixes the rising edges and shifts the falling edges of the S1 selection signals to change the period where the S1 selection signal is Low. Another example can be configured to shift the rising edges and fix the falling edges of the S1 selection signals.
In that example, the rising edges of the emission control signals Em are shifted in accordance with the shift of the rising edges of the S1 selection signals. Embodiment 2 FIG. 8 illustrates an example of the functional configuration of an OLED display device 10 in an embodiment of this specification that dynamically changes the threshold compensation period. In the following, differences from the configuration example illustrated in FIG. 6 are mainly described. The OLED display device 10 includes a pulse width controller 450 in place of the pulse width controller 400 in FIG. 6. The brightness data calculator 410 works in the same manner as the one in the configuration example of FIG. 6. The pulse width controller 450 includes a timing controller (TCON) 451, a trimming width calculator 452, and a trimming controller 453. Unlike the configuration example of FIG. 6, the timing controller (TCON) 451 generates an STV1 start pulse signal without referencing the brightness statistic of the video frame calculated by the brightness data calculator 410. The pulse width of the STV1 start pulse signal is fixed. The other control signals are generated in the same manner as those in the configuration example of FIG. 6. The trimming width calculator 452 acquires the brightness statistic of the video frame from the brightness data calculator 410. The trimming width calculator 452 determines a threshold compensation period based on the brightness statistic. Specifically, the trimming width calculator 452 determines a trimming width for determining the threshold compensation period. The trimming width defines the threshold compensation period. The trimming width calculator 452 sends a trimming signal specifying the trimming width to the trimming controller 453. The trimming width can be determined by consulting a lookup table or by calculating with a predefined function. The trimming controller 453 acquires the STV1 start pulse signal from the timing controller 451 and acquires the trimming signal from the trimming width calculator 452. The trimming controller 453 trims the pulse of the STV1 start pulse signal in accordance with the trimming signal. As a result, the pulse width of the STV1 start pulse signal is shortened. FIG. 9 is a diagram schematically illustrating the operation of trimming the STV1 start pulse signal by the trimming controller 453. An STV1 start pulse signal having a pulse width W1 is input to the trimming controller 453. The trimming controller 453 reduces the pulse width of the STV1 start pulse signal by the trimming width specified by the trimming signal. The STV1 start pulse signal output from the trimming controller 453 has a pulse width W2. The pulse width W2 is shorter than the pulse width W1 by the specified trimming width. As described above, this configuration example adjusts the pulse width of the STV1 start pulse signal by trimming the STV1 start pulse signal from the timing controller. Hence, the timing controller does not need a function to adjust the pulse width of the start pulse, and therefore a conventional timing controller can be used. The pulse width controller 450 can include a function to extend the pulse width of the start pulse signal, instead of or in addition to the function to trim the pulse width of the start pulse signal. Embodiment 3 FIG. 10 illustrates an example of the functional configuration of an OLED display device 10 in an embodiment of this specification that dynamically changes the threshold compensation period. In the following, differences from the configuration example illustrated in FIG. 6 are mainly described.
The OLED display device 10 includes a scanning driver 475 in place of the scanning driver 131 in FIG. 6. The scanning driver 475 includes a shift register circuit 432, a selector circuit 476, and a latch circuit 478. The details of the scanning driver 475 will be described later with reference to FIG. 11. The OLED display device 10 includes a pulse width controller 470 in place of the pulse width controller 400 in FIG. 6. The brightness data calculator 410 works in the same manner as the one in the configuration example of FIG. 6. The pulse width controller 470 includes a timing controller (TCON) 471 and a pulse width calculator 472. The OLED display device 10 includes an emission driver 137 in place of the emission driver 132 in FIG. 6. As described above, the scanning driver 475 does not include the shift register circuit 431 of the scanning driver 131 in FIG. 6. For this reason, the control signals generated and output by the timing controller (TCON) 471 do not include the STV1 start pulse signal of FIG. 6. The other control signals (CLK, STV2, STV3, and STH) generated by the timing controller 471 are the same as those in the configuration example of FIG. 6. The pulse width calculator 472 acquires the brightness statistic of a video frame from the brightness data calculator 410. The pulse width calculator 472 determines a threshold compensation period based on the acquired brightness statistic. Specifically, the pulse width calculator 472 determines the pulse width for the STV1 start pulse signal that defines the threshold compensation period. As will be described later, the scanning driver 475 outputs a control signal to the selector circuit 476. This specification refers to this control signal as the selector signal. As will be described later, the selector signal specifies one of the control terminals of the selector circuit 476. The scanning driver 475 outputs S1 selection signals having a pulse width associated with the selected control terminal. Selecting a different control terminal leads to the generation of S1 selection signals having a different pulse width. The pulse width calculator 472 can determine a threshold compensation period appropriate for the acquired brightness statistic by determining the control terminal of the selector circuit 476 to be selected based on the acquired brightness statistic. The scanning driver 475 sends SET signals for the individual pixel circuit rows to the emission driver 137. The SET signal will be described later. The emission driver 137 generates the emission control signals Em for the individual pixel circuit rows based on the STV2 signal and the SET signals for the individual pixel circuit rows. FIG. 11 is a configuration diagram schematically illustrating the internal circuit configuration of the scanning driver 475. The scanning driver 475 includes a shift register circuit (SR circuit) 432 as the first stage, a selector circuit 476 as the next stage, and a latch circuit 478 as the final stage. The shift register circuit 432 includes a plurality of shift register units 481 connected in series. In FIG. 11, only one of the shift register units is provided with the reference sign 481. FIG. 11 illustrates the (N−4)th to the (N+2)th shift register units 481 (N is an integer). These shift register units 481 are associated with the (N−4)th to the (N+2)th pixel circuit rows. Each shift register unit 481 outputs an S2 selection signal to the S2 selection signal line 232 of the associated pixel circuit row and further outputs the same signal to the selector circuit 476 and to the latch unit 300 associated with the shift register unit 481.
A data bit is transferred from a shift register unit 481 to the next shift register unit 481 in accordance with the clock signals CK and CKB. The shift register unit 481 holding the data bit outputs a signal pulse. The latch circuit 478 includes a plurality of latch units 300. In FIG. 11, only one of the latch units is provided with the reference sign 300. FIG. 11 illustrates the (N−2)th to the (N+2)th latch units 300. These latch units 300 are associated with the (N−2)th to the (N+2)th pixel circuit rows, and each of them outputs an S1 selection signal to the S1 selection signal line 231 of the associated pixel circuit row. The selector circuit 476 is disposed between the shift register circuit 432 and the latch circuit 478 to change the connections between the shift register units 481 and the latch units 300. The selector circuit 476 has a switch matrix structure including a plurality of switching transistors 483. In FIG. 11, one of the switching transistors is provided with the reference sign 483 by way of example. Although the switching transistors in the example of FIG. 11 are p-type TFTs, the switching transistors can be of either type. The selector circuit 476 in the configuration example of FIG. 11 includes three switch columns, each including switching transistors disposed vertically. The gates of the switching transistors in one switch column are connected to a control terminal A0; the gates of the switching transistors in another switch column are connected to a control terminal A1; and the gates of the switching transistors in the remaining switch column are connected to a control terminal A2. All switching transistors in each switch column are turned ON/OFF together by the potential from the one control terminal associated therewith. One end of the source/drain of each switching transistor 483 connected to the control terminal A0 is connected to the k-th latch unit 300, and the other end is connected to the (k−2)th shift register unit 481 (k is an integer). One end of the source/drain of each switching transistor 483 connected to the control terminal A1 is connected to the k-th latch unit 300, and the other end is connected to the (k−3)th shift register unit 481. One end of the source/drain of each switching transistor 483 connected to the control terminal A2 is connected to the k-th latch unit 300, and the other end is connected to the (k−4)th shift register unit 481. The switching transistors 483 in a switch column are connected to different latch units 300 and different shift register units 481. Each latch unit 300 is connected to three switching transistors 483 belonging to different switch columns. Each shift register unit 481 is connected to three switching transistors 483 belonging to different switch columns. Each shift register unit 481 is also connected to the associated latch unit 300. The connected shift register unit 481 and latch unit 300 are assigned the same number. Furthermore, each shift register unit 481 is connected to the S2 selection signal line 232 of the associated pixel circuit row. The shift register circuit 432 includes shift register units 481 each associated with a pixel circuit row. A shift register unit 481 associated with a pixel circuit row outputs a signal pulse to the associated pixel circuit row and to two latch units 300. The number of shift register units 481 is larger than the number of pixel circuit rows. Some of the shift register units 481 are not connected to a pixel circuit row; they output a signal to latch units 300 only. Two input terminals of each latch unit 300 receive the output signals of different shift register units 481.
Specifically, a signal from the associated (same-numbered) shift register unit 481 is input to an RST terminal. A signal from the shift register unit 481 of an earlier stage, selected by the selector circuit 476, is input to a SET terminal. The RST terminal is a first terminal and the SET terminal is a second terminal. In the configuration example of FIG. 11, the output of the N-th shift register unit 481 is input to the RST terminal of the N-th latch unit 300. The output of the (N−L)th shift register unit 481 selected by the selector circuit 476 (L is an integer greater than 1 and, in the example of FIG. 11, 2, 3, or 4) is input to the SET terminal of the N-th latch unit 300. FIG. 12 illustrates a configuration example of a latch unit 300. The latch unit 300 outputs an S1 selection signal to the N-th pixel circuit row. The latch unit 300 includes a SET terminal 301 and an RST terminal 302 for receiving signals and a Q terminal 303 for outputting a signal. The selection signal S2_N−L for the (N−L)th pixel circuit row is input from the shift register circuit 432 to the SET terminal 301. The selection signal S2_N for the N-th pixel circuit row is input to the RST terminal 302. The latch unit 300 outputs the selection signal S1_N from the Q terminal 303 to the S1 selection signal line 231 for the N-th pixel circuit row. FIG. 13 is a truth table for a latch unit 300. In the truth table of FIG. 13, L represents a logical Low level and H represents a logical High level. In the configuration described with reference to FIGS. 3 and 7, the High potential levels of the S1 selection signal and the S2 selection signal correspond to logical Low, and the Low potential levels of those signals correspond to logical High. When the SET input is L and the RST input is L, the Q output is L. When the SET input is H and the RST input is L, the Q output is H, and the Q output is held at H even if the SET input changes afterwards. When the SET input is L and the RST input is H, the Q output is L. The state where both the SET input and the RST input are H is not allowed. FIG. 14 illustrates an example of the circuit configuration of a latch unit 300. In the configuration example of FIG. 14, the latch unit 300 includes four transistors and one capacitive element. The four transistors M21 to M24 are p-type transistors. The transistor M21 is diode-connected and receives the input from the SET terminal 301 at its drain. The transistor M22 is connected between the transistor M21 and the power supply for supplying the power supply potential PVEE and receives the input from the RST terminal 302 at its gate. The transistor M23 is connected between the power supply for supplying the power supply potential PVDD and the Q terminal 303, and its gate is connected to the intermediate node between the transistors M21 and M22. The transistor M24 is connected between the transistor M23 and the power supply for supplying the power supply potential PVEE and receives the input from the RST terminal at its gate. The capacitive element Cb is connected between the gate of the transistor M23 and the Q terminal 303. The intermediate node between the transistors M23 and M24 is connected to the Q terminal 303. Returning to FIG. 11, the N-th shift register unit 481 outputs signal pulses simultaneously to the RST terminal of the N-th latch unit 300, the SET terminal of the (N+L)th latch unit 300 selected by the selector circuit 476, and the S2 selection signal line 232 for the N-th pixel circuit row. The N-th latch unit 300 outputs an S1 selection signal to the N-th pixel circuit row.
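The set-priority latch of FIG. 13 combined with the selector taps of FIG. 11 can be modeled behaviorally. This is a minimal sketch under stated assumptions: purely logical levels (recall that logical H corresponds to the Low potential of S1/S2), the disallowed SET = RST = H state asserted away, and the L/L row interpreted as holding the previous output, consistent with the hold behavior described above.

    TAP_OFFSET = {"A0": 2, "A1": 3, "A2": 4}  # selector control terminal -> L

    def latch_q_sequence(set_seq, rst_seq):
        # FIG. 13 behavior: Q goes H on SET=H, returns to L on RST=H,
        # and otherwise holds its previous value.
        q, out = False, []
        for s, r in zip(set_seq, rst_seq):
            assert not (s and r), "SET = RST = H is not allowed"
            if s:
                q = True
            elif r:
                q = False
            out.append(q)
        return out

    def set_source_row(n, terminal):
        # The SET terminal of latch n is fed by shift register unit n - L,
        # where L is chosen by the active selector control terminal.
        return n - TAP_OFFSET[terminal]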
The N-th latch unit 300 starts a pulse of the S1 selection signal in response to a signal pulse from the (N−L)th shift register unit 481 and ends the pulse in response to a signal pulse from the N-th shift register unit 481. When one of the control terminals A0, A1, and A2 is selected for the N-th S2 selection signal line 232 and latch unit 300, the associated (N−2)th, (N−3)th, or (N−4)th shift register unit 481 is selected. More generally, the output of the N-th latch unit 300 is set by a signal pulse from the K-th shift register unit and is reset by a signal pulse from the (K+p)th shift register unit (K is an integer and p is an integer greater than 1). The length of the threshold compensation period is (p−1)*H. The output of the (N+q)th latch unit 300 is set by a signal pulse from the (K+q)th shift register unit and is reset by a signal pulse from the (K+q+p)th shift register unit (q is an integer greater than 0). The pulse from the latch unit 300 has a pulse width of p*H. The pulse from the (N+q)th latch unit 300 is delayed from the pulse from the N-th latch unit 300 by a time of q*H. The length of the threshold compensation period is (p−1)*H. The above-described example selects a value for p from 2, 3, and 4 by selecting one of the control terminals A0, A1, and A2 with the selector signal. In other words, the output of the shift register unit corresponding to the selected p is selected. As described above, an S1 selection signal having a different pulse width, that is, a threshold compensation period having a different length, is generated for a different value of p. The combination of selectable values for p is determined by the design; it does not need to consist of consecutive natural numbers. FIG. 15 is an example of the timing chart of the signals for controlling a pixel circuit 107 described with reference to FIGS. 10 to 14. FIG. 15 is a timing chart for writing a threshold compensation voltage for the driving transistor M11 and a data signal Vdata to a pixel circuit 107 in the N-th pixel circuit row. Specifically, FIG. 15 illustrates the temporal variation, in one frame period, of the selection signals S1_N and S2_N for the N-th pixel circuit row to write the data signal Vdata, the emission control signal Em_N for the N-th pixel circuit row, and the selection signal S2_N−4 for the (N−4)th pixel circuit row. The selection signal S2_N−4 is an example of the output of the shift register unit selected by setting the selector bits to A0=0, A1=0, and A2=1 in the selector circuit 476. The emission control signal Em_N rises synchronously (simultaneously) with the rise of the SET signal of the N-th latch unit 300. As described above, the emission driver 137 generates an emission control signal based on the input SET signal. In the example of FIG. 15, the selection signal S2_N−4 is the SET signal for the N-th latch unit 300. In the timing chart of FIG. 15, the period of 1H is a period to write a data signal Vdata to a pixel circuit, that is, a period where the S2 selection signal is Low. A threshold compensation period is not shorter than 1H and, in the example of FIG. 15, is 3H. At a time T1, the selection signal S2_N−4 changes from High to Low. In response to the change of the selection signal S2_N−4, the selection signal S1_N changes from High to Low. In response to the change of the selection signal S1_N, the transistors M12, M14, and M16 turn ON. Since the emission control signal Em_N is Low at the time T1, the transistor M15 is ON.
Since the transistors M12 and M14 to M16 are ON, the reset potential Vrst is supplied to the anode of the OLED element E1 and, in addition, to the gate of the driving transistor M11. At a time T2, the emission control signal Em_N changes from Low to High. The period from the time T1 to the time T2 is a period to reset the gate voltage of the driving transistor M11. Furthermore, the selection signal S2_N−4 changes from Low to High at the time T2. The period from the time T1 to the time T2 is a period to write a data signal to the (N−4)th pixel circuit row. The period from the time T1 to the time T2 has a length of 1H. The potential levels of the signals S1_N, S2_N, Em_N, and S2_N−4 are maintained from the time T2 to a time T3. The transistors M12, M14, and M16 are ON and the other transistors, including the transistor M15, are OFF. A threshold compensation voltage is written to the storage capacitor C10 during this period from the time T2 to the time T3. The period from the time T2 to the time T3 is a threshold compensation period and has a length of 3H. At the time T3, the selection signal S2_N changes from High to Low. As will be described later, the selection signal S1_N changes from Low to High in response to the change of the selection signal S2_N. The transistors M12, M14, and M16 turn OFF in response to the change of the selection signal S1_N. The selection signal S1_N is maintained High after the time T3. In response to the change of the selection signal S2_N, the transistor M13 turns from OFF to ON. As a result, a data signal Vdata starts being written to the storage capacitor C10. At a time T4, the selection signal S2_N changes from Low to High. In response, the transistor M13 turns from ON to OFF to end the data write to the N-th pixel circuit row. The period from the time T3 to the time T4 is a period to write data to the N-th pixel circuit row and has a length of 1H. The selection signal S2_N is maintained High after the time T4. At the time T4, the emission control signal Em_N changes from High to Low. In response, the transistor M15 turns from OFF to ON. As a result, driving current is supplied to the OLED element E1 and the OLED element E1 starts emitting light. As understood from the description with reference to FIGS. 10 to 15, the shift register circuit 432 outputs the pulses of the selection signals S2 serially to the pixel circuit rows. The pulse width is 1H. Each latch unit 300 outputs an S1 signal to the associated pixel circuit row. As described above, when the N-th latch unit 300 receives a pulse of the Low potential level (logical H-level) of the selection signal S2_N−q for the selected preceding row at the SET terminal 301, it alters the selection signal S1_N output from the Q terminal 303 to the Low potential level. Although the selection signal S2_N−q subsequently changes to the High potential level (logical L-level), the input S2_N to the RST terminal 302 is at the High potential level, and therefore the selection signal S1_N output from the Q terminal 303 is maintained at the Low potential level. Subsequently, the latch unit 300 receives a pulse of the Low potential level (logical H-level) of the selection signal S2_N for the N-th pixel circuit row at the RST terminal 302. In response, the latch unit 300 alters the selection signal S1_N output from the Q terminal 303 to the High potential level (logical L-level). The pulse width of the S1_N signal output from the latch unit 300 is qH.
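The row timing just walked through follows from the tap arithmetic alone. A minimal sketch, assuming for illustration that row k's 1H-wide S2 pulse begins at time k*H (a convention for the example, not something the text states):

    def s1_window(n, p, h=1.0):
        # S1_N is set by the S2 pulse of row n-p and reset by the S2 pulse
        # of row n, so its Low width is p*H; the first H of that window is
        # the (n-p)th row's data write, leaving (p-1)*H for threshold
        # compensation (p selected from {2, 3, 4} via A0/A1/A2).
        start = (n - p) * h
        end = n * h
        return start, end, (p - 1) * h

    # Example: n = 10, p = 4 -> S1_10 Low from 6H to 10H, compensation 3H,
    # matching the FIG. 15 walkthrough with L = 4.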
As described above, using a selector circuit and a latch circuit enables one shift register circuit to generate both the S1 selection signals and the S2 selection signals. Hence, the area required for the circuit for generating the S1 selection signals and the S2 selection signals can be made smaller. In the foregoing example, the emission driver 137 receives a SET signal generated within the scanning driver 475 and an STV2 signal and generates an emission control signal Em. In another example, the emission driver 137 can include a selector circuit and a latch circuit like the scanning driver 475 and generate the emission control signals Em with these circuits. The above-described example changes the length of the threshold compensation period by shifting the fall of the S1 selection signal and the rise of the emission control signal Em. Another configuration example can change the length of the threshold compensation period by shifting the rise of the S1 selection signal while fixing the emission control signal Em. The rise of the S1 selection signal is controlled by the RST signal to the N-th latch unit. Embodiment 4 FIG. 16 illustrates a configuration example of a trimming controller 453. The trimming controller 453 can be implemented with the latch circuit illustrated in FIG. 14. The latch unit 300 in FIG. 16 and the latch unit 300 in FIG. 14 are the same in circuit configuration and differ in their input and output signals. As illustrated in FIG. 16, the STV1 signal is input to the SET terminal 301. The trimming signal (STRIM signal) is input to the RST terminal 302. The trimmed STV1 signal is output from the Q terminal 303. FIG. 17 is a truth table for the trimming controller 453 in FIG. 16. Since the SET terminal 301 and the RST terminal 302 respectively receive the STV1 signal and the STRIM signal as described above, FIG. 17 shows those signals in place of the names of the terminals. In the truth table, L represents a logical Low level and H represents a logical High level. The High potential levels of the STV1 signal and the STRIM signal correspond to logical Low, and the Low potential levels of those signals correspond to logical High. FIG. 18 is a timing chart of the input and output signals to and from the trimming controller 453. At a time T11, the STRIM signal changes from a logical H-level (Low potential level) to a logical L-level (High potential level). At a subsequent time T12, the STV1 signal and the trimmed STV1 signal change from the logical L-level (High potential level) to a logical H-level (Low potential level). At a subsequent time T13, the STRIM signal changes from the logical L-level (High potential level) to a logical H-level (Low potential level). In response, the trimmed STV1 signal changes from the logical H-level (Low potential level) to a logical L-level (High potential level). At a subsequent time T14, the STV1 signal changes from the logical H-level (Low potential level) to a logical L-level (High potential level). After passage of an STV1 pulse, that is, later than the time T14, the STRIM signal can be either H or L (don't care). What is required is that the STRIM signal be set to a logical L-level at some time prior to the time T12 at which the STV1 signal for the next frame is input. By utilizing a simple latch circuit as the trimming controller, the circuit area required for the trimming controller 453 can be minimized.
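The FIG. 18 timing amounts to gating the STV1 pulse with the STRIM edge. A behavioral sketch in terms of the times T12 to T14 above (plain numbers on a common time axis, widths in any consistent unit; an assumption-level model, not the disclosed circuit):

    def trimmed_stv1_window(t12, t13, t14):
        # FIG. 18: the trimmed STV1 pulse spans T12..T13 instead of
        # T12..T14, so W2 = T13 - T12 is shorter than W1 = T14 - T12
        # by the trimming width.
        assert t12 < t13 <= t14
        return t12, t13

    # Example: T12 = 0, T13 = 3, T14 = 5 -> W1 = 5H, W2 = 3H (trim = 2H)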
Embodiment 5 Hereinafter, a technique to reduce the variation in brightness caused by changing the threshold compensation period (Vth compensation period) is described. FIG. 19 schematically illustrates relations between the data voltage to a pixel circuit and the driving current (Ioled) of an OLED element under different threshold compensation periods. The horizontal axis of the graph of FIG. 19 represents the data voltage and the vertical axis represents the log value of the driving current. As a general characteristic of an OLED pixel circuit having a Vth compensation function, the brightness (the level of the driving current) corresponding to the supplied data voltage varies with the length of the threshold compensation period. Specifically, the brightness decreases as the threshold compensation period becomes longer. This tendency is especially strong when displaying at lower brightness levels. Accordingly, the variation in the data voltage-brightness characteristic caused by changing the threshold compensation period can be reduced by adjusting the data voltage. An embodiment of this specification prepares a data voltage adjustment table specifying a scaled voltage for each selectable threshold compensation period. FIG. 20 provides a configuration example of a data voltage adjustment table. The data voltage adjustment table includes different threshold compensation periods and a scaled voltage for each threshold compensation period. The scaled voltage consists of the data voltages corresponding to the individual luminous intensity levels. In the configuration example of FIG. 20, luminous intensity levels of 1 to 255 are defined, and data voltages are provided for the individual combinations of a luminous intensity level and a threshold compensation period. The control circuit of an OLED display device 10 determines the data voltage to be output from the data driver 421 with reference to the data voltage adjustment table, based on the luminous intensity level specified in the video data and the threshold compensation period. As a result, the variation in brightness caused by changing the threshold compensation period can be made small. FIG. 21 illustrates an example of the functional configuration of an OLED display device having a function of adjusting the data voltage depending on the threshold compensation period. The following mainly describes differences from the configuration example in FIG. 8. A control flag signal for selecting the optimum threshold compensation period determined by the trimming width calculator 452 is transferred to the data driver 421 through the timing controller 451. The data driver 421 holds the data voltage adjustment table described with reference to FIG. 20. The data driver 421 selects the scaled voltage curve associated with the threshold compensation period indicated by the control flag signal from the plurality of scaled voltage curves in the data voltage adjustment table. The data driver 421 determines the data voltage corresponding to the luminous intensity level calculated from the video data in accordance with the selected scaled voltage curve. As set forth above, embodiments of this disclosure have been described; however, this disclosure is not limited to the foregoing embodiments. Those skilled in the art can easily modify, add, or convert each element in the foregoing embodiments within the scope of this disclosure. A part of the configuration of one embodiment can be replaced with a configuration of another embodiment, and a configuration of an embodiment can be incorporated into a configuration of another embodiment.
| 55,638 |
11862088 | DETAILED DESCRIPTION The invention will now be described more fully hereinafter with reference to the accompanying drawings, in which various embodiments of the invention are shown. This invention may, however, be embodied in different forms and should not be construed as limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the invention to those skilled in the art. It will also be understood that when a layer is referred to as being “on” another layer or substrate, it can be directly on the other layer or substrate, or intervening layers may also be present. The same reference numbers indicate the same components throughout the specification. It will be understood that, although the terms “first,” “second,” etc. may be used herein to describe various elements, these elements should not be limited by these terms. These terms are only used to distinguish one element from another element. For instance, a first element discussed below could be termed a second element without departing from the teachings of the invention. Similarly, the second element could also be termed the first element. The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting. As used herein, “a,” “an,” “the,” and “at least one” do not denote a limitation of quantity, and are intended to include both the singular and plural, unless the context clearly indicates otherwise. For example, “an element” has the same meaning as “at least one element,” unless the context clearly indicates otherwise. “At least one” is not to be construed as limiting “a” or “an.” “Or” means “and/or.” As used herein, the term “and/or” includes any and all combinations of one or more of the associated listed items. It will be further understood that the terms “comprises” and/or “comprising,” or “includes” and/or “including,” when used in this specification, specify the presence of stated features, regions, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, regions, integers, steps, operations, elements, components, and/or groups thereof. Furthermore, relative terms, such as “lower” or “bottom” and “upper” or “top,” may be used herein to describe one element's relationship to another element as illustrated in the figures. It will be understood that relative terms are intended to encompass different orientations of the device in addition to the orientation depicted in the figures. For example, if the device in one of the figures is turned over, elements described as being on the “lower” side of other elements would then be oriented on the “upper” sides of the other elements. The term “lower” can, therefore, encompass both an orientation of “lower” and “upper,” depending on the particular orientation of the figure. Similarly, if the device in one of the figures is turned over, elements described as “below” or “beneath” other elements would then be oriented “above” the other elements. The terms “below” or “beneath” can, therefore, encompass both an orientation of above and below. Each of the features of the various embodiments of the disclosure may be combined with one another, in part or in whole, and various technical interconnections and operations of these features are possible. The embodiments may be implemented independently of each other or may be implemented together in association.
Unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this disclosure belongs. It will be further understood that terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the relevant art and the disclosure, and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein. Embodiments are described herein with reference to cross section illustrations that are schematic illustrations of idealized embodiments. As such, variations from the shapes of the illustrations as a result, for example, of manufacturing techniques and/or tolerances, are to be expected. Thus, embodiments described herein should not be construed as limited to the particular shapes of regions as illustrated herein but are to include deviations in shapes that result, for example, from manufacturing. For example, a region illustrated or described as flat may, typically, have rough and/or nonlinear features. Moreover, sharp angles that are illustrated may be rounded. Thus, the regions illustrated in the figures are schematic in nature and their shapes are not intended to illustrate the precise shape of a region and are not intended to limit the scope of the claims. Hereinafter, embodiments of the invention will be described in detail with reference to the accompanying drawings. FIG.1is a plan view illustrating a display device according to an embodiment. Referring toFIG.1, a display device10is a device that displays a moving image or a still image, and may be used as or define a display screen of each of various products such as televisions, laptop computers, monitors, billboards, and Internet of Things (“IOT”) devices as well as portable electronic devices such as mobile phones, smartphones, tablet personal computers (“PC”s), smart watches, watch phones, mobile communication terminals, electronic notebooks, electronic books, portable multimedia players (“PMP”s), navigation devices, and ultra mobile PCs (“UMPC”s). An embodiment of the display device10may be a light emitting display device such as an organic light emitting display device using an organic light emitting diode, a quantum dot light emitting display device including a quantum dot light emitting layer, an inorganic light emitting display device including an inorganic semiconductor, or a micro light emitting display device using a micro or nano light emitting diode (micro LED or nano LED). Hereinafter, for convenience of description, embodiments where the display device10is the organic light emitting display device will be described in detail, but the disclosure is not limited thereto. In an embodiment, the display device10includes a display panel100, a display driving circuit200, and a circuit board300. The display panel100may be in a rectangular shape, in a plan view, having short sides in a first direction DR1and long sides in a second direction DR2crossing the first direction DR1. A corner where the short side in the first direction DR1and the long side in the second direction DR2meet may be right-angled or rounded with a predetermined curvature. The shape of the display panel100in a plan view is not limited to the rectangular shape, and may be a polygonal shape, a circular shape, or an elliptical shape. The display panel100may be flat, but is not limited thereto. 
In an embodiment, for example, the display panel100may include curved surface parts formed at left and right distal ends thereof and having a constant curvature or a variable curvature. In an embodiment, the display panel100may be flexible or flexibly formed to be bent, folded, or rolled. The display panel100may include a display area DA that displays an image and a non-display area NDA that does not display an image. The display area DA may include display pixels SPX (seeFIG.2) that display the image. The display area DA may occupy most of an area of the display panel100. The display area DA may be disposed at a center of the display panel100. The non-display area NDA may be disposed adjacent to the display area DA. The non-display area NDA may be disposed to surround the display area DA. The non-display area NDA may be an edge area of the display panel100. The non-display area NDA may include a first dummy area DMA1, a second dummy area DMA2, a first scan driver SDC1, a second scan driver SDC2, and a pad area PDA. The first dummy area DMA1may be disposed on one side (e.g., the left side) of the display area DA, and the second dummy area DMA2may be disposed on another side (e.g., an opposing side or the right side) of the display area DA. The first dummy area DMA1and the second dummy area DMA2may be areas defined or designed in consideration of a deposition quality of display emission layers of display pixels SPX (seeFIG.2) disposed at edges of a fine metal mask when the display emission layers of the display pixels SPX (seeFIG.2) of the display area DA are formed using the fine metal mask. Therefore, dummy emission layers DEL including substantially a same material as the display emission layers of the display pixels SPX (seeFIG.2) may be disposed in the first dummy area DMA1and the second dummy area DMA2. The first scan driver SDC1may be disposed adjacent to the first dummy area DMA1. In an embodiment, for example, the first scan driver SDC1may be disposed on the left side of the first dummy area DMA1. The second scan driver SDC2may be disposed adjacent to the second dummy area DMA2. In an embodiment, for example, the second scan driver SDC2may be disposed on the right side of the second dummy area DMA2. Each of the first scan driver SDC1and the second scan driver SDC2may include a scan signal output unit connected to scan lines of the display area DA. The scan lines may include scan write lines GWLk (seeFIG.2) and scan initialization lines GILk (seeFIG.2). Each of the first scan driver SDC1and the second scan driver SDC2may further include an emission signal output unit connected to emission lines EMLk (seeFIG.2) of the display area DA. The first scan driver SDC1may be connected to display pads PD of the pad area PDA through first scan control lines GCL1. The second scan driver SDC2may be connected to the display pads PD of the pad area PDA through second scan control lines GCL2. Accordingly, each of the first scan driver SDC1and the second scan driver SDC2may be electrically connected to the display driving circuit200through the display pads PD of the pad area PDA and the circuit board300. The scan signal output unit of each of the first scan driver SDC1and the second scan driver SDC2may receive scan control signals from the display driving circuit200, generate scan signals based on the scan control signals, and output the scan signals to the scan lines. 
In addition, the emission signal output unit of each of the first scan driver SDC1and the second scan driver SDC2may receive emission control signals from the display driving circuit200, generate emission signals based on the emission control signals, and output the emission signals to the emission lines. A first power line VSL may be disposed in the non-display area NDA. The first power line VSL may be disposed to surround the display area DA. The first power line VSL may be disposed on the lower side and the upper side of the display area DA, in the first dummy area DMA1, and in the second dummy area DMA2. The first power line VSL may be connected to the display pads PD of the pad area PDA. That is, the first power line VSL may be electrically connected to the circuit board300through the display pads PD of the pad area PDA. Therefore, the first power line VSL may receive a first power voltage from the circuit board300. First power connection lines VSCL may be disposed in the display area DA, the first dummy area DMA1, and the second dummy area DMA2. The first power connection lines VSCL may extend in the first direction DR1. The first power connection lines VSCL may be connected to the first power line VSL in each of the first dummy area DMA1and the second dummy area DMA2. Accordingly, the first power voltage of the first power line VSL may be supplied to the first power connection lines VSCL. The pad area PDA may be disposed on one side (e.g., the lower side) of the display panel100. The pad area PDA may include a plurality of display pads PD. The display driving circuit200may generate signals and voltages for driving the display panel100. The display driving circuit200may be formed as an integrated circuit (“IC”) and attached onto the circuit board300in a chip on film (“COF”) manner. The circuit board300may be attached to the pad area PDA disposed at one end of the display panel100. Accordingly, the circuit board300may be electrically connected to the display panel100. The display panel100may receive power voltages through the circuit board300, and may receive the scan control signals and data voltages of the display driving circuit200. The circuit board300may be a flexible film such as a chip on film. In an embodiment, as illustrated inFIG.1, the first power line VSL is not disposed at edges of the display panel100, but may be disposed in the first dummy area DMA1and the second dummy area DMA2in which the dummy emission layers DEL are disposed. Therefore, in such an embodiment, first power lines disposed on the left side of the first scan driver SDC1and on the right side of the second scan driver SDC2may be omitted, and thus, a width of the non-display area may be decreased. FIG.2is a circuit diagram illustrating a display pixel of a display area according to an embodiment. Referring toFIG.2, an embodiment of the display pixel SPX may be connected to a k-th (k is a positive integer) scan initialization line GILk, a k-th scan write line GWLk, and a k-th emission line EMLk. In such an embodiment, the display pixel SPX may be connected to a first power line VSL to which a first power voltage is supplied, a second power line VDL to which a second power voltage is applied, and an initialization voltage line VIL to which an initialization voltage is supplied. The display pixel SPX may include a light emitting unit ELU and a pixel driver DDU. The light emitting unit ELU may include a light emitting element LE. 
The pixel driver DDU may supply a driving voltage for driving the light emitting element LE to a pixel electrode of the light emitting element LE. The pixel driver DDU may include a driving transistor DT, switch elements, and a capacitor CST1. The switch elements include first to sixth transistors ST1, ST2, ST3, ST4, ST5, and ST6. The driving transistor DT may include a gate electrode, a first electrode, and a second electrode. The driving transistor DT controls a drain-source current (hereinafter, referred to as a “driving current”) flowing between the first electrode and the second electrode in response to a data voltage applied to the gate electrode. The light emitting element LE emits light corresponding to the driving current. The larger the driving current, the larger the amount of light emitted from the light emitting element LE. In an embodiment, the light emitting element LE may be an organic light emitting diode including an organic light emitting layer disposed between an anode electrode and a cathode electrode. Alternatively, the light emitting element LE may be an inorganic light emitting element including an inorganic semiconductor disposed between an anode electrode and a cathode electrode. Alternatively, the light emitting element LE may be a quantum dot light emitting element including a quantum dot light emitting layer disposed between an anode electrode and a cathode electrode. Alternatively, the light emitting element LE may be a micro light emitting element including a micro LED disposed between an anode electrode and a cathode electrode. An anode electrode of the light emitting element LE may be connected to a first electrode of the fourth transistor ST4and a second electrode of the sixth transistor ST6, and a cathode electrode of the light emitting element LE may be connected to the first power line VSL. A parasitic capacitance Cel may be formed or connected between the anode electrode and the cathode electrode of the light emitting element LE. The anode electrode of the light emitting element LE may be one of the first pixel electrode PXE1, the second pixel electrode PXE2, and the third pixel electrode PXE3illustrated inFIGS.4and5. In addition, the cathode electrode of the light emitting element LE may be an island common electrode ICE illustrated inFIGS.4and5. The first transistor ST1is turned on by a scan initialization signal of the k-th scan initialization line GILk to connect the gate electrode of the driving transistor DT to the initialization voltage line VIL. Accordingly, a third power voltage of the initialization voltage line VIL may be applied to the gate electrode of the driving transistor DT. A gate electrode of the first transistor ST1may be connected to the k-th scan initialization line GILk, a first electrode of the first transistor ST1may be connected to the gate electrode of the driving transistor DT, and a second electrode of the first transistor ST1may be connected to the initialization voltage line VIL. The second transistor ST2is turned on by a scan write signal of the k-th scan write line GWLk to connect the first electrode of the driving transistor DT to a j-th (j is a positive integer) data line Dj. Accordingly, a data voltage of the j-th data line Dj may be applied to the first electrode of the driving transistor DT.
A gate electrode of the second transistor ST2may be connected to the k-th scan write line GWLk, a first electrode of the second transistor ST2may be connected to the first electrode of the driving transistor DT, and a second electrode of the second transistor ST2may be connected to the j-th data line Dj. The j-th data line Dj may be connected to the display pad PD. The third transistor ST3is turned on by the scan write signal of the k-th scan write line GWLk to connect the gate electrode and a second electrode of the driving transistor DT to each other. When the gate electrode and the second electrode of the driving transistor DT are connected to each other, the driving transistor DT is driven as a diode. A gate electrode of the third transistor ST3may be connected to the k-th scan write line GWLk, a first electrode of the third transistor ST3may be connected to the second electrode of the driving transistor DT, and a second electrode of the third transistor ST3may be connected to the gate electrode of the driving transistor DT. The fourth transistor ST4is turned on by the scan write signal of the k-th scan write line GWLk to connect the anode electrode of the light emitting element LE to the initialization voltage line VIL. The third power voltage of the initialization voltage line VIL may be applied to the anode electrode of the light emitting element LE. A gate electrode of the fourth transistor ST4may be connected to the k-th scan write line GWLk, a first electrode of the fourth transistor ST4may be connected to the anode electrode of the light emitting element LE, and a second electrode of the fourth transistor ST4may be connected to the initialization voltage line VIL. Alternatively, the third transistor ST3may be turned on by the scan initialization signal of the k-th scan initialization line GILk. In such an embodiment, the gate electrode of the third transistor ST3may be connected to the k-th scan initialization line GILk. The fifth transistor ST5is turned on by an emission signal of the k-th emission line EMLk to connect the first electrode of the driving transistor DT to the second power line VDL. A gate electrode of the fifth transistor ST5may be connected to the k-th emission line EMLk, a first electrode of the fifth transistor ST5may be connected to the second power line VDL, and a second electrode of the fifth transistor ST5may be connected to the first electrode of the driving transistor DT. The sixth transistor ST6is disposed between the second electrode of the driving transistor DT and the anode electrode of the light emitting element LE. The sixth transistor ST6is turned on by an emission control signal of the k-th emission line EMLk to connect the second electrode of the driving transistor DT to the anode electrode of the light emitting element LE. A gate electrode of the sixth transistor ST6may be connected to the k-th emission line EMLk, a first electrode of the sixth transistor ST6may be connected to the second electrode of the driving transistor DT, and the second electrode of the sixth transistor ST6may be the anode electrode of the light emitting element LE. When both the fifth transistor ST5and the sixth transistor ST6are turned on, the driving current of the driving transistor DT corresponding to the data voltage applied to the gate electrode of the driving transistor DT may flow to the light emitting element LE. The capacitor CST1is formed or connected between the gate electrode of the driving transistor DT and the second power line VDL. 
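The three-phase drive scheme described above (initialization through the first transistor ST1, data write with the driving transistor DT diode-connected through the third transistor ST3, and emission through the fifth and sixth transistors ST5 and ST6) can be summarized behaviorally. The following sketch is an editorial illustration only, not part of the patent: it assumes an idealized square-law P-type device, and the function name, variable names, and voltage values are all hypothetical. It shows why storing the data voltage plus the threshold voltage on the capacitor CST1 makes the emission current insensitive to the threshold voltage of the driving transistor DT.

```python
# Minimal behavioral sketch of the 6T1C pixel driver of FIG.2 (editorial
# illustration; idealized square-law model, hypothetical names and values).

def driving_current(v_data, v_dd, v_th, k=1.0):
    """Emission-phase current of the driving transistor DT (P-type, v_th < 0).

    Write phase: ST3 diode-connects DT, so its gate settles at v_data + v_th,
    and CST1 holds that voltage relative to the second power line VDL (v_dd).
    Emission phase: ST5/ST6 are on, the source sits at v_dd, and
        I = (k / 2) * (v_sg - |v_th|) ** 2, with v_sg = v_dd - (v_data + v_th),
    which reduces to (k / 2) * (v_dd - v_data) ** 2, independent of v_th.
    """
    v_gate = v_data + v_th       # gate voltage stored on CST1 after the write phase
    v_sg = v_dd - v_gate         # source-gate voltage during emission
    v_overdrive = v_sg - abs(v_th)
    return 0.5 * k * v_overdrive ** 2 if v_overdrive > 0 else 0.0

# Two devices with different thresholds emit the same current:
print(driving_current(v_data=3.0, v_dd=5.0, v_th=-1.0))  # 2.0
print(driving_current(v_data=3.0, v_dd=5.0, v_th=-1.5))  # 2.0
```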
A first capacitor electrode of the capacitor CST1may be connected to the gate electrode of the driving transistor DT, and a second capacitor electrode of the capacitor CST1may be connected to the second power line VDL. In an embodiment, the first electrode of each of the first to sixth transistors ST1, ST2, ST3, ST4, ST5, and ST6and the driving transistor DT is a source electrode, and the second electrode of each of the first to sixth transistors ST1, ST2, ST3, ST4, ST5, and ST6and the driving transistor DT is a drain electrode. Alternatively, the first electrode of each of the first to sixth transistors ST1, ST2, ST3, ST4, ST5, and ST6and the driving transistor DT is a drain electrode, and the second electrode of each of the first to sixth transistors ST1, ST2, ST3, ST4, ST5, and ST6and the driving transistor DT is a source electrode. An active layer of each of the first to sixth transistors ST1, ST2, ST3, ST4, ST5, and ST6and the driving transistor DT may include or be formed of at least one selected from polysilicon, amorphous silicon, and an oxide semiconductor. In an embodiment, as shown inFIG.2, the first to sixth transistors ST1, ST2, ST3, ST4, ST5, and ST6, and the driving transistor DT may be P-type metal oxide semiconductor field effect transistors (“MOSFET”s), but the disclosure is not limited thereto. In an alternative embodiment, for example, the first to sixth transistors ST1, ST2, ST3, ST4, ST5, and ST6, and the driving transistor DT may also be N-type MOSFETs. Alternatively, at least one of the first to sixth transistors ST1, ST2, ST3, ST4, ST5, and ST6may be an N-type MOSFET. FIG.3is an illustrative view illustrating a scan signal output unit of a first scan driver according to an embodiment. Referring toFIG.3, a scan signal output unit SOU of a first scan driver SDC1may include a plurality of stages STA1, STA2, STA3, STA4, . . . , STAm−1, STAm, and STAm+1 (m is a positive integer). Each of the plurality of stages STA1, STA2, STA3, STA4, . . . , STAm−1, STAm, and STAm+1 may include a start signal input unit ST, a reset signal input unit RT, a clock signal input unit CKT, a scan signal output unit SOUT, and a carry signal output unit COUT. The start signal input unit ST of each of the plurality of stages STA1, STA2, STA3, STA4, . . . , STAm−1, STAm, and STAm+1 may be connected to a scan start line STRL or a carry signal output unit COUT of the previous stage. In an embodiment, for example, a start signal input unit ST of a first stage STA1may be connected to the scan start line STRL to which a scan start signal is input. In addition, the start signal input unit ST of each of the plurality of stages STA2, STA3, STA4, . . . , STAm−1, STAm, and STAm+1 except for the first stage STA1may be connected to the carry signal output unit COUT of the previous stage. In an embodiment, for example, a start signal input unit ST of a second stage STA2may be connected to the carry signal output unit COUT of the first stage STA1, and a start signal input unit ST of a third stage STA3may be connected to the carry signal output unit COUT of the second stage STA2. The reset signal input unit RT of each of the plurality of stages STA1, STA2, STA3, STA4, . . . , STAm−1, STAm, and STAm+1 may be connected to a carry signal output unit COUT of a subsequent stage. In an embodiment, for example, a reset signal input unit RT of the first stage STA1may be connected to a carry signal output unit COUT of a fifth stage STA5.
The clock signal input unit CKT of each of the plurality of stages STA1, STA2, STA3, STA4, . . . , STAm−1, STAm, and STAm+1 may be connected to a corresponding or predetermined one of clock lines CKL1, CKL2, CKL3, and CKL4. The plurality of stages STA1, STA2, STA3, STA4, . . . , STAm−1, STAm, and STAm+1 may be alternately connected to the clock lines CKL1, CKL2, CKL3, and CKL4. In an embodiment, for example, a clock signal input unit CKT of the first stage STA1may be connected to a first clock line CKL1, and a clock signal input unit CKT of the second stage STA2may be connected to a second clock line CKL2. In such an embodiment, a clock signal input unit CKT of the third stage STA3may be connected to a third clock line CKL3, and a clock signal input unit CKT of a fourth stage STA4may be connected to a fourth clock line CKL4. The scan signal output unit SOUT of each of the plurality of stages STA1, STA2, STA3, STA4, . . . , STAm−1, STAm, and STAm+1 may be connected to a scan write line and a scan initialization line corresponding thereto. In an embodiment, for example, a scan signal output unit SOUT of the first stage STA1may be connected to a first scan initialization line GIL1. In such an embodiment, a scan signal output unit SOUT of the second stage STA2may be connected to a second scan initialization line GIL2and a first scan write line GWL1. In such an embodiment, a scan signal output unit SOUT of the third stage STA3may be connected to a third scan initialization line GIL3and a second scan write line GWL2. In such an embodiment, a scan signal output unit SOUT of the fourth stage STA4may be connected to a fourth scan initialization line GIL4and a third scan write line GWL3. In such an embodiment, a scan signal output unit SOUT of a fifth stage STA5may be connected to a fifth scan initialization line GIL5and a fourth scan write line GWL4. In such an embodiment, a scan signal output unit SOUT of an (m−3)-th stage STAm−3 may be connected to an (m−3)-th scan initialization line GILm−3 and an (m−4)-th scan write line GWLm−4. In such an embodiment, a scan signal output unit SOUT of an (m−2)-th stage STAm−2 may be connected to an (m−2)-th scan initialization line GILm−2 and an (m−3)-th scan write line GWLm−3. In such an embodiment, a scan signal output unit SOUT of an (m−1)-th stage STAm−1 may be connected to an (m−1)-th scan initialization line GILm−1 and an (m−2)-th scan write line GWLm−2. In such an embodiment, a scan signal output unit SOUT of an m-th stage STAm may be connected to an m-th scan initialization line GILm and an (m−1)-th scan write line GWLm−1. In such an embodiment, a scan signal output unit SOUT of an (m+1)-th stage STAm+1 may be connected to an m-th scan write line GWLm. The carry signal output unit COUT of each of the plurality of stages STA1, STA2, STA3, STA4, . . . , STAm−1, STAm, and STAm+1 may be connected to a reset signal input unit RT of a previous stage and a start signal input unit ST of the subsequent stage. However, the carry signal output unit COUT of each of the first stage STA1, the second stage STA2, the third stage STA3, and the fourth stage STA4may be connected only to the start signal input unit ST of the subsequent stage. In an embodiment, a scan signal output unit of a second scan driver SDC2may be substantially the same as the scan signal output unit of the first scan driver SDC1described with reference toFIG.3, and thus, any repetitive detailed description thereof will be omitted.
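The chaining rules just described (start from the previous carry or the scan start line STRL, reset from the carry signal output unit four stages later, clock lines CKL1 to CKL4 assigned cyclically, and each scan signal output unit SOUT driving the i-th scan initialization line and the (i−1)-th scan write line) can be tabulated mechanically. The sketch below is a hypothetical helper for illustration only; the function and field names are not from the patent.

```python
# Hypothetical wiring table for the stage chain of FIG.3 (illustrative names).

def build_stage_wiring(m):
    """Stage connections for STA1 .. STAm+1 as described for FIG.3."""
    stages = []
    for i in range(1, m + 2):
        stages.append({
            "stage": f"STA{i}",
            # STA1 starts from the scan start line; later stages from the previous carry.
            "start": "STRL" if i == 1 else f"COUT(STA{i - 1})",
            # Reset comes from the carry four stages later (e.g., STA1 from STA5);
            # the last four stages have no such later carry available.
            "reset": f"COUT(STA{i + 4})" if i + 4 <= m + 1 else None,
            # Clock lines CKL1..CKL4 are assigned cyclically.
            "clock": f"CKL{(i - 1) % 4 + 1}",
            # SOUT of stage i drives GILi (except STAm+1) and GWL(i-1) (except STA1).
            "init_line": f"GIL{i}" if i <= m else None,
            "write_line": f"GWL{i - 1}" if i >= 2 else None,
        })
    return stages

for row in build_stage_wiring(m=6)[:3]:   # first three stages, for brevity
    print(row)
```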
FIGS.4and5are plan views illustrating a display area according to an embodiment in detail.FIG.6is a plan view illustrating a first island pattern ofFIG.4in detail.FIGS.7and8are perspective views illustrating first and second island patterns and a second connection pattern according to an embodiment. InFIG.4, island patterns ISP1to ISP4, connection patterns CNP1to CNP8, and cutout parts CUP1to CUP4of the display area DA when the display device10is contracted (or unstretched) are illustrated. InFIG.5, island patterns ISP1to ISP4, connection patterns CNP1to CNP8, and cutout parts CUP1to CUP4of the display area DA when the display device10is stretched are illustrated. Referring toFIGS.4to6, a substrate SUB may include island patterns ISP and connection patterns CNP defined by the cutout parts CUP1to CUP4which are cut out in the display area DA. The cutout parts CUP1to CUP4may be areas in which the substrate SUB is removed by a patterning process such as a laser process or a dry etching process. The display area DA may include first to fourth island patterns ISP1to ISP4, first to eighth connection patterns CNP1to CNP8, and first to fourth cutout parts CUP1to CUP4. The first to fourth island patterns ISP1to ISP4may be spaced apart from each other. The first island patterns ISP1and the second island patterns ISP2may be alternately disposed in the first direction DR1. The third island patterns ISP3and the fourth island patterns ISP4may be alternately disposed in the first direction DR1. The first island patterns ISP1and the third island patterns ISP3may be alternately disposed in the second direction DR2. The second island patterns ISP2and the fourth island patterns ISP4may be alternately disposed in the second direction DR2. The first connection pattern CNP1may connect the first island pattern ISP1and the second island pattern ISP2disposed on the left side of the first island pattern ISP1to each other. The first connection pattern CNP1may be connected to the upper side of the first island pattern ISP1. The first connection pattern CNP1may extend in the first direction DR1. The second connection pattern CNP2may connect the first island pattern ISP1and the second island pattern ISP2disposed on the right side of the first island pattern ISP1to each other. The second connection pattern CNP2may be connected to the lower side of the first island pattern ISP1. The second connection pattern CNP2may extend in the first direction DR1. The third connection pattern CNP3may connect the first island pattern ISP1and the third island pattern ISP3disposed on the upper side of the first island pattern ISP1to each other. The third connection pattern CNP3may be connected to the right side of the first island pattern ISP1. The third connection pattern CNP3may extend in the second direction DR2. The fourth connection pattern CNP4may connect the first island pattern ISP1and the third island pattern ISP3disposed on the lower side of the first island pattern ISP1to each other. The fourth connection pattern CNP4may be connected to the left side of the first island pattern ISP1. The fourth connection pattern CNP4may extend in the second direction DR2. The fifth connection pattern CNP5may connect the second island pattern ISP2and the fourth island pattern ISP4disposed on the upper side of the second island pattern ISP2to each other. The fifth connection pattern CNP5may be connected to the left side of the second island pattern ISP2. The fifth connection pattern CNP5may extend in the second direction DR2.
The sixth connection pattern CNP6may connect the second island pattern ISP2and the fourth island pattern ISP4disposed on the lower side of the second island pattern ISP2to each other. The sixth connection pattern CNP6may be connected to the right side of the second island pattern ISP2. The sixth connection pattern CNP6may extend in the second direction DR2. The seventh connection pattern CNP7may connect the third island pattern ISP3and the fourth island pattern ISP4disposed on the left side of the third island pattern ISP3to each other. The seventh connection pattern CNP7may be connected to the lower side of the third island pattern ISP3. The seventh connection pattern CNP7may extend in the first direction DR1. The eighth connection pattern CNP8may connect the third island pattern ISP3and the fourth island pattern ISP4disposed on the right side of the third island pattern ISP3to each other. The eighth connection pattern CNP8may be connected to the upper side of the fourth island pattern ISP4. The eighth connection pattern CNP8may extend in the first direction DR1. The first cutout part CUP1may be defined between the first island pattern ISP1and the second island pattern ISP2, between the third connection pattern CNP3and the fifth connection pattern CNP5, between the first island pattern ISP1and the second connection pattern CNP2, and between the second island pattern ISP2and the second connection pattern CNP2. In addition, the first cutout part CUP1may be defined between the third island pattern ISP3and the fourth island pattern ISP4, between the third island pattern ISP3and the eighth connection pattern CNP8, and between the fourth island pattern ISP4and the eighth connection pattern CNP8. The second cutout part CUP2may be defined between the first island pattern ISP1and the third island pattern ISP3, between the first island pattern ISP1and the fourth connection pattern CNP4, between the third island pattern ISP3and the fourth connection pattern CNP4, and between the second connection pattern CNP2and the eighth connection pattern CNP8. In addition, the second cutout part CUP2may be defined between the second island pattern ISP2and the fourth island pattern ISP4, between the second island pattern ISP2and the sixth connection pattern CNP6, and between the fourth island pattern ISP4and the sixth connection pattern CNP6. The third cutout part CUP3may be defined between the first island pattern ISP1and the third island pattern ISP3, between the first island pattern ISP1and the third connection pattern CNP3, between the third island pattern ISP3and the third connection pattern CNP3, and between the first connection pattern CNP1and the seventh connection pattern CNP7. In addition, the third cutout part CUP3may be defined between the second island pattern ISP2and the fourth island pattern ISP4, between the second island pattern ISP2and the fifth connection pattern CNP5, and between the fourth island pattern ISP4and the fifth connection pattern CNP5. The fourth cutout part CUP4may be defined between the first island pattern ISP1and the second island pattern ISP2, between the first island pattern ISP1and the first connection pattern CNP1, between the second island pattern ISP2and the first connection pattern CNP1, and between the fourth connection pattern CNP4and the sixth connection pattern CNP6.
In addition, the fourth cutout part CUP4may be defined between the third island pattern ISP3and the fourth island pattern ISP4, between the third island pattern ISP3and the seventh connection pattern CNP7, and between the fourth island pattern ISP4and the seventh connection pattern CNP7. When the display device10is contracted as illustrated inFIG.7, the second connection pattern CNP2may be bent in a third direction DR3. When the display device10is stretched as illustrated inFIG.8, the second connection pattern CNP2may be unbent. Therefore, a length of the second connection pattern CNP2in the first direction DR1when the display device10is stretched in the first direction DR1may be greater than that when the display device10is contracted. Similarly, when the display device10is contracted, each of the first and third to eighth connection patterns CNP1and CNP3to CNP8may be bent in the third direction DR3. When the display device10is stretched, each of the first and third to eighth connection patterns CNP1and CNP3to CNP8may be unbent. Therefore, a length of the third connection pattern CNP3in the second direction DR2, a length of the fourth connection pattern CNP4in the second direction DR2, a length of the fifth connection pattern CNP5in the second direction DR2, and a length of the sixth connection pattern CNP6in the second direction DR2when the display device10is stretched in the second direction DR2may be greater than those when the display device10is contracted. In addition, a length of the first connection pattern CNP1in the first direction DR1, a length of the seventh connection pattern CNP7in the first direction DR1, and a length of the eighth connection pattern CNP8in the first direction DR1when the display device10is stretched in the first direction DR1may be greater than those when the display device10is contracted. Therefore, a width of each of the cutout parts CUP1to CUP4when the display device10is stretched may be greater than a width of each of the cutout parts CUP1to CUP4when the display device10is contracted. Scan initialization lines GILk/GILk+1, scan write lines GWLk/GWLk+1, emission lines EMLk/EMLk+1, red data lines RDLj/RDLj+1/RDLj+2/RDLj+3, blue data lines BDLj/BDLj+1/BDLj+2/BDLj+3, green data lines GDLj/GDLj+1/GDLj+2/GDLj+3, a first power connection line VSCL, and a second power line VDL/VDL+1/VDL+2/VDL+3 may be disposed in each of the first to fourth island patterns ISP1, ISP2, ISP3, and ISP4in the display area DA. A k-th scan initialization line GILk, a k-th scan write line GWLk, a k-th emission line EMLk, and the first power connection line VSCL may sequentially pass (or extend or be linearly disposed) through the first connection pattern CNP1, the first island pattern ISP1, the second connection pattern CNP2, and the second island pattern ISP2. In addition, a (k+1)-th scan initialization line GILk+1, a (k+1)-th scan write line GWLk+1, a (k+1)-th emission line EMLk+1, and the first power connection line VSCL may sequentially pass through the seventh connection pattern CNP7, the third island pattern ISP3, the eighth connection pattern CNP8, and the fourth island pattern ISP4. That is, in the display area DA, the scan initialization lines GILk/GILk+1, the scan write lines GWLk/GWLk+1, and the emission lines EMLk/EMLk+1 may extend in a zigzag or winding form along the first direction DR1and be connected to the first scan driver SDC1and the second scan driver SDC2.
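Because each connection pattern has a fixed physical length and merely unbends out of the plane (the third direction DR3) as the display is stretched, the plan-view length change described above can be estimated with elementary geometry. The sketch below is an editorial simplification: the two-segment hinge model and all numbers are assumptions, not taken from the patent.

```python
# Illustrative model: a connection pattern as two rigid halves of total length L,
# hinged at a midpoint lifted out of the panel plane by a height h when contracted.
# Unbending (h -> 0) lengthens the plan-view span, widening the cutout parts.

import math

def plan_view_span(total_length, lift_height):
    """Plan-view span of a bent connection pattern of fixed physical length."""
    half = total_length / 2.0
    if lift_height >= half:
        raise ValueError("lift height cannot exceed half the pattern length")
    return 2.0 * math.sqrt(half ** 2 - lift_height ** 2)

L = 100.0                                    # arbitrary units
print(plan_view_span(L, lift_height=30.0))   # contracted: span = 80.0
print(plan_view_span(L, lift_height=0.0))    # stretched (unbent): span = 100.0
```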
In addition, the first power connection line VSCL may extend in a zigzag or winding form along the first direction DR1in the display area DA and be connected to the first power line VSL in each of the first dummy area DMA1and the second dummy area DMA2. Therefore, the first power voltage of the first power line VSL may be applied to the first power connection line VSCL. A j-th red data line RDLj, a j-th blue data line BDLj, a j-th green data line GDLj, a (j+2)-th red data line RDLj+2, a (j+2)-th blue data line BDLj+2, and a (j+2)-th green data line GDLj+2 may sequentially pass through the third connection pattern CNP3, the first island pattern ISP1, the fourth connection pattern CNP4, and the third island pattern ISP3. In addition, a (j+1)-th red data line RDLj+1, a (j+1)-th blue data line BDLj+1, a (j+1)-th green data line GDLj+1, a (j+3)-th red data line RDLj+3, a (j+3)-th blue data line BDLj+3, and a (j+3)-th green data line GDLj+3 may sequentially pass through the fifth connection pattern CNP5, the second island pattern ISP2, the sixth connection pattern CNP6, and the fourth island pattern ISP4. The second power line VDL/VDL+1/VDL+2/VDL+3 may receive a second power voltage. The second power line VDL/VDL+1/VDL+2/VDL+3 may sequentially pass through the third connection pattern CNP3, the first island pattern ISP1, the fourth connection pattern CNP4, and the third island pattern ISP3. In addition, the second power line VDL/VDL+1/VDL+2/VDL+3 may sequentially pass through the fifth connection pattern CNP5, the second island pattern ISP2, the sixth connection pattern CNP6, and the fourth island pattern ISP4. That is, in the display area DA, the red data lines RDLj/RDLj+1/RDLj+2/RDLj+3, the blue data lines BDLj/BDLj+1/BDLj+2/BDLj+3, the green data lines GDLj/GDLj+1/GDLj+2/GDLj+3, and the second power line VDL/VDL+1/VDL+2/VDL+3 may extend in a zigzag or winding form along the second direction DR2. A first light emitting unit ELU1of a first display pixel SPX1, a second light emitting unit ELU2of a second display pixel SPX2, and a third light emitting unit ELU3of a third display pixel SPX3may be disposed in each of the first to fourth island patterns ISP1, ISP2, ISP3, and ISP4in the display area DA. The first light emitting unit ELU1may be an area that emits light of a first color, for example, light of a red wavelength band. The second light emitting unit ELU2may be an area that emits light of a second color, for example, light of a blue wavelength band. The third light emitting unit ELU3may be an area that emits light of a third color, for example, light of a green wavelength band. The first light emitting unit ELU1, the second light emitting unit ELU2, and the third light emitting unit ELU3may be arranged in the first direction DR1. The first light emitting unit ELU1may be disposed on one side of the second light emitting unit ELU2, and the third light emitting unit ELU3may be disposed on another side (or an opposing side) of the second light emitting unit ELU2. An area of the second light emitting unit ELU2may be greater than an area of the first light emitting unit ELU1and an area of the third light emitting unit ELU3. The first pixel electrode PXE1of the first light emitting unit ELU1, the second pixel electrode PXE2of the second light emitting unit ELU2, and the third pixel electrode PXE3of the third light emitting unit ELU3may be arranged in the first direction DR1.
The first pixel electrode PXE1may be disposed on one side of the second pixel electrode PXE2, and the third pixel electrode PXE3may be disposed on another side (or an opposing side) of the second pixel electrode PXE2. An area of the second pixel electrode PXE2may be greater than an area of the first pixel electrode PXE1and an area of the third pixel electrode PXE3. In addition, the area of the first pixel electrode PXE1may be greater than the area of the first light emitting unit ELU1, the area of the second pixel electrode PXE2may be greater than the area of the second light emitting unit ELU2, and the area of the third pixel electrode PXE3may be greater than the area of the third light emitting unit ELU3. The first pixel electrode PXE1may be connected to a first pixel driver through a first pixel contact hole PCNT1, and may thus receive a driving voltage of the first pixel driver. The second pixel electrode PXE2may be connected to a second pixel driver through a second pixel contact hole PCNT2, and may thus receive a driving voltage of the second pixel driver. The third pixel electrode PXE3may be connected to a third pixel driver through a third pixel contact hole PCNT3, and may thus receive a driving voltage of the third pixel driver. In an embodiment, as illustrated inFIGS.4and5, the second pixel electrode PXE2of the second light emitting unit ELU2overlaps the scan initialization lines GILk/GILk+1, the scan write lines GWLk/GWLk+1, the emission lines EMLk/EMLk+1, and the first power connection line VSCL. In such an embodiment, the first pixel electrode PXE1of the first light emitting unit ELU1, the second pixel electrode PXE2of the second light emitting unit ELU2, and the third pixel electrode PXE3of the third light emitting unit ELU3overlap the red data lines RDLj/RDLj+1/RDLj+2/RDLj+3, the blue data lines BDLj/BDLj+1/BDLj+2/BDLj+3, the green data lines GDLj/GDLj+1/GDLj+2/GDLj+3, and the second power line VDL/VDL+1/VDL+2/VDL+3, but an embodiment of the disclosure is not limited thereto. In an embodiment, at least one selected from the first pixel electrode PXE1of the first light emitting unit ELU1, the second pixel electrode PXE2of the second light emitting unit ELU2, and the third pixel electrode PXE3of the third light emitting unit ELU3may overlap at least one selected from the scan initialization lines GILk/GILk+1, the scan write lines GWLk/GWLk+1, the emission lines EMLk/EMLk+1, the first power connection line VSCL, the red data lines RDLj/RDLj+1/RDLj+2/RDLj+3, the blue data lines BDLj/BDLj+1/BDLj+2/BDLj+3, the green data lines GDLj/GDLj+1/GDLj+2/GDLj+3, and the second power line VDL/VDL+1/VDL+2/VDL+3. The island common electrode ICE may be disposed in each of the first to fourth island patterns ISP1, ISP2, ISP3, and ISP4in the display area DA. In each of the first to fourth island patterns ISP1, ISP2, ISP3, and ISP4, the island common electrode ICE may be connected to a first common connection electrode CCU1through a first common contact hole CCNT1. In an embodiment, the island common electrode ICE may not be disposed in the first to eighth connection patterns CNP1to CNP8to prevent the island common electrode ICE from being damaged according to a change in length of each of the first to eighth connection patterns CNP1to CNP8. The first common connection electrode CCU1may be disposed in each of the first to fourth island patterns ISP1, ISP2, ISP3, and ISP4in the display area DA.
The first common connection electrode CCU1may be connected to the first power connection line VSCL through a second common contact hole CCNT2. Although not illustrated inFIGS.4to6, the first power connection line VSCL may overlap the first common connection electrode CCU1and the second common contact hole CCNT2in the third direction DR3. Therefore, the first power voltage of the first power line VSL may be supplied to the island common electrode ICE through the first power connection line VSCL and the first common connection electrode CCU1. The first common connection electrode CCU1may be disposed in or directly on a same layer as the first pixel electrode PXE1, the second pixel electrode PXE2, and the third pixel electrode PXE3. Therefore, the first common connection electrode CCU1may not overlap the first pixel electrode PXE1, the second pixel electrode PXE2, and the third pixel electrode PXE3. In an embodiment, as illustrated inFIGS.4to6, the first to fourth island patterns ISP1to ISP4partitioned by the cutout parts CUP1to CUP4may be connected to each other by the first to eighth connection patterns CNP1to CNP8, and the first to eighth connection patterns CNP1to CNP8may be in a bent state when the display device10is contracted, but may be in an unbent state when the display device10is stretched. Therefore, shapes of the first to fourth island patterns ISP1to ISP4do not change and lengths of the first to eighth connection patterns CNP1to CNP8change, such that a width of each of the cutout parts CUP1to CUP4may be increased or decreased. Accordingly, the display area DA may be effectively stretched and contracted. FIGS.9and10are plan views illustrating a first dummy area according to an embodiment in detail.FIG.11is a plan view illustrating a first island pattern ofFIG.9in detail. InFIG.9, island patterns ISP1to ISP4, connection patterns CNP1to CNP8, and cutout parts CUP1to CUP4of the first dummy area DMA1when the display device10is contracted are illustrated. InFIG.10, island patterns ISP1to ISP4, connection patterns CNP1to CNP8, and cutout parts CUP1to CUP4of the first dummy area DMA1when the display device10is stretched are illustrated. Referring toFIGS.9to11, the substrate SUB may include island patterns ISP and connection patterns CNP defined by the cutout parts CUP1to CUP4which are cut out in the first dummy area DMA1. The island patterns ISP and the connection patterns CNP of the first dummy area DMA1may be substantially the same as the island patterns ISP and the connection patterns CNP of the display area DA described above with reference toFIGS.4to6. Therefore, any repetitive detailed description of the island patterns ISP and the connection patterns CNP of the first dummy area DMA1will be omitted. Scan initialization lines GILk/GILk+1, scan write lines GWLk/GWLk+1, emission lines EMLk/EMLk+1, a first power connection line VSCL, and a first power line VSL may be disposed in each of first to fourth island patterns ISP1, ISP2, ISP3, and ISP4in the first dummy area DMA1. A k-th scan initialization line GILk, a k-th scan write line GWLk, and a k-th emission line EMLk may sequentially pass through the first connection pattern CNP1, the first island pattern ISP1, the second connection pattern CNP2, and the second island pattern ISP2. 
In addition, a (k+1)-th scan initialization line GILk+1, a (k+1)-th scan write line GWLk+1, and a (k+1)-th emission line EMLk+1 may sequentially pass through the seventh connection pattern CNP7, the third island pattern ISP3, the eighth connection pattern CNP8, and the fourth island pattern ISP4. That is, in the first dummy area DMA1, the scan initialization lines GILk/GILk+1, the scan write lines GWLk/GWLk+1, and the emission lines EMLk/EMLk+1 may extend in a zigzag or winding form along the first direction DR1and be connected to the first scan driver SDC1and the second scan driver SDC2. The first power line VSL may sequentially pass through the third connection pattern CNP3, the first island pattern ISP1, the fourth connection pattern CNP4, and the third island pattern ISP3. In addition, the first power line VSL may sequentially pass through the fifth connection pattern CNP5, the second island pattern ISP2, the sixth connection pattern CNP6, and the fourth island pattern ISP4. The first power line VSL may be connected to a second common connection electrode CCU2through a fourth common contact hole CCNT4in each of the first to fourth island patterns ISP1, ISP2, ISP3, and ISP4. That is, in the first dummy area DMA1, the first power line VSL may extend in a zigzag or winding form along the second direction DR2. In each of the first to fourth island patterns ISP1, ISP2, ISP3, and ISP4, the first power connection line VSCL may be branched from the first power line VSL. In the first dummy area DMA1, the first power connection line VSCL may extend in a zigzag or winding form along the first direction DR1. In an embodiment, for example, as illustrated inFIG.9, the first power connection line VSCL branched from the first power line VSL in the first island pattern ISP1may extend to the second connection pattern CNP2, the second island pattern ISP2, and the first connection pattern CNP1. In such an embodiment, the first power connection line VSCL branched from the first power line VSL in the first island pattern ISP1and extending to the second island pattern ISP2may be connected to the first power line VSL in the second island pattern ISP2. In an embodiment, as illustrated inFIG.9, the first power connection line VSCL branched from the first power line VSL in the third island pattern ISP3may extend to the eighth connection pattern CNP8, the fourth island pattern ISP4, and the seventh connection pattern CNP7. In such an embodiment, the first power connection line VSCL branched from the first power line VSL in the third island pattern ISP3and extending to the fourth island pattern ISP4may be connected to the first power line VSL in the fourth island pattern ISP4. A first dummy emission layer DEL1, a second dummy emission layer DEL2, and a third dummy emission layer DEL3may be disposed in each of the first to fourth island patterns ISP1, ISP2, ISP3, and ISP4in the first dummy area DMA1. The first dummy emission layer DEL1may include substantially a same material as a first display emission layer EL1of the first light emitting unit ELU1. The second dummy emission layer DEL2may include substantially a same material as a second display emission layer of the second light emitting unit ELU2. The third dummy emission layer DEL3may include substantially a same material as a third display emission layer of the third light emitting unit ELU3. The first dummy emission layer DEL1, the second dummy emission layer DEL2, and the third dummy emission layer DEL3may be arranged in the first direction DR1.
The first dummy emission layer DEL1may be disposed on one side of the second dummy emission layer DEL2, and the third dummy emission layer DEL3may be disposed on another side (or an opposing side) of the second dummy emission layer DEL2. An area of the second dummy emission layer DEL2may be greater than an area of the first dummy emission layer DEL1and an area of the third dummy emission layer DEL3. In an embodiment, as illustrated inFIGS.9and10, the second dummy emission layer DEL2overlaps the scan initialization lines GILk/GILk+1, the scan write lines GWLk/GWLk+1, the emission lines EMLk/EMLk+1, and the first power connection line VSCL. In such an embodiment, the first dummy emission layer DEL1, the second dummy emission layer DEL2, and the third dummy emission layer DEL3overlap the first power line VSL, but an embodiment of the disclosure is not limited thereto. At least one selected from the first dummy emission layer DEL1, the second dummy emission layer DEL2, and the third dummy emission layer DEL3may overlap at least one selected from the scan initialization lines GILk/GILk+1, the scan write lines GWLk/GWLk+1, the emission lines EMLk/EMLk+1, the first power connection line VSCL, the red data lines RDLj/RDLj+1/RDLj+2/RDLj+3, the blue data lines BDLj/BDLj+1/BDLj+2/BDLj+3, and the green data lines GDLj/GDLj+1/GDLj+2/GDLj+3. The second common connection electrode CCU2may be disposed in each of the first to fourth island patterns ISP1, ISP2, ISP3, and ISP4in the first dummy area DMA1. In the first dummy area DMA1, a dummy common electrode DCE of each of the first to fourth island patterns ISP1, ISP2, ISP3, and ISP4may be connected to the second common connection electrode CCU2through a third common contact hole CCNT3. In the first dummy area DMA1, the second common connection electrode CCU2of each of the first to fourth island patterns ISP1, ISP2, ISP3, and ISP4may be connected to the first power line VSL through the fourth common contact hole CCNT4. Although not illustrated inFIGS.9to11, the first power connection line VSCL may overlap the second common connection electrode CCU2and the fourth common contact hole CCNT4in the third direction DR3. Therefore, the first power voltage of the first power line VSL may be supplied to the dummy common electrode DCE through the first power connection line VSCL and the second common connection electrode CCU2. The second common connection electrode CCU2may be disposed in or directly on a same layer as the first pixel electrode PXE1, the second pixel electrode PXE2, the third pixel electrode PXE3, and the first common connection electrode CCU1. The second common connection electrode CCU2may not overlap the first dummy emission layer DEL1, the second dummy emission layer DEL2, and the third dummy emission layer DEL3. In an embodiment, as illustrated inFIGS.9to11, in the first dummy area DMA1as well as the display area DA, shapes of the first to fourth island patterns ISP1to ISP4do not change and lengths of the first to eighth connection patterns CNP1to CNP8change, such that a width of each of the cutout parts CUP1to CUP4may be adjusted. Therefore, the first dummy area DMA1may be effectively stretched and contracted. In such an embodiment, as illustrated inFIGS.9to11, the first power line VSL may overlap the first dummy emission layer DEL1, the second dummy emission layer DEL2, and the third dummy emission layer DEL3and extend in a zigzag or winding form along the second direction DR2, in the first dummy area DMA1.
Since the first power line VSL is not disposed at the edge of the display panel100, a width of the non-display area NDA may be decreased as compared with a case where the first power line VSL is disposed at the edge of the display panel100. In such an embodiment, the second dummy area DMA2is substantially the same as the first dummy area DMA1described above with reference toFIGS.9to11, and thus, any repetitive detailed description of the second dummy area DMA2will be omitted. FIGS.12and13are plan views illustrating the first scan driver according to an embodiment in detail. InFIG.12, island patterns ISP1to ISP4, connection patterns CNP1to CNP8, and cutout parts CUP1to CUP4of the first scan driver SDC1when the display device10is contracted are illustrated. InFIG.13, island patterns ISP1to ISP4, connection patterns CNP1to CNP8, and cutout parts CUP1to CUP4of the first scan driver SDC1when the display device10is stretched are illustrated. Referring toFIGS.12and13, the substrate SUB may include island patterns ISP and connection patterns CNP defined by the cutout parts CUP1to CUP4which are cut out in the first scan driver SDC1. The island patterns ISP and the connection patterns CNP of the first scan driver SDC1may be substantially the same as the island patterns ISP and the connection patterns CNP of the display area DA described above with reference toFIGS.4and5. Therefore, any repetitive detailed description of the island patterns ISP and the connection patterns CNP of the first scan driver SDC1will be omitted. A scan stage circuit unit STC may be disposed in each of the first to fourth island patterns ISP1, ISP2, ISP3, and ISP4in the first scan driver SDC1. The scan stage circuit unit STC may include at least one of a plurality of thin film transistors of each of the plurality of stages STA1, STA2, STA3, STA4, . . . , STAm−1, STAm, and STAm+1 of the scan signal output unit SOU. A first driving voltage line VGHL, a second driving voltage line VGLL, the first clock line CKL1, the second clock line CKL2, and stage connection lines STCL1and STCL2, and the like, may be disposed in each of the first to fourth island patterns ISP1, ISP2, ISP3, and ISP4in the first scan driver SDC1. The first scan control lines GCL1(seeFIG.1) and the second scan control lines GCL2(seeFIG.1) may include the first driving voltage line VGHL, the second driving voltage line VGLL, the first clock line CKL1, and the second clock line CKL2. The first driving voltage line VGHL, the second driving voltage line VGLL, the first clock line CKL1, and the second clock line CKL2may be connected to the scan stage circuit unit STC in each of the first to fourth island patterns ISP1, ISP2, ISP3, and ISP4. The first driving voltage line VGHL, the second driving voltage line VGLL, the first clock line CKL1, and the second clock line CKL2may sequentially pass through the third connection pattern CNP3, the first island pattern ISP1, the fourth connection pattern CNP4, and the third island pattern ISP3. The first driving voltage line VGHL, the second driving voltage line VGLL, the first clock line CKL1, and the second clock line CKL2may sequentially pass through the fifth connection pattern CNP5, the second island pattern ISP2, the sixth connection pattern CNP6, and the fourth island pattern ISP4. That is, the first driving voltage line VGHL, the second driving voltage line VGLL, the first clock line CKL1, and the second clock line CKL2may extend in a zigzag or winding form along the second direction DR2. 
The stage connection lines STCL1and STCL2may connect the scan stage circuit units STC adjacent to each other in the first direction DR1to each other. The stage connection lines STCL1and STCL2may sequentially pass through the first connection pattern CNP1, the first island pattern ISP1, the second connection pattern CNP2, and the second island pattern ISP2. The stage connection lines STCL1and STCL2may sequentially pass through the seventh connection pattern CNP7, the third island pattern ISP3, the eighth connection pattern CNP8, and the fourth island pattern ISP4. That is, the stage connection lines STCL1and STCL2may extend in a zigzag or winding form along the first direction DR1. In addition, at least one of the scan initialization lines GILk/GILk+1, the scan write lines GWLk/GWLk+1, and the emission lines EMLk/EMLk+1 as well as the stage connection lines STCL1and STCL2may sequentially pass through the first connection pattern CNP1, the first island pattern ISP1, the second connection pattern CNP2, and the second island pattern ISP2, and sequentially pass through the seventh connection pattern CNP7, the third island pattern ISP3, the eighth connection pattern CNP8, and the fourth island pattern ISP4. That is, at least one of the scan initialization lines GILk/GILk+1, the scan write lines GWLk/GWLk+1, and the emission lines EMLk/EMLk+1 may extend in a zigzag or winding form along the first direction DR1. In an embodiment, as illustrated inFIGS.12and13, in the first scan driver SDC1as well as the display area DA and the first dummy area DMA1, shapes of the first to fourth island patterns ISP1to ISP4do not change and lengths of the first to eighth connection patterns CNP1to CNP8change, such that a width of each of the cutout parts CUP1to CUP4may be adjusted. Therefore, the first scan driver SDC1may be effectively stretched and contracted. In such an embodiment, the second scan driver SDC2is substantially the same as the first scan driver SDC1described with reference toFIGS.12and13, and thus, any repetitive detailed description of the second scan driver SDC2will be omitted. FIG.14is a cross-sectional view illustrating an embodiment of a display panel taken along line A-A′ ofFIG.6.FIG.15is a cross-sectional view illustrating an embodiment of the display panel taken along line B-B′ ofFIG.11.FIG.16is a cross-sectional view illustrating an embodiment of the display panel taken along line C-C′ ofFIG.12. Referring toFIGS.14to16, the substrate SUB may include or be made of an insulating material such as a polymer resin. In an embodiment, for example, the substrate SUB may include or be made of polyimide. The substrate SUB may be a flexible substrate that may be bent, folded, and rolled. A barrier layer BR may be disposed on the substrate SUB. The barrier layer BR is a film for protecting transistors of a thin film transistor layer TFTL and display emission layers EL1of a light emitting element layer EML from moisture permeating through the substrate SUB, which is vulnerable to moisture permeation. In an embodiment, for example, the barrier layer BR may include an inorganic insulating material such as silicon oxide (SiO2), silicon nitride (SiNx), silicon oxynitride (SiON), aluminum oxide (Al2O3), titanium oxide (TiO2), tantalum oxide (Ta2O5), hafnium oxide (HfO2), or zinc oxide (ZnOx). The zinc oxide (ZnOx) may be zinc oxide (ZnO) and/or zinc peroxide (ZnO2). The barrier layer BR may include or be defined by a plurality of inorganic films.
A first thin film transistor TFT1 and a second thin film transistor TFT2 may be disposed on the barrier layer BR. The first thin film transistor TFT1 may be one of the fourth transistor ST4 and the sixth transistor ST6 illustrated in FIG. 2. The second thin film transistor TFT2 may be one of the plurality of thin film transistors of the scan stage circuit unit STC illustrated in FIG. 13. The first thin film transistor TFT1 may include a first active layer ACT1 and a first gate electrode G1. The second thin film transistor TFT2 may include a second active layer ACT2 and a second gate electrode G2.

The first active layer ACT1 of the first thin film transistor TFT1 and the second active layer ACT2 of the second thin film transistor TFT2 may be disposed on the barrier layer BR. The first active layer ACT1 and the second active layer ACT2 may include polycrystalline silicon, single crystal silicon, low-temperature polycrystalline silicon, amorphous silicon, or an oxide semiconductor.

The first active layer ACT1 may include a first channel region CHA1, a first source region S1, and a first drain region D1. The first channel region CHA1 may be a region overlapping the first gate electrode G1 in the third direction DR3, which is a thickness direction of the substrate SUB. The first source region S1 may be disposed on one side of the first channel region CHA1, and the first drain region D1 may be disposed on another side (or an opposing side) of the first channel region CHA1. The first source region S1 and the first drain region D1 may be regions that do not overlap the first gate electrode G1 in the third direction DR3. The first source region S1 and the first drain region D1 may be regions having conductivity by doping a silicon semiconductor or an oxide semiconductor with ions or impurities.

The second active layer ACT2 may include a second channel region CHA2, a second source region S2, and a second drain region D2. The second channel region CHA2 may be a region overlapping the second gate electrode G2 in the third direction DR3, which is the thickness direction of the substrate SUB. The second source region S2 may be disposed on one side of the second channel region CHA2, and the second drain region D2 may be disposed on another side (or an opposing side) of the second channel region CHA2. The second source region S2 and the second drain region D2 may be regions that do not overlap the second gate electrode G2 in the third direction DR3. The second source region S2 and the second drain region D2 may be regions having conductivity by doping a silicon semiconductor or an oxide semiconductor with ions or impurities.

A gate insulating layer 130 may be disposed on the first active layer ACT1 of the first thin film transistor TFT1 and the second active layer ACT2 of the second thin film transistor TFT2. The gate insulating layer 130 may include an inorganic insulating material such as silicon oxide (SiO2), silicon nitride (SiNx), silicon oxynitride (SiON), aluminum oxide (Al2O3), titanium oxide (TiO2), tantalum oxide (Ta2O5), hafnium oxide (HfO2), or zinc oxide (ZnOx).

The first gate electrode G1 of the first thin film transistor TFT1, the second gate electrode G2 of the second thin film transistor TFT2, and a first capacitor electrode CAE1 may be disposed on the gate insulating layer 130. The first gate electrode G1 may overlap the first active layer ACT1 in the third direction DR3. The second gate electrode G2 may overlap the second active layer ACT2 in the third direction DR3.
It has been illustrated in FIG. 14 that the first gate electrode G1 and the first capacitor electrode CAE1 are disposed to be spaced apart from each other, but the first gate electrode G1 and the first capacitor electrode CAE1 may be connected to each other. Each of the first gate electrode G1, the second gate electrode G2, and the first capacitor electrode CAE1 may be formed as or defined by a single layer or multiple layers, each layer including or made of at least one selected from molybdenum (Mo), aluminum (Al), chromium (Cr), gold (Au), titanium (Ti), nickel (Ni), neodymium (Nd), and copper (Cu), or alloys thereof.

A first interlayer insulating layer 141 of an interlayer insulating layer 140 may be disposed on the first gate electrode G1 of the first thin film transistor TFT1, the second gate electrode G2 of the second thin film transistor TFT2, and the first capacitor electrode CAE1. The first interlayer insulating layer 141 may include an inorganic insulating material such as silicon oxide (SiO2), silicon nitride (SiNx), silicon oxynitride (SiON), aluminum oxide (Al2O3), titanium oxide (TiO2), tantalum oxide (Ta2O5), hafnium oxide (HfO2), or zinc oxide (ZnOx). The first interlayer insulating layer 141 may include or be defined by a plurality of inorganic films.

A second capacitor electrode CAE2 may be disposed on the first interlayer insulating layer 141. The second capacitor electrode CAE2 may overlap the first capacitor electrode CAE1 in the third direction DR3. In an embodiment where the first capacitor electrode CAE1 is connected to the first gate electrode G1, the second capacitor electrode CAE2 may overlap the first gate electrode G1 in the third direction DR3. Since the first interlayer insulating layer 141 has a predetermined dielectric constant, a capacitor may be formed by the first capacitor electrode CAE1, the second capacitor electrode CAE2, and the first interlayer insulating layer 141 disposed between the first capacitor electrode CAE1 and the second capacitor electrode CAE2. The second capacitor electrode CAE2 may be formed as or defined by a single layer or multiple layers, each layer including or made of at least one selected from molybdenum (Mo), aluminum (Al), chromium (Cr), gold (Au), titanium (Ti), nickel (Ni), neodymium (Nd), and copper (Cu), or alloys thereof.

A second interlayer insulating layer 142 of the interlayer insulating layer 140 may be disposed on the second capacitor electrode CAE2. The second interlayer insulating layer 142 may include an inorganic insulating material such as silicon oxide (SiO2), silicon nitride (SiNx), silicon oxynitride (SiON), aluminum oxide (Al2O3), titanium oxide (TiO2), tantalum oxide (Ta2O5), hafnium oxide (HfO2), or zinc oxide (ZnOx). The second interlayer insulating layer 142 may include or be defined by a plurality of inorganic films.

A first anode connection electrode ANDE1, the data lines RDLj, BDLj, and GDLj, the first power line VSL, the first power connection line VSCL, the first driving voltage line VGHL, the second driving voltage line VGLL, the first clock line CKL1, and the second clock line CKL2 may be disposed on the second interlayer insulating layer 142. The first anode connection electrode ANDE1 may be connected to the first drain region D1 of the first thin film transistor TFT1 through a first connection contact hole ANCT1 defined through the gate insulating layer 130, the first interlayer insulating layer 141, and the second interlayer insulating layer 142.
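As an aside, the storage capacitor described above (the first capacitor electrode CAE1, the second capacitor electrode CAE2, and the first interlayer insulating layer 141 between them) behaves as a parallel-plate capacitor, C = ε0·εr·A/d. A minimal Python sketch of that textbook relation follows; the dielectric constant, overlap area, and thickness below are assumed example values, not figures from the patent:

# Parallel-plate estimate for the CAE1/CAE2 storage capacitor.
# All numeric values are illustrative assumptions.
EPS0 = 8.854e-12  # vacuum permittivity, F/m

def plate_capacitance(eps_r: float, area_m2: float, thickness_m: float) -> float:
    # C = eps0 * eps_r * A / d for dielectric constant eps_r, overlap area A,
    # and dielectric thickness d.
    return EPS0 * eps_r * area_m2 / thickness_m

# Assumed example: SiNx-like dielectric (eps_r ~ 7), 10 um x 10 um overlap,
# 300 nm thick first interlayer insulating layer 141.
c = plate_capacitance(7.0, 10e-6 * 10e-6, 300e-9)
print(f"approximate storage capacitance: {c * 1e15:.1f} fF")  # ~20.7 fF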
The first anode connection electrode ANDE1, the data lines RDLj, BDLj, and GDLj, the first power line VSL, the first driving voltage line VGHL, the second driving voltage line VGLL, the first clock line CKL1, and the second clock line CKL2 may be formed as or defined by a single layer or multiple layers, each layer including or made of at least one selected from molybdenum (Mo), aluminum (Al), chromium (Cr), gold (Au), titanium (Ti), nickel (Ni), neodymium (Nd), and copper (Cu), or alloys thereof.

A first planarization layer 160 for planarizing a step due to the first thin film transistor TFT1 and the second thin film transistor TFT2 may be disposed on the first anode connection electrode ANDE1, the data lines RDLj, BDLj, and GDLj, the first power line VSL, the first power connection line VSCL, the first driving voltage line VGHL, the second driving voltage line VGLL, the first clock line CKL1, and the second clock line CKL2. The first planarization layer 160 may be formed as or defined by an organic film including or made of an acryl resin, an epoxy resin, a phenolic resin, a polyamide resin, a polyimide resin, or the like.

First light emitting elements LEL1, the first common connection electrode CCU1, the second common connection electrode CCU2, and a bank 190 may be disposed on the first planarization layer 160. The first light emitting element LEL1 includes the first pixel electrode PXE1, the first display emission layer EL1, and the island common electrode ICE.

The first pixel electrode PXE1, the first common connection electrode CCU1, and the second common connection electrode CCU2 may be disposed on the first planarization layer 160. The first pixel electrode PXE1 may be connected to the first anode connection electrode ANDE1 through a first pixel contact hole PCT1 defined through the first planarization layer 160. The first common connection electrode CCU1 may be connected to the first power connection line VSCL through the second common contact hole CCNT2 defined through the first planarization layer 160. The second common connection electrode CCU2 may be connected to the first power line VSL through the fourth common contact hole CCNT4 defined through the first planarization layer 160.

Each of the first pixel electrode PXE1, the first common connection electrode CCU1, and the second common connection electrode CCU2 may include or be formed of a metal material having high reflectivity, such as a stacked structure (Ti/Al/Ti) of aluminum and titanium, a stacked structure (ITO/Al/ITO) of aluminum and indium tin oxide ("ITO"), an APC alloy, or a stacked structure (ITO/APC/ITO) of an APC alloy and ITO. The APC alloy is an alloy of silver (Ag), palladium (Pd), and copper (Cu).

The bank 190 may be formed to partition the first pixel electrodes PXE1 on the first planarization layer 160, to define the first light emitting unit ELU1, the second light emitting unit ELU2, and the third light emitting unit ELU3. The bank 190 may be disposed to cover an edge of each of the pixel electrodes PXE1. The bank 190 may be formed as or defined by an organic film including or made of an acryl resin, an epoxy resin, a phenolic resin, a polyamide resin, a polyimide resin, or the like.
The first light emitting unit ELU1 refers to an area in which the first pixel electrode PXE1, the first display emission layer EL1, and the island common electrode ICE are sequentially stacked one on another, and holes from the first pixel electrode PXE1 and electrons from the island common electrode ICE are recombined with each other in the first display emission layer EL1 to emit light. The first display emission layer EL1 may be disposed on the first pixel electrode PXE1. The display emission layer EL1 may include an organic material to emit light of a predetermined color. In an embodiment, for example, the display emission layer EL1 includes a hole transporting layer, an organic material layer, and an electron transporting layer.

The second light emitting unit ELU2 may be an area in which the second pixel electrode, the second display emission layer, and the island common electrode are sequentially stacked one on another, and the third light emitting unit ELU3 may be an area in which the third pixel electrode, the third display emission layer, and the island common electrode are sequentially stacked one on another. The second light emitting unit ELU2 and the third light emitting unit ELU3 may be formed to be substantially the same as the first light emitting unit ELU1.

The second dummy emission layer DEL2 is not covered by the bank 190, and may be disposed on the exposed surface of the first planarization layer 160. The first dummy emission layer DEL1 and the third dummy emission layer DEL3 are also not covered by the bank 190, and may be disposed on the exposed surface of the first planarization layer 160. The first dummy emission layer DEL1 may include substantially the same material as the first display emission layer EL1 of the first light emitting unit ELU1. The second dummy emission layer DEL2 may include substantially the same material as the second display emission layer of the second light emitting unit ELU2. The third dummy emission layer DEL3 may include substantially the same material as the third display emission layer of the third light emitting unit ELU3.

The island common electrode ICE may be disposed on the first display emission layer EL1, the second display emission layer, and the third display emission layer. The island common electrode ICE may be disposed to cover the first display emission layer EL1, the second display emission layer, and the third display emission layer. The island common electrode ICE may be a common layer commonly disposed on the first display emission layer EL1, the second display emission layer, and the third display emission layer. A capping layer may be disposed on the island common electrode ICE. The island common electrode ICE may be connected to the first common connection electrode CCU1 through the first common contact hole CCNT1 defined through the bank 190. Accordingly, the first power voltage may be applied to the island common electrode ICE.

The dummy common electrode DCE may be disposed on the first dummy emission layer DEL1, the second dummy emission layer DEL2, and the third dummy emission layer DEL3. The dummy common electrode DCE may be disposed to cover the first dummy emission layer DEL1, the second dummy emission layer DEL2, and the third dummy emission layer DEL3. The dummy common electrode DCE may be a common layer commonly disposed on the first dummy emission layer DEL1, the second dummy emission layer DEL2, and the third dummy emission layer DEL3. A capping layer may be disposed on the dummy common electrode DCE.
The dummy common electrode DCE may be connected to the second common connection electrode CCU2 through the third common contact hole CCNT3 defined through the bank 190. Accordingly, the first power voltage may be applied to the dummy common electrode DCE.

The island common electrode ICE and the dummy common electrode DCE may include or be formed of a transparent conductive oxide ("TCO") material such as ITO or indium zinc oxide ("IZO") capable of transmitting light therethrough, or a semi-transmissive conductive material such as magnesium (Mg), silver (Ag), or an alloy of magnesium (Mg) and silver (Ag). In an embodiment where the island common electrode ICE includes or is formed of the semi-transmissive conductive material, emission efficiency may be increased by a micro cavity.

An encapsulation layer TFEL may be disposed on the island common electrode ICE and the dummy common electrode DCE. The encapsulation layer TFEL includes at least one inorganic film to prevent oxygen or moisture from permeating into the light emitting element layer EML. In addition, the encapsulation layer TFEL includes at least one organic film to protect the light emitting element layer EML from foreign materials such as dust. In an embodiment, for example, the encapsulation layer TFEL includes a first encapsulation inorganic layer TFE1, an encapsulation organic film layer TFE2, and a second encapsulation inorganic layer TFE3. The first encapsulation inorganic layer TFE1 may be disposed on the island common electrode ICE and the dummy common electrode DCE, the encapsulation organic film layer TFE2 may be disposed on the first encapsulation inorganic layer TFE1, and the second encapsulation inorganic layer TFE3 may be disposed on the encapsulation organic film layer TFE2. The first encapsulation inorganic layer TFE1 and the second encapsulation inorganic layer TFE3 may include an inorganic insulating material such as silicon oxide (SiO2), silicon nitride (SiNx), silicon oxynitride (SiON), aluminum oxide (Al2O3), titanium oxide (TiO2), tantalum oxide (Ta2O5), hafnium oxide (HfO2), or zinc oxide (ZnOx). The encapsulation organic film layer TFE2 may be an organic film including or made of an acryl resin, an epoxy resin, a phenolic resin, a polyamide resin, a polyimide resin, or the like.

In an embodiment, although not illustrated in FIGS. 14 to 16, each of the first to eighth connection patterns CNP1 to CNP8 may be bent in the third direction DR3 as illustrated in FIG. 7 when the display device 10 is contracted. Therefore, the scan initialization lines GILk/GILk+1, the scan write lines GWLk/GWLk+1, the emission lines EMLk/EMLk+1, the first power connection line VSCL, the red data lines RDLj/RDLj+1/RDLj+2/RDLj+3, the blue data lines BDLj/BDLj+1/BDLj+2/BDLj+3, the green data lines GDLj/GDLj+1/GDLj+2/GDLj+3, and the second power line VDL disposed in each of the first to eighth connection patterns CNP1 to CNP8 may be disposed in or directly on a same layer, for example, on the second interlayer insulating layer 142, so as not to be damaged when the first to eighth connection patterns CNP1 to CNP8 are bent.
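The benefit of keeping these lines on a single layer can be quantified with the standard bending-beam model: when a connection pattern is bent to radius R, the in-plane strain of a conductor at distance y from the neutral plane is approximately ε = y/R, so a conductor at the neutral plane sees almost no strain. A minimal Python sketch under that textbook model (the bend radius and offsets are assumed values, not from the patent):

# Bending strain of a conductor at offset y from the neutral plane of a bent
# connection pattern: strain ~ y / R (textbook thin-film bending model).
# The numbers are illustrative assumptions.
def bending_strain(offset_from_neutral_m: float, bend_radius_m: float) -> float:
    return offset_from_neutral_m / bend_radius_m

bend_radius = 0.5e-3  # assumed 0.5 mm bend radius of a bent connection pattern
for offset_um in (0.0, 1.0, 3.0):  # assumed distances from the neutral plane
    strain = bending_strain(offset_um * 1e-6, bend_radius)
    print(f"offset {offset_um} um -> strain {strain:.2%}")

On this model, a line sitting exactly at the neutral plane (offset 0) is nominally strain-free, which is consistent with the neutral-plane design stated next.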
In such an embodiment, an upper surface of the second interlayer insulating layer 142 on which the scan initialization lines GILk/GILk+1, the scan write lines GWLk/GWLk+1, the emission lines EMLk/EMLk+1, the first power connection line VSCL, the red data lines RDLj/RDLj+1/RDLj+2/RDLj+3, the blue data lines BDLj/BDLj+1/BDLj+2/BDLj+3, the green data lines GDLj/GDLj+1/GDLj+2/GDLj+3, and the second power line VDL/VDL+1/VDL+2/VDL+3 are disposed may be designed to be a neutral plane.

FIG. 17 is a plan view illustrating an embodiment of area A of FIG. 11 in detail. FIG. 18 is a cross-sectional view illustrating an embodiment of the display panel taken along line D-D′ of FIG. 17.

Referring to FIGS. 17 and 18, the first power connection line VSCL may include a first sub-power connection line VSCL1 and a second sub-power connection line VSCL2. The first sub-power connection line VSCL1 may be disposed in the first island pattern ISP1 and the first connection pattern CNP1, and the second sub-power connection line VSCL2 may be disposed in the first island pattern ISP1.

The first sub-power connection line VSCL1 may be disposed on the second interlayer insulating layer 142. In such an embodiment, the first sub-power connection line VSCL1 may be disposed in or directly on a same layer as the first anode connection electrode ANDE1, the data lines RDLj, BDLj, and GDLj, the first power line VSL, the first power connection line VSCL, the first driving voltage line VGHL, the second driving voltage line VGLL, the first clock line CKL1, and the second clock line CKL2. In addition, the first sub-power connection line VSCL1 may include the same material as the first anode connection electrode ANDE1, the data lines RDLj, BDLj, and GDLj, the first power line VSL, the first power connection line VSCL, the first driving voltage line VGHL, the second driving voltage line VGLL, the first clock line CKL1, and the second clock line CKL2.

The second sub-power connection line VSCL2 may be disposed on the gate insulating layer 130. In such an embodiment, the second sub-power connection line VSCL2 may be disposed in or directly on a same layer as the first gate electrode G1 of the first thin film transistor TFT1, the second gate electrode G2 of the second thin film transistor TFT2, and the first capacitor electrode CAE1. In addition, the second sub-power connection line VSCL2 may include the same material as the first gate electrode G1 of the first thin film transistor TFT1, the second gate electrode G2 of the second thin film transistor TFT2, and the first capacitor electrode CAE1.

The first sub-power connection line VSCL1 may be connected to the second sub-power connection line VSCL2 through a first contact hole CNT1 defined through the first interlayer insulating layer 141 and the second interlayer insulating layer 142 in the first island pattern ISP1. Since the second sub-power connection line VSCL2 is disposed on the gate insulating layer 130 and the first power line VSL is disposed on the second interlayer insulating layer 142, the second sub-power connection line VSCL2 and the first power line VSL may cross each other. Since a space of the first island pattern ISP1 is wider than a space of the first connection pattern CNP1, a width of the second sub-power connection line VSCL2 may be greater than a width of the first sub-power connection line VSCL1.

The k-th emission line EMLk may include a (k−1)-th emission line EMLk_1 and a (k−2)-th emission line EMLk_2.
The (k−1)-th emission line EMLk_1 may be disposed in the first island pattern ISP1 and the first connection pattern CNP1, and the (k−2)-th emission line EMLk_2 may be disposed in the first island pattern ISP1.

The (k−1)-th emission line EMLk_1 may be disposed on the second interlayer insulating layer 142. In such an embodiment, the (k−1)-th emission line EMLk_1 may be disposed in or directly on a same layer as the first anode connection electrode ANDE1, the data lines RDLj, BDLj, and GDLj, the first power line VSL, the first power connection line VSCL, the first driving voltage line VGHL, the second driving voltage line VGLL, the first clock line CKL1, and the second clock line CKL2. In addition, the (k−1)-th emission line EMLk_1 may include a same material as the first anode connection electrode ANDE1, the data lines RDLj, BDLj, and GDLj, the first power line VSL, the first power connection line VSCL, the first driving voltage line VGHL, the second driving voltage line VGLL, the first clock line CKL1, and the second clock line CKL2.

The (k−2)-th emission line EMLk_2 may be disposed on the gate insulating layer 130. In such an embodiment, the (k−2)-th emission line EMLk_2 may be disposed in or directly on a same layer as the first gate electrode G1 of the first thin film transistor TFT1, the second gate electrode G2 of the second thin film transistor TFT2, and the first capacitor electrode CAE1. In addition, the (k−2)-th emission line EMLk_2 may include a same material as the first gate electrode G1 of the first thin film transistor TFT1, the second gate electrode G2 of the second thin film transistor TFT2, and the first capacitor electrode CAE1.

The (k−1)-th emission line EMLk_1 may be connected to the (k−2)-th emission line EMLk_2 through a second contact hole CNT2 defined through the first interlayer insulating layer 141 and the second interlayer insulating layer 142 in the first island pattern ISP1. Since the (k−2)-th emission line EMLk_2 is disposed on the gate insulating layer 130 and the first power line VSL is disposed on the second interlayer insulating layer 142, the (k−2)-th emission line EMLk_2 and the first power line VSL may cross each other. Since the space of the first island pattern ISP1 is wider than the space of the first connection pattern CNP1, a width of the (k−2)-th emission line EMLk_2 may be greater than a width of the (k−1)-th emission line EMLk_1.

The k-th scan write line GWLk may include a (k−1)-th scan write line GWLk_1 and a (k−2)-th scan write line GWLk_2. The (k−1)-th scan write line GWLk_1 may be disposed in the first island pattern ISP1 and the first connection pattern CNP1, and the (k−2)-th scan write line GWLk_2 may be disposed in the first island pattern ISP1.

The (k−1)-th scan write line GWLk_1 may be disposed on the second interlayer insulating layer 142. In such an embodiment, the (k−1)-th scan write line GWLk_1 may be disposed in or directly on a same layer as the first anode connection electrode ANDE1, the data lines RDLj, BDLj, and GDLj, the first power line VSL, the first power connection line VSCL, the first driving voltage line VGHL, the second driving voltage line VGLL, the first clock line CKL1, and the second clock line CKL2. In addition, the (k−1)-th scan write line GWLk_1 may include a same material as the first anode connection electrode ANDE1, the data lines RDLj, BDLj, and GDLj, the first power line VSL, the first power connection line VSCL, the first driving voltage line VGHL, the second driving voltage line VGLL, the first clock line CKL1, and the second clock line CKL2.
The (k−2)-th scan write line GWLk_2 may be disposed on the gate insulating layer 130. In such an embodiment, the (k−2)-th scan write line GWLk_2 may be disposed in or directly on a same layer as the first gate electrode G1 of the first thin film transistor TFT1, the second gate electrode G2 of the second thin film transistor TFT2, and the first capacitor electrode CAE1. In addition, the (k−2)-th scan write line GWLk_2 may include a same material as the first gate electrode G1 of the first thin film transistor TFT1, the second gate electrode G2 of the second thin film transistor TFT2, and the first capacitor electrode CAE1.

The (k−1)-th scan write line GWLk_1 may be connected to the (k−2)-th scan write line GWLk_2 through a third contact hole CNT3 defined through the first interlayer insulating layer 141 and the second interlayer insulating layer 142 in the first island pattern ISP1. Since the (k−2)-th scan write line GWLk_2 is disposed on the gate insulating layer 130 and the first power line VSL is disposed on the second interlayer insulating layer 142, the (k−2)-th scan write line GWLk_2 and the first power line VSL may cross each other. Since the space of the first island pattern ISP1 is wider than the space of the first connection pattern CNP1, a width of the (k−2)-th scan write line GWLk_2 may be greater than a width of the (k−1)-th scan write line GWLk_1.

The k-th scan initialization line GILk may include a (k−1)-th scan initialization line GILk_1 and a (k−2)-th scan initialization line GILk_2. The (k−1)-th scan initialization line GILk_1 may be disposed in the first island pattern ISP1 and the first connection pattern CNP1, and the (k−2)-th scan initialization line GILk_2 may be disposed in the first island pattern ISP1.

The (k−1)-th scan initialization line GILk_1 may be disposed on the second interlayer insulating layer 142. In such an embodiment, the (k−1)-th scan initialization line GILk_1 may be disposed in or directly on a same layer as the first anode connection electrode ANDE1, the data lines RDLj, BDLj, and GDLj, the first power line VSL, the first power connection line VSCL, the first driving voltage line VGHL, the second driving voltage line VGLL, the first clock line CKL1, and the second clock line CKL2. In addition, the (k−1)-th scan initialization line GILk_1 may include a same material as the first anode connection electrode ANDE1, the data lines RDLj, BDLj, and GDLj, the first power line VSL, the first power connection line VSCL, the first driving voltage line VGHL, the second driving voltage line VGLL, the first clock line CKL1, and the second clock line CKL2.

The (k−2)-th scan initialization line GILk_2 may be disposed on the gate insulating layer 130. In such an embodiment, the (k−2)-th scan initialization line GILk_2 may be disposed in or directly on a same layer as the first gate electrode G1 of the first thin film transistor TFT1, the second gate electrode G2 of the second thin film transistor TFT2, and the first capacitor electrode CAE1. In addition, the (k−2)-th scan initialization line GILk_2 may include a same material as the first gate electrode G1 of the first thin film transistor TFT1, the second gate electrode G2 of the second thin film transistor TFT2, and the first capacitor electrode CAE1.

The (k−1)-th scan initialization line GILk_1 may be connected to the (k−2)-th scan initialization line GILk_2 through a fourth contact hole CNT4 defined through the first interlayer insulating layer 141 and the second interlayer insulating layer 142 in the first island pattern ISP1.
Since the (k−2)-th scan initialization line GILk_2 is disposed on the gate insulating layer 130 and the first power line VSL is disposed on the second interlayer insulating layer 142, the (k−2)-th scan initialization line GILk_2 and the first power line VSL may cross each other. Since the space of the first island pattern ISP1 is wider than the space of the first connection pattern CNP1, a width of the (k−2)-th scan initialization line GILk_2 may be greater than a width of the (k−1)-th scan initialization line GILk_1.

In such an embodiment, each of the stage connection lines STCL1 and STCL2 may be formed to be substantially the same as the first power connection line VSCL, the k-th emission line EMLk, the k-th scan write line GWLk, and the k-th scan initialization line GILk described in connection with FIGS. 17 and 18. Therefore, any repetitive detailed description of the stage connection lines STCL1 and STCL2 will be omitted.

FIG. 19 is a plan view illustrating an alternative embodiment of area A of FIG. 11 in detail. FIG. 20 is a cross-sectional view illustrating an embodiment of the display panel taken along line E-E′ of FIG. 19.

An embodiment of FIGS. 19 and 20 is substantially the same as the embodiment of FIGS. 17 and 18 except that the first power connection line VSCL further includes a third sub-power connection line VSCL3, the k-th emission line EMLk further includes a (k−3)-th emission line EMLk_3, the k-th scan write line GWLk further includes a (k−3)-th scan write line GWLk_3, and the k-th scan initialization line GILk further includes a (k−3)-th scan initialization line GILk_3.

Referring to FIGS. 19 and 20, the third sub-power connection line VSCL3, the (k−3)-th emission line EMLk_3, the (k−3)-th scan write line GWLk_3, and the (k−3)-th scan initialization line GILk_3 may be disposed in the first island pattern ISP1. The third sub-power connection line VSCL3, the (k−3)-th emission line EMLk_3, the (k−3)-th scan write line GWLk_3, and the (k−3)-th scan initialization line GILk_3 may be disposed on the first interlayer insulating layer 141. In such an embodiment, the third sub-power connection line VSCL3, the (k−3)-th emission line EMLk_3, the (k−3)-th scan write line GWLk_3, and the (k−3)-th scan initialization line GILk_3 may be disposed in or directly on a same layer as the second capacitor electrode CAE2. In addition, the third sub-power connection line VSCL3, the (k−3)-th emission line EMLk_3, the (k−3)-th scan write line GWLk_3, and the (k−3)-th scan initialization line GILk_3 may include a same material as the second capacitor electrode CAE2.

The first sub-power connection line VSCL1 may be connected to the third sub-power connection line VSCL3 through a fifth contact hole CNT5 defined through the second interlayer insulating layer 142 in the first island pattern ISP1. Since the third sub-power connection line VSCL3 is disposed on the first interlayer insulating layer 141 and the first power line VSL is disposed on the second interlayer insulating layer 142, the third sub-power connection line VSCL3 and the first power line VSL may cross each other. Since the space of the first island pattern ISP1 is wider than the space of the first connection pattern CNP1, a width of the third sub-power connection line VSCL3 may be greater than a width of the first sub-power connection line VSCL1.
In addition, since the space of the first island pattern ISP1 is wider than the space of the first connection pattern CNP1, the first sub-power connection line VSCL1 may be divided into two lines, that is, the second sub-power connection line VSCL2 and the third sub-power connection line VSCL3, and such two lines may cross the first power line VSL in the first island pattern ISP1. Accordingly, resistance of the first power connection line VSCL may be decreased.

The (k−1)-th emission line EMLk_1 may be connected to the (k−3)-th emission line EMLk_3 through a sixth contact hole CNT6 defined through the second interlayer insulating layer 142 in the first island pattern ISP1. Since the (k−3)-th emission line EMLk_3 is disposed on the first interlayer insulating layer 141 and the first power line VSL is disposed on the second interlayer insulating layer 142, the (k−3)-th emission line EMLk_3 and the first power line VSL may cross each other. Since the space of the first island pattern ISP1 is wider than the space of the first connection pattern CNP1, a width of the (k−3)-th emission line EMLk_3 may be greater than a width of the (k−1)-th emission line EMLk_1. In addition, since the space of the first island pattern ISP1 is wider than the space of the first connection pattern CNP1, the (k−1)-th emission line EMLk_1 may be divided into two lines, that is, the (k−2)-th emission line EMLk_2 and the (k−3)-th emission line EMLk_3, and such two lines may cross the first power line VSL in the first island pattern ISP1. Accordingly, resistance of the k-th emission line EMLk may be decreased.

The (k−1)-th scan write line GWLk_1 may be connected to the (k−3)-th scan write line GWLk_3 through a seventh contact hole CNT7 defined through the second interlayer insulating layer 142 in the first island pattern ISP1. Since the (k−3)-th scan write line GWLk_3 is disposed on the first interlayer insulating layer 141 and the first power line VSL is disposed on the second interlayer insulating layer 142, the (k−3)-th scan write line GWLk_3 and the first power line VSL may cross each other. Since the space of the first island pattern ISP1 is wider than the space of the first connection pattern CNP1, a width of the (k−3)-th scan write line GWLk_3 may be greater than a width of the (k−1)-th scan write line GWLk_1. In addition, since the space of the first island pattern ISP1 is wider than the space of the first connection pattern CNP1, the (k−1)-th scan write line GWLk_1 may be divided into two lines, that is, the (k−2)-th scan write line GWLk_2 and the (k−3)-th scan write line GWLk_3, and such two lines may cross the first power line VSL in the first island pattern ISP1. Accordingly, resistance of the k-th scan write line GWLk may be decreased.

The (k−1)-th scan initialization line GILk_1 may be connected to the (k−3)-th scan initialization line GILk_3 through an eighth contact hole CNT8 defined through the second interlayer insulating layer 142 in the first island pattern ISP1. Since the (k−3)-th scan initialization line GILk_3 is disposed on the first interlayer insulating layer 141 and the first power line VSL is disposed on the second interlayer insulating layer 142, the (k−3)-th scan initialization line GILk_3 and the first power line VSL may cross each other. Since the space of the first island pattern ISP1 is wider than the space of the first connection pattern CNP1, a width of the (k−3)-th scan initialization line GILk_3 may be greater than a width of the (k−1)-th scan initialization line GILk_1.
In addition, since the space of the first island pattern ISP1 is wider than the space of the first connection pattern CNP1, the (k−1)-th scan initialization line GILk_1 may be divided into two lines, that is, the (k−2)-th scan initialization line GILk_2 and the (k−3)-th scan initialization line GILk_3, and such two lines may cross the first power line VSL in the first island pattern ISP1. Accordingly, resistance of the k-th scan initialization line GILk may be decreased.

In such an embodiment, each of the stage connection lines STCL1 and STCL2 may be formed to be substantially the same as the first power connection line VSCL, the k-th emission line EMLk, the k-th scan write line GWLk, and the k-th scan initialization line GILk described in connection with FIGS. 19 and 20. Therefore, any repetitive detailed description of the stage connection lines STCL1 and STCL2 will be omitted.

FIG. 21 is a plan view illustrating a first island pattern of a first dummy area according to an alternative embodiment in detail. FIG. 22 is a cross-sectional view illustrating an embodiment of a display panel taken along line F-F′ of FIG. 21.

An embodiment of FIGS. 21 and 22 is substantially the same as the embodiment of FIGS. 11 and 15 except that in the first dummy area DMA1, the first power line VSL includes a first sub-power line VSL1 and a second sub-power line VSL2.

Referring to FIGS. 21 and 22, the first sub-power line VSL1 may be disposed on the second interlayer insulating layer 142. In such an embodiment, the first sub-power line VSL1 may be disposed in or directly on a same layer as the first anode connection electrode ANDE1, the data lines RDLj, BDLj, and GDLj, the first power line VSL, and the first power connection line VSCL. In addition, the first sub-power line VSL1 may include a same material as the first anode connection electrode ANDE1, the data lines RDLj, BDLj, and GDLj, the first power line VSL, and the first power connection line VSCL.

The second sub-power line VSL2 may be disposed on the first planarization layer 160. The second sub-power line VSL2 may be formed as or defined by a single layer or multiple layers, each layer including or made of at least one selected from molybdenum (Mo), aluminum (Al), chromium (Cr), gold (Au), titanium (Ti), nickel (Ni), neodymium (Nd), and copper (Cu), or alloys thereof. A second planarization layer 180 may be disposed on the second sub-power line VSL2. The second sub-power line VSL2 may be connected to the first sub-power line VSL1 through a first power contact hole VCNT1 defined through the first planarization layer 160 in the first island pattern ISP1. The second planarization layer 180 may be formed as or defined by an organic film including or made of an acryl resin, an epoxy resin, a phenolic resin, a polyamide resin, a polyimide resin, or the like.

In such an embodiment, the first light emitting element LEL1, the first common connection electrode CCU1, the second common connection electrode CCU2, and the bank 190 illustrated in FIG. 14 may be disposed on the second planarization layer 180. In addition, a second anode connection electrode may be disposed on the first planarization layer 160, and may be connected to the first anode connection electrode ANDE1 through a second connection contact hole defined through the first planarization layer 160. In addition, the first pixel electrode PXE1 of the first light emitting element LEL1 may be connected to the second anode connection electrode through a third connection contact hole defined through the second planarization layer 180.
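The resistance reductions described in this passage, namely dividing a line into two parallel sub-lines within the island pattern, and, in the stacked power-line variants below, overlapping sub-power lines in the thickness direction, both follow from the textbook relation R = ρL/A: each measure increases the effective conductor cross-section A. A minimal Python sketch under assumed dimensions (none of the numbers come from the patent):

# R = rho * L / A for a conductor of resistivity rho, length L, width w, and
# thickness t. All values below are illustrative assumptions.
def line_resistance(rho_ohm_m: float, length_m: float, width_m: float, thickness_m: float) -> float:
    return rho_ohm_m * length_m / (width_m * thickness_m)

RHO_AL = 2.7e-8  # aluminum resistivity, ohm*m (textbook value)
single = line_resistance(RHO_AL, 100e-6, 3e-6, 0.3e-6)

# Two identical sub-lines in parallel (side by side or stacked) halve the
# resistance: 1/R_total = 1/R1 + 1/R2.
parallel = 1.0 / (1.0 / single + 1.0 / single)
print(f"single line: {single:.2f} ohm, two parallel sub-lines: {parallel:.2f} ohm")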
In an embodiment, as illustrated in FIGS. 21 and 22, the second sub-power line VSL2 is disposed in the first island pattern ISP1, the first connection pattern CNP1, and the second connection pattern CNP2, but an embodiment of the disclosure is not limited thereto. In the first connection pattern CNP1 and the second connection pattern CNP2, for example, the second sub-power line VSL2 is disposed on a surface other than a neutral plane, and thus, a crack may occur in the second sub-power line VSL2 when the first connection pattern CNP1 and the second connection pattern CNP2 are bent. Therefore, in an alternative embodiment, the second sub-power line VSL2 may be disposed on the first island pattern ISP1, of which a change in shape does not exist or is minimized even though the display device 10 is contracted and stretched, and may not be disposed on the first connection pattern CNP1 and the second connection pattern CNP2.

In an embodiment, as illustrated in FIGS. 21 and 22, the first power line VSL includes the first sub-power line VSL1 and the second sub-power line VSL2 overlapping each other in the third direction DR3, such that a cross-sectional area of the first power line VSL may be increased for a given length of the first power line VSL, and thus, resistance of the first power line VSL may be decreased. In such an embodiment, a first power line VSL of the second dummy area DMA2 may be substantially the same as the first power line VSL illustrated in FIGS. 21 and 22, and thus, any repetitive detailed description of the first power line VSL of the second dummy area DMA2 will be omitted.

FIG. 23 is a plan view illustrating a first island pattern of a first dummy area according to an alternative embodiment in detail. FIG. 24 is a cross-sectional view illustrating an embodiment of a display panel taken along line G-G′ of FIG. 23.

The embodiment of FIGS. 23 and 24 is different from the embodiment of FIGS. 21 and 22 in that in the first dummy area DMA1, the first power line VSL further includes a third sub-power line VSL3.

Referring to FIGS. 23 and 24, the third sub-power line VSL3 may be disposed on the second planarization layer 180. The third sub-power line VSL3 may be formed as or defined by a single layer or multiple layers, each layer including or made of at least one selected from molybdenum (Mo), aluminum (Al), chromium (Cr), gold (Au), titanium (Ti), nickel (Ni), neodymium (Nd), and copper (Cu), or alloys thereof. A third planarization layer 181 may be disposed on the third sub-power line VSL3. The third sub-power line VSL3 may be connected to the second sub-power line VSL2 through a second power contact hole VCNT2 defined through the second planarization layer 180 in the first island pattern ISP1. The third planarization layer 181 may be formed as or defined by an organic film including or made of an acryl resin, an epoxy resin, a phenolic resin, a polyamide resin, a polyimide resin, or the like.

In such an embodiment, the first light emitting element LEL1, the first common connection electrode CCU1, the second common connection electrode CCU2, and the bank 190 illustrated in FIG. 14 may be disposed on the third planarization layer 181. In addition, a third anode connection electrode may be disposed on the second planarization layer 180, and may be connected to the second anode connection electrode through a third connection contact hole defined through the second planarization layer 180.
In addition, the first pixel electrode PXE1 of the first light emitting element LEL1 may be connected to the third anode connection electrode through a fourth connection contact hole defined through the third planarization layer 181.

In an embodiment, as illustrated in FIGS. 23 and 24, the second sub-power line VSL2 and the third sub-power line VSL3 are disposed in the first island pattern ISP1, the first connection pattern CNP1, and the second connection pattern CNP2, but an embodiment of the disclosure is not limited thereto. In the first connection pattern CNP1 and the second connection pattern CNP2, the second sub-power line VSL2 and the third sub-power line VSL3 are disposed on a surface other than a neutral plane, and thus, cracks may occur in the second sub-power line VSL2 and the third sub-power line VSL3 when the first connection pattern CNP1 and the second connection pattern CNP2 are bent. Therefore, the second sub-power line VSL2 and the third sub-power line VSL3 may be disposed on the first island pattern ISP1, of which a change in length is minimized even though the display device 10 is contracted and stretched, and may not be disposed on the first connection pattern CNP1 and the second connection pattern CNP2.

In an embodiment, as illustrated in FIGS. 23 and 24, the first power line VSL includes the first sub-power line VSL1, the second sub-power line VSL2, and the third sub-power line VSL3 overlapping each other in the third direction DR3, such that a cross-sectional area of the first power line VSL may be increased for a given length of the first power line VSL, and thus, resistance of the first power line VSL may be decreased. In such an embodiment, a first power line VSL of the second dummy area DMA2 may be substantially the same as the first power line VSL illustrated in FIGS. 23 and 24, and thus, any repetitive detailed description of the first power line VSL of the second dummy area DMA2 will be omitted.

FIG. 25 is a plan view illustrating a display area according to an alternative embodiment in detail. FIG. 26 is a plan view illustrating a first dummy area according to an alternative embodiment in detail. FIG. 27 is a plan view illustrating a first scan driver according to an alternative embodiment in detail. In FIG. 25, island patterns ISP1 to ISP4, connection patterns CNP1 to CNP8, and cutout parts CUP1 to CUP4 of the display area DA when the display device 10 is stretched are illustrated. In FIG. 26, island patterns ISP1 to ISP4, connection patterns CNP1 to CNP8, and cutout parts CUP1 to CUP4 of the first dummy area DMA1 when the display device 10 is stretched are illustrated. In FIG. 27, island patterns ISP1 to ISP4, connection patterns CNP1 to CNP8, and cutout parts CUP1 to CUP4 of the first scan driver SDC1 when the display device 10 is stretched are illustrated.

An embodiment of FIGS. 25 to 27 is substantially the same as the embodiment of FIGS. 5, 10, and 13 except that, when the display device 10 is contracted and stretched, the first to eighth connection patterns CNP1 to CNP8 are not bent or unbent, and the widths of the cutout parts CUP1 to CUP4 change instead. The same or like elements shown in FIGS. 25 to 27 have been labeled with the same reference characters as used above to describe the embodiment shown in FIGS. 5, 10, and 13, and any repetitive detailed description thereof will hereinafter be omitted or simplified.

Referring to FIGS. 25 to 27, a width of the first cutout part CUP1 in the first direction DR1 when the display device 10 is stretched may be greater than that when the display device 10 is contracted.
The width of the first cutout part CUP1 when the display device 10 is stretched may become greater toward the center of the first cutout part CUP1. That is, a width of the center of the first cutout part CUP1 when the display device 10 is stretched may be greater than a width of an edge of the first cutout part CUP1.

A width of the second cutout part CUP2 in the second direction DR2 when the display device 10 is stretched may be greater than that when the display device 10 is contracted. The width of the second cutout part CUP2 when the display device 10 is stretched may become greater toward the center of the second cutout part CUP2. That is, a width of the center of the second cutout part CUP2 when the display device 10 is stretched may be greater than a width of an edge of the second cutout part CUP2.

A width of the third cutout part CUP3 in the first direction DR1 when the display device 10 is stretched may be greater than that when the display device 10 is contracted. The width of the third cutout part CUP3 when the display device 10 is stretched may become greater toward the center of the third cutout part CUP3. That is, a width of the center of the third cutout part CUP3 when the display device 10 is stretched may be greater than a width of an edge of the third cutout part CUP3.

A width of the fourth cutout part CUP4 in the first direction DR1 when the display device 10 is stretched may be greater than that when the display device 10 is contracted. The width of the fourth cutout part CUP4 when the display device 10 is stretched may become greater toward the center of the fourth cutout part CUP4. That is, a width of the center of the fourth cutout part CUP4 when the display device 10 is stretched may be greater than a width of an edge of the fourth cutout part CUP4.

Since at least portions of the thin film transistor layer TFTL are removed by a laser in the cutout parts CUP1 to CUP4, stretchability of the cutout parts CUP may be higher than that of the connection patterns CNP. Shapes of the island patterns ISP and the connection patterns CNP do not change, and the first cutout part CUP1, the second cutout part CUP2, the third cutout part CUP3, and the fourth cutout part CUP4 may become wide.

In an embodiment, as illustrated in FIGS. 25 to 27, the first to fourth island patterns ISP1 to ISP4 partitioned by the cutout parts CUP1 to CUP4 are connected to each other by the first to eighth connection patterns CNP1 to CNP8, and thus, the widths of the cutout parts CUP1 to CUP4 when the display device 10 is stretched may be greater than those when the display device 10 is contracted. Therefore, shapes of the first to fourth island patterns ISP1 to ISP4 and the first to eighth connection patterns CNP1 to CNP8 do not change and the width of each of the cutout parts CUP1 to CUP4 may be increased or decreased. Accordingly, the display area DA may be stretched and contracted.

The invention should not be construed as being limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete and will fully convey the concept of the invention to those skilled in the art. While the invention has been particularly shown and described with reference to embodiments thereof, it will be understood by those of ordinary skill in the art that various changes in form and details may be made therein without departing from the spirit or scope of the invention as defined by the following claims. | 110,413
11862089 | DETAILED DESCRIPTION

Advantages and features of the present disclosure, and implementation methods thereof, will be clarified through the following example embodiments described with reference to the accompanying drawings. The present disclosure may, however, be embodied in different forms and should not be construed as limited to the example embodiments set forth herein. Rather, these example embodiments are provided so that this disclosure may be sufficiently thorough and complete to assist those skilled in the art to fully understand the scope of the present disclosure. Further, the protected scope of the present disclosure is defined by the claims and their equivalents.

Like reference numerals designate like elements throughout, unless otherwise specified. Names of the respective elements used in the following explanations are selected only for convenience of writing the specification and may thus be different from those used in actual products. In the following description, where a detailed description of a relevant known function or configuration may unnecessarily obscure an aspect of example embodiments of the present disclosure, a detailed description of such known function or configuration may be omitted.

Where the terms "comprise," "have," "include," "contain," "constitute," "made up of," "formed of," and the like are used, one or more other elements may be added unless the terms are used with a more limiting term, such as "only." An element described in a singular form is intended to include plural forms, and vice versa, unless the context clearly indicates otherwise.

Although the terms "first," "second," A, B, (a), (b), and the like may be used herein to describe various elements, these elements should not be interpreted to be limited by these terms, as they are not used to define a particular order or precedence. These terms are used only to distinguish one element from another. For example, a first element could be termed a second element, and, similarly, a second element could be termed a first element, without departing from the scope of the present disclosure.

Where an expression that an element or layer "is connected to," "is coupled to," "is adhered to," "contacts," or "overlaps" another element or layer is used, the element or layer can not only be directly connected, coupled, or adhered to or directly contact or overlap another element or layer, but also be indirectly connected, coupled, or adhered to or indirectly contact or overlap another element or layer with one or more intervening elements or layers "disposed" or "interposed" between the elements or layers, unless otherwise specified.

Where a temporal relationship between processes, operations, flows, steps, events, or the like is described as, for example, "after," "subsequent," "next," or "before," the relationship encompasses not only a continuous or sequential order but also a non-continuous or non-sequential relationship unless a more limiting term, such as "just," "immediate(ly)," or "direct(ly)," is used.

The shapes, sizes, ratios, angles, numbers, and the like, which are illustrated in the drawings to describe various example embodiments of the present disclosure, are merely given by way of example. Therefore, the present disclosure is not limited to the illustrations in the drawings.
In construing an element, the element (including its dimensions and relative size) is to be construed as including an ordinary error or tolerance range even where no explicit description of such an error or tolerance range is provided. A tolerance or error range may be caused by various factors, such as process factors, internal or external impact, noise, and the like. Further, the term "may" fully encompasses all the meanings of the term "can."

Reference will now be made in detail to embodiments of the present disclosure, examples of which may be illustrated in the accompanying drawings.

FIG. 1 is a diagram schematically illustrating a configuration of a display device according to various example embodiments of the present disclosure.

As illustrated in FIG. 1, a display device 100 according to an example embodiment of the present disclosure may include a display panel 110 where a plurality of gate lines GL and data lines DL are connected, and a plurality of subpixels SP are arranged in a matrix form. The display device 100 may further include a gate driving circuit 120 for driving the plurality of gate lines GL, a data driving circuit 130 for supplying a data voltage through the plurality of data lines DL, a timing controller 140 for controlling the gate driving circuit 120 and the data driving circuit 130, and a power management circuit 150.

The display panel 110 may display an image based on a scan signal transferred from the gate driving circuit 120 through the plurality of gate lines GL and the data voltage transferred from the data driving circuit 130 through the plurality of data lines DL. In the case of a liquid crystal display, the display panel 110 may include a liquid crystal layer formed between two substrates and may be operated in any known mode, such as a twisted nematic (TN) mode, a vertical alignment (VA) mode, an in-plane switching (IPS) mode, or a fringe field switching (FFS) mode. In the case of an organic light emitting display, the display panel 110 may be implemented in a top emission scheme, a bottom emission scheme, or a dual-emission scheme.

In the display panel 110, a plurality of pixels may be arranged in a matrix form. Each pixel may include subpixels SP having different colors, e.g., a white subpixel, a red subpixel, a green subpixel, and a blue subpixel. The subpixels SP may be defined respectively by the plurality of data lines DL and the plurality of gate lines GL. One subpixel SP may include, e.g., a thin film transistor (TFT) formed at the intersection between one data line DL and one gate line GL, a light emitting element, such as an organic light emitting diode, charged with the data voltage, and a storage capacitor electrically connected to the light emitting element to maintain the voltage.

For example, if the display device 100 having a resolution of 2,160×3,840 includes four subpixels SP of white (W), red (R), green (G), and blue (B) per pixel, 2,160 gate lines GL are provided, and each of the 3,840 pixel columns is connected to four data lines DL, one for each of the WRGB subpixels. Thus, 3,840×4=15,360 data lines DL may be provided in the display device 100. Each subpixel SP may be disposed at the intersection between the corresponding gate line GL and the corresponding data line DL.

The gate driving circuit 120 may be controlled by the timing controller 140 to sequentially output scan signals to the plurality of gate lines GL disposed in the display panel 110, controlling the driving timing of the plurality of subpixels SP.
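A quick sketch of the data-line arithmetic above; the resolution and the WRGB subpixel layout come from the example in the text, while the function itself is only an illustration:

# One data line per subpixel column: 3,840 pixel columns x 4 WRGB subpixels
# gives the 15,360 data lines of the example above.
def data_line_count(pixel_columns: int, subpixels_per_pixel: int) -> int:
    return pixel_columns * subpixels_per_pixel

assert data_line_count(3840, 4) == 15360  # matches the 2,160 x 3,840 WRGB example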
In the display device 100 having a resolution of, e.g., 2,160×3,840, sequentially outputting the scan signal to the 2,160 gate lines GL from the first gate line to the 2,160th gate line may be referred to as a 2,160-phase driving operation. Sequentially outputting the scan signal to each unit of four gate lines GL, e.g., sequentially outputting the scan signal to the fifth gate line to the eighth gate line after sequentially outputting the scan signal to the first gate line to the fourth gate line, is referred to as a 4-phase driving operation. In other words, sequentially outputting the scan signal to every N gate lines GL may be referred to as N-phase driving.

The gate driving circuit 120 may include one or more gate driving integrated circuits (GDICs). Depending on the driving schemes implemented, the gate driving circuit 120 may be positioned on only one side, or on each of two opposite sides, of the display panel 110. The gate driving circuit 120 may be implemented in a gate-in-panel (GIP) form and be embedded in the bezel area of the display panel 110.

The data driving circuit 130 may receive image data DATA from the timing controller 140 and convert the received image data DATA into an analog data voltage. Then, as the data voltage is output to each data line DL according to the timing of the scan signal applied to the corresponding gate line GL, each subpixel SP connected to the data line DL may display a light emitting signal having the brightness corresponding to the data voltage.

Likewise, the data driving circuit 130 may include one or more source driving integrated circuits SDIC. The source driving integrated circuit SDIC may be connected to the bonding pad of the display panel 110 in a tape automated bonding (TAB) type or a chip-on-glass (COG) type, or may be disposed directly on the display panel 110. In some cases, each source driving integrated circuit SDIC may be integrated and disposed on the display panel 110. Further, each source driving integrated circuit SDIC may be implemented in a chip-on-film (COF) type. In this case, each source driving integrated circuit SDIC may be mounted on a circuit film and may be electrically connected to the corresponding data lines DL of the display panel 110 through the circuit film.

The timing controller 140 may supply various control signals to the gate driving circuit 120 and the data driving circuit 130 and may control the operation of the gate driving circuit 120 and the data driving circuit 130. In other words, the timing controller 140 may control the gate driving circuit 120 to output a scan signal according to the timing implemented in each frame and, on the other hand, may transfer the image data DATA received from an external device (e.g., via a host system 200) to the data driving circuit 130. In this case, the timing controller 140 may receive, from an external host system 200, several timing signals including, e.g., a vertical synchronization signal Vsync, a horizontal synchronization signal Hsync, a data enable signal DE, and a main clock MCLK, together with the image data DATA. The host system 200 may be any one of a television (TV) system, a set-top box, a navigation system, a personal computer (PC), a home theater system, a mobile device, and a wearable device, but the present disclosure is not limited thereto.

Accordingly, the timing controller 140 may generate a control signal according to various timing signals received from the host system 200 and may transfer the control signal to the gate driving circuit 120 and the data driving circuit 130.
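For illustration, the N-phase ordering described at the start of this passage (the scan signal output sequentially within each unit of N gate lines) can be sketched as follows; the function below only models the grouping of line indices, not any actual driver hardware:

# Group gate lines into consecutive units of N; within each unit the scan
# signal is output sequentially (4-phase: [1, 2, 3, 4], then [5, 6, 7, 8], ...).
def n_phase_groups(total_gate_lines: int, n: int) -> list:
    lines = list(range(1, total_gate_lines + 1))
    return [lines[i:i + n] for i in range(0, len(lines), n)]

print(n_phase_groups(2160, 4)[:2])  # [[1, 2, 3, 4], [5, 6, 7, 8]]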
For example, the timing controller140may output several gate control signals including, e.g., a gate start pulse GSP, a gate clock GCLK, and a gate output enable signal GOE, to control the gate driving circuit120. The gate start pulse GSP may control the timing at which one or more gate driving integrated circuits GDIC constituting the gate driving circuit120start operation. The gate clock GCLK is a clock signal commonly input to one or more gate driving integrated circuits GDIC and may control the shift timing of the scan signal. The gate output enable signal GOE may designate timing information about one or more gate driving integrated circuits GDICs. The timing controller140may output various data control signals including, e.g., a source start pulse SSP, a source sampling clock SCLK, and a source output enable signal SOE, to control the data driving circuit130. The source start pulse SSP may control the timing at which one or more source driving integrated circuits SDIC constituting the data driving circuit130start data sampling. The source sampling clock SCLK is a clock signal that may control the timing of sampling data in the source driving integrated circuit(s) SDIC. The source output enable signal SOE may control the output timing of the data driving circuit130. The display device100may further include a power management circuit150that supplies various voltages or currents to, e.g., the display panel110, the gate driving circuit120, and the data driving circuit130or controls various voltages or currents to be supplied. The power management circuit150may adjust the direct current (DC) input voltage Vin supplied from the host system200to generate power required to drive the display panel110, the gate driving circuit120, and the data driving circuit130. A subpixel SP may be positioned at the intersection between the corresponding gate line GL and the corresponding data line DL, and a light emitting element may be disposed in each subpixel SP. For example, the organic light emitting diode display may include a light emitting element, such as an organic light emitting diode, in each subpixel SP and may display an image by controlling the current flowing to the light emitting element according to the data voltage. The display device100may be one of various types of devices, such as a liquid crystal display, an organic light emitting diode display, or a plasma display panel. FIG.2is a view illustrating an example of a system of a display device according to example embodiments of the present disclosure. As illustrated inFIG.2, in the display device100according to example embodiments of the present disclosure, the source driving integrated circuit(s) SDIC included in the data driving circuit130may be implemented in a chip-on-film (COF) type among various types (e.g., TAB, COG, or COF), and the gate driving circuit120may be implemented in a gate-in-panel (GIP) type among various types (e.g., TAB, COG, COF, or GIP). Where the gate driving circuit120is implemented in the GIP type, the plurality of gate driving integrated circuits GDIC included in the gate driving circuit120may be directly formed in the bezel area of the display panel110. In this case, the gate driving integrated circuits GDIC may receive various signals (e.g., a clock signal, a gate high signal, a gate low signal, etc.) for generating scan signals through gate driving-related signal lines disposed in the bezel area.
Likewise, one or more source driving integrated circuits SDIC included in the data driving circuit130each may be mounted on a source film SF, and one side of the source film SF may be electrically connected with the display panel110. Lines for electrically connecting the source driver integrated circuit SDIC and the display panel110may be disposed on the source film SF. The display device100may include at least one source printed circuit board SPCB for circuit connection between a plurality of source driving integrated circuits SDIC and other devices and may include a control printed circuit board CPCB for mounting control components and various electric devices. The other side of the source film SF where the source driving integrated circuit SDIC is mounted may be connected to at least one source printed circuit board SPCB. In other words, one side of the source film SF where the source driving integrated circuit SDIC is mounted may be electrically connected with the display panel110, and the other side thereof may be electrically connected with the source printed circuit board SPCB. The timing controller140and the power management circuit (power management IC)150may be mounted on the control printed circuit board CPCB. The timing controller140may control the operation of the data driving circuit130and the gate driving circuit120. The power management circuit150may supply power voltage or current to the display panel110, the data driving circuit130, and the gate driving circuit120and may control the supplied voltage or current. At least one source printed circuit board SPCB and control printed circuit board CPCB may be circuit-connected through at least one connection member. The connection member may include, e.g., a flexible printed circuit FPC or a flexible flat cable FFC. The at least one source printed circuit board SPCB and control printed circuit board CPCB may be integrated into a single printed circuit board. The display device100may further include a set board170electrically connected to the control printed circuit board CPCB. In this case, the set board170may also be referred to as a power board. A main power management circuit160for managing the overall power of the display device100may be disposed on the set board170. The main power management circuit160may interwork with the power management circuit150. In the so-configured example display device100, the power voltage may be generated in the set board170and be transferred to the power management circuit150in the control printed circuit board CPCB. The power management circuit150may transfer a power voltage for display driving or characteristic value sensing to the source printed circuit board SPCB through the flexible printed circuit FPC or flexible flat cable FFC. The power voltage transferred to the source printed circuit board SPCB may be supplied to emit light or sense a specific subpixel SP in the display panel110through the source driving integrated circuit SDIC. Each of the subpixels SP arranged in the display panel110in the display device100may include a light emitting element and a circuit element, e.g., a driving transistor, for driving the light emitting element, e.g., an organic light emitting diode. The type and number of circuit elements constituting each subpixel SP may be varied depending on functions to be provided and design schemes. FIG.3is a diagram illustrating an example of a subpixel circuit of a display device. 
As illustrated inFIG.3, an example subpixel circuit may include one or more transistors and a capacitor and may have a light emitting element disposed therein. For example, the subpixel circuit may include a driving transistor DRT, a scan transistor SCT, a sensing transistor SENT, a storage capacitor Cst, and a light emitting diode ED. The driving transistor DRT may include the first node N1, second node N2, and third node N3. The first node N1of the driving transistor DRT may be a gate node to which the data voltage Vdata is applied from the data driving circuit130through the corresponding data line DL when the scan transistor SCT is turned on. The second node N2of the driving transistor DRT may be electrically connected with the anode electrode of the light emitting diode ED and may be one of the source node and drain node. The third node N3of the driving transistor DRT may be electrically connected with the driving voltage line DVL to which a high-potential voltage EVDD is applied and may be the other of the drain node and the source node. In this case, during a display driving period, a high-potential voltage EVDD for displaying an image may be supplied to the driving voltage line DVL. For example, the high-potential voltage EVDD for displaying an image may be 27V. The scan transistor SCT may be electrically connected between the first node N1of the driving transistor DRT and the data line DL, and a corresponding gate line GL may be connected to the gate node of the scan transistor SCT. Thus, the scan transistor SCT may be operated according to the first scan signal SCAN1supplied through the gate line GL. When turned on, the scan transistor SCT may transfer the data voltage Vdata supplied through the data line DL to the gate node (i.e., the first node N1) of the driving transistor DRT, thereby controlling the operation of the driving transistor DRT. The sensing transistor SENT may be electrically connected between the second node N2of the driving transistor DRT and the reference voltage line RVL, and a corresponding gate line GL may be connected to the gate node of the sensing transistor SENT. The sensing transistor SENT may be operated according to the second scan signal SCAN2supplied through this gate line GL. When the sensing transistor SENT is turned on, a reference voltage Vref supplied through the reference voltage line RVL may be transferred to the second node N2of the driving transistor DRT. In other words, as the scan transistor SCT and the sensing transistor SENT are controlled, the voltage of the first node N1and the voltage of the second node N2of the driving transistor DRT may be controlled, so that the current for driving the light emitting diode ED may be supplied. The gate nodes of the scan transistor SCT and the sensing transistor SENT may be commonly connected to one gate line GL or may be connected to different gate lines GL. An example is shown in which the scan transistor SCT and the sensing transistor SENT are connected to different gate lines GL. In this example case, the scan transistor SCT and the sensing transistor SENT may be independently controlled, respectively, by the first scan signal SCAN1and the second scan signal SCAN2transferred through different gate lines GL. 
On the other hand, if the scan transistor SCT and the sensing transistor SENT are connected commonly to one gate line GL, the scan transistor SCT and the sensing transistor SENT may be simultaneously controlled by the first scan signal SCAN1or by the second scan signal SCAN2transferred through one gate line GL, and the aperture ratio of the subpixel SP may increase. Each transistor disposed in the subpixel circuit may be an N-type transistor or a P-type transistor. In the example shown inFIG.3, the transistors are N-type transistors. The storage capacitor Cst may be electrically connected between the first node N1and second node N2of the driving transistor DRT and may maintain the data voltage Vdata during one frame. The storage capacitor Cst may also be connected between the first node N1and third node N3of the driving transistor DRT depending on the type of the driving transistor DRT. The anode electrode of the light emitting diode ED may be electrically connected with the second node N2of the driving transistor DRT, and a low-potential voltage EVSS may be applied to the cathode electrode of the light emitting diode ED. The low-potential voltage EVSS may be a ground voltage or a voltage higher or lower than the ground voltage. The low-potential voltage EVSS may be varied depending on the driving state. For example, the low-potential voltage EVSS at the time of display driving and the low-potential voltage EVSS at the time of sensing driving may be set to differ from each other. The scan transistor SCT and the sensing transistor SENT may be referred to as scan transistors controlled through scan signals SCAN1and SCAN2, respectively. The structure of the subpixel SP may further include one or more additional transistors or, in some cases, further include one or more additional capacitors. In this case, to effectively sense a characteristic value, e.g., a threshold voltage or mobility, of the driving transistor DRT, the display device100may use a method for measuring the current flow by the voltage charged to the storage capacitor Cst during a characteristic value sensing period of the driving transistor DRT. This is referred to as current sensing. In other words, it is possible to figure out the characteristic value, or a variation in characteristic value, of the driving transistor DRT in the subpixel SP by measuring the current flow by the voltage charged to the storage capacitor Cst during the characteristic value sensing period of the driving transistor DRT. In this case, the reference voltage line RVL may serve not only to transfer the reference voltage Vref but also as a sensing line for sensing the characteristic value of the driving transistor DRT in the subpixel. Thus, the reference voltage line RVL may also be referred to as a sensing line or a sensing channel. More specifically, the characteristic value or a change in the characteristic value of the driving transistor DRT may correspond to a difference between the gate node voltage and the source node voltage of the driving transistor DRT. The compensation for the characteristic value of the driving transistor DRT may be performed by external compensation that senses and compensates for the characteristic value of the driving transistor DRT using an external compensation circuit. Alternatively, the compensation may be performed by internal compensation that senses and compensates for the characteristic value of the driving transistor DRT inside the subpixel SP, rather than using an additional external configuration. 
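The current sensing mentioned above amounts to inferring a current from the voltage ramp it produces on the sensing line. A minimal Python sketch of that relation, assuming an illustrative line capacitance (the disclosure gives no component values):

C_LINE = 30e-12  # assumed capacitance seen on the reference voltage line RVL, in farads

def sensed_current(v_start: float, v_end: float, dt: float) -> float:
    # Estimate the current charging the sensing line as I = C * dV/dt.
    return C_LINE * (v_end - v_start) / dt

print(sensed_current(1.0, 1.6, 10e-6))  # 1.8e-06 A for a 0.6 V rise in 10 us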
In this case, the external compensation may be performed before the display device100is shipped out, and the internal compensation may be performed after the display device100is shipped out. However, internal compensation and external compensation may be performed together even after the display device100is shipped out. FIG.4is a signal timing diagram illustrating an example of external compensation for a threshold voltage of a driving transistor in a display device. As shown inFIG.4, the sensing of the threshold voltage Vth of the driving transistor DRT in the example display device100may be performed in an initialization phase INITIAL, a tracking phase TRACKING, and a sampling phase SAMPLING. In this case, since the scan transistor SCT and the sensing transistor SENT are simultaneously turned on and turned off for sensing the threshold voltage Vth of the driving transistor DRT, the first scan signal SCAN1and the second scan signal SCAN2together may be applied through one gate line GL, or the first scan signal SCAN1and the second scan signal SCAN2may respectively be applied at the same time through different gate lines GL. The initialization phase INITIAL is a period in which the second node N2of the driving transistor DRT may be charged with the reference voltage Vref for sensing the threshold voltage Vth of the driving transistor DRT, and the first scan signal SCAN1and the second scan signal SCAN2which have high levels may be applied through the gate line(s) GL. The tracking phase TRACKING is a period in which charges may be stored in the storage capacitor Cst after the charging of the second node N2of the driving transistor DRT is completed. The sampling phase SAMPLING is a period in which a current flow from the charge stored in the storage capacitor Cst is detected after the storage capacitor Cst of the driving transistor DRT is charged. If the first scan signal SCAN1and the second scan signal SCAN2at the turn-on level are simultaneously applied in the initialization phase INITIAL, the scan transistor SCT may be turned on. Accordingly, the first node N1of the driving transistor DRT may be initialized to the sensing data voltage Vdata_sen for sensing the threshold voltage Vth. The sensing transistor SENT may also be turned on by the first scan signal SCAN1and the second scan signal SCAN2at the turn-on level, and the reference voltage Vref may be applied through the reference voltage line RVL. Thus, the second node N2of the driving transistor DRT may be initialized to the reference voltage Vref. In the tracking phase TRACKING, the voltage of the second node N2of the driving transistor DRT reflecting the threshold voltage Vth of the driving transistor DRT may be tracked. To this end, in the tracking phase TRACKING, the scan transistor SCT and the sensing transistor SENT may remain in the turned-on state, and the reference voltage Vref applied through the reference voltage line RVL may be cut off. Accordingly, the second node N2of the driving transistor DRT may float, and the voltage at the second node N2of the driving transistor DRT may start to rise from the reference voltage Vref. In this case, since the sensing transistor SENT is on, the increase in the voltage at the second node N2of the driving transistor DRT may lead to an increase in the voltage on the reference voltage line RVL. In this process, the voltage at the second node N2of the driving transistor DRT may be increased and then saturated. 
The saturation voltage at the time when the second node N2of the driving transistor DRT reaches the saturated state may correspond to the difference (Vdata_sen−Vth) between the sensing data voltage Vdata_sen for sensing the threshold voltage Vth and the threshold voltage Vth of the driving transistor DRT. In the sampling phase SAMPLING, the high-level first scan signal SCAN1and second scan signal SCAN2to the gate line(s) GL may be maintained, and the charge stored in the storage capacitor Cst of the driving transistor DRT may be sensed by the characteristic value sensing circuit included in the data driving circuit130. FIG.5is a signal timing diagram illustrating an example of external compensation for a mobility of a driving transistor in a display device. As shown inFIG.5, like the sensing of the threshold voltage Vth, the sensing of the mobility of the driving transistor DRT in the example display device100may be performed in an initialization phase INITIAL, a tracking phase TRACKING, and a sampling phase SAMPLING. In the initialization phase INITIAL, the scan transistor SCT may be turned on by the first scan signal SCAN1at the turn-on level, so that the first node N1of the driving transistor DRT may be initialized to the data voltage Vdata_sen for mobility sensing. Further, the sensing transistor SENT may be turned on by the second scan signal SCAN2at the turn-on level and, in this state, the second node N2of the driving transistor DRT may be initialized to the reference voltage Vref. The tracking phase TRACKING is a phase for tracking the mobility of the driving transistor DRT. The mobility of the driving transistor DRT may indicate the current driving capability of the driving transistor DRT, and the mobility of the driving transistor DRT may be calculated by tracking the voltage at the second node N2of the driving transistor DRT through the tracking phase TRACKING. In the tracking phase TRACKING, the scan transistor SCT may be turned off by the first scan signal SCAN1at the turn-off level, and the switch through which the reference voltage Vref is applied to the reference voltage line RVL may be cut off. Accordingly, both the first node N1and the second node N2of the driving transistor DRT may float, and the voltages at the first node N1and the second node N2of the driving transistor DRT may both increase. In particular, since the voltage at the second node N2of the driving transistor DRT may be initialized to the reference voltage Vref, it may start to increase from the reference voltage Vref. In this case, since the sensing transistor SENT is on, the increase in the voltage at the second node N2of the driving transistor DRT may lead to an increase in the voltage on the reference voltage line RVL. In the sampling phase SAMPLING, the characteristic value sensing circuit may detect the voltage of the second node N2of the driving transistor DRT, a predetermined amount of time Δt after the voltage at the second node N2starts to increase. In this case, the sensing voltage detected by the characteristic value sensing circuit may indicate a voltage Vref+ΔV, which is the reference voltage Vref plus a predetermined voltage ΔV. The mobility of the driving transistor DRT may be calculated based on the so-detected sensing voltage Vref+ΔV, the reference voltage Vref which is already known, and the amount of time Δt for the voltage at the second node N2to increase by ΔV. 
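The two readouts described above reduce to simple relations: Vth = Vdata_sen − Vsat for the threshold voltage, and a quantity proportional to ΔV/Δt for the mobility. The following Python sketch restates them; the variable names and example numbers are ours:

def threshold_voltage(vdata_sen: float, v_saturated: float) -> float:
    # Vth sensing: node N2 saturates at Vdata_sen - Vth,
    # so Vth = Vdata_sen - V_saturated.
    return vdata_sen - v_saturated

def mobility_metric(v_ref: float, v_sensed: float, dt: float) -> float:
    # Mobility sensing: the sensed voltage is Vref + dV after dt,
    # and the mobility is proportional to dV/dt.
    return (v_sensed - v_ref) / dt

print(threshold_voltage(vdata_sen=5.0, v_saturated=3.8))  # 1.2 (example Vth, in V)
print(mobility_metric(v_ref=1.0, v_sensed=1.5, dt=2e-6))  # 250000.0 (example dV/dt, in V/s)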
In other words, the mobility of the driving transistor DRT is proportional to the voltage variation per unit time, ΔV/Δt, on the reference voltage line RVL through the tracking phase TRACKING and the sampling phase SAMPLING. Accordingly, the mobility of the driving transistor DRT may be proportional to the slope of the voltage waveform on the reference voltage line RVL. FIG.6is a signal timing diagram illustrating an example of internal compensation for a threshold voltage and mobility of a driving transistor in a display device. As shown inFIG.6, the internal compensation for the characteristic value of the driving transistor DRT in the display device100may proceed in an initialization phase INITIAL, a threshold voltage sensing phase Vth SENSING, a mobility compensation phase μ COMPENSATION, and a light emission phase EMISSION. In the initialization phase INITIAL, a high-level second scan signal SCAN2may be input to turn on the sensing transistor SENT, thereby initializing the voltage at the second node N2, that is, the source node voltage of the driving transistor DRT, to a reference voltage Vref. Thereafter, the high-level first scan signal SCAN1may be supplied to turn on the scan transistor SCT, and the data voltage Vdata may be supplied to the first node N1, i.e., the gate node of the driving transistor DRT, to turn on the driving transistor DRT. Subsequently, if the data voltage Vdata is lowered to the level of the offset voltage Vos, the voltage of the first node N1may become the level of the offset voltage Vos. If the low-level second scan signal SCAN2is applied to turn off the sensing transistor SENT in the threshold voltage sensing phase Vth SENSING, the voltage of the second node N2may rise to the voltage of the difference between the offset voltage Vos and the threshold voltage Vth of the driving transistor DRT through the driving transistor DRT, so that the storage capacitor Cst is charged with the voltage of the threshold voltage Vth level. In the mobility compensation phase μ COMPENSATION, the voltage of the first node N1may be raised to the level of the data voltage Vdata by applying the grayscale to be displayed through the display panel110, that is, the corresponding data voltage Vdata. Accordingly, the second node N2may be gradually charged according to the mobility (μ) characteristic of the driving transistor DRT. As a result, the storage capacitor Cst may store the difference voltage which is the sum of the data voltage Vdata and the threshold voltage Vth minus the voltage variation ΔV according to the offset voltage Vos and the mobility μ. In the light emission phase EMISSION, a low-level first scan signal SCAN1may be applied to turn off the scan transistor SCT, so that the driving transistor DRT applies, to the light emitting diode ED, a current in which the threshold voltage Vth and mobility μ have been corrected, according to the voltage level stored in the storage capacitor Cst. Such internal compensation or external compensation may be performed after a power-on signal is generated in the display device100and before display driving starts. For example, if a power-on signal is applied to the display device100, the timing controller140may load various parameters for driving the display panel110and then may drive the display. In this case, the parameters for driving the display panel110may include information about the sensing and compensation for characteristic values previously performed on the display panel110.
In the parameter loading process, the sensing and compensation of characteristic values (the threshold voltage and mobility) of the driving transistor DRT may be performed. As described above, a process in which the characteristic value is sensed in the parameter loading process after the power-on signal is generated may be referred to as an on-sensing process. Alternatively, a period in which the characteristic value(s) of the driving transistor DRT are sensed and compensated for may proceed after a power-off signal of the display device100is generated. For example, when a power-off signal is generated in the display device100, the timing controller140may cut off the data voltage Vdata supplied to the display panel110and may sense the characteristic value(s) of the driving transistor DRT for a predetermined time. As such, a sensing process for sensing a characteristic value in a state in which the data voltage is cut off as a power-off signal is generated may be referred to as an off-sensing process. Further, the sensing and compensation for the characteristic value(s) of the driving transistor DRT may be performed in real time while the display is driven. This sensing process is referred to as a real-time (RT) sensing process. In the real-time sensing process, the sensing process may be performed on one or more subpixels SP in one or more subpixel SP lines, in each blank period during the display driving period. In other words, during the display driving period when an image is displayed on the display panel110, a blank period in which the data voltage is not supplied to the subpixel SP may exist within one frame or between one frame and the next frame. In the blank period, characteristic value sensing and compensation for one or more subpixels SP may be performed. As such, when the sensing process is performed in the blank period, the line(s) of subpixels SP on which the sensing process is performed may be randomly selected. Accordingly, after the sensing process in the blank period is performed, an abnormality that may appear in the display driving period may be alleviated. During the display driving period after the sensing process is performed during the blank period, a recovery data voltage may be supplied to the subpixels SP where the sensing process has been performed. Accordingly, in the display driving period after the sensing process in the blank period, abnormalities in the line(s) of subpixel SP where the sensing process has been completed may be further alleviated. In this case, since the threshold voltage sensing of the driving transistor DRT may take a long time as saturation of the voltage at the second node N2of the driving transistor DRT may take a relatively long time, the sensing and compensation of the threshold voltage Vth may be performed primarily as an off-sensing process. In contrast, since the mobility sensing of the driving transistor DRT may take a relatively short time as compared to the threshold voltage sensing process, the mobility sensing and compensation may be performed as a real-time sensing process. However, in the display device100, the light emitting element ED constituting the subpixel may also deteriorate according to the driving time. The above-described internal compensation and external compensation may not compensate for both the deterioration of the light emitting element ED and the characteristic value(s) of the driving transistor DRT. 
Accordingly, embodiments of the present disclosure provide a subpixel circuit, a display panel, and a display device, capable of compensating for both the deterioration of the light emitting element ED and the deterioration of the driving transistor DRT by presenting a new subpixel circuit controlled so that the driving current flowing through the light emitting element ED may be proportional to the data voltage Vdata. As a result, there may be provided a subpixel circuit, a display panel, and a display device which may maintain the driving current flowing through the light emitting element ED constant although the characteristic value(s) of the driving transistor DRT are varied. FIG.7is a block diagram illustrating a subpixel circuit according to example embodiments of the present disclosure. As shown inFIG.7, a subpixel circuit300according to example embodiments of the present disclosure may include a reference circuit310, a light emitting circuit320, an amplification circuit330, and an input circuit340. The reference circuit310may receive the high-potential voltage EVDD and may control variations in the driving current Id flowing through the light emitting circuit320. For example, when the control voltage Vc at the input node of the light emitting circuit320and the data voltage Vdata have the same potential, the current I3applied to the amplification circuit330becomes 0 so that the reference current Iref flowing through the reference circuit310and the driving current Id flowing through the light emitting circuit320have the same value. The high-potential voltage EVDD may have a level required to display an image during the display driving period. For example, the high-potential voltage EVDD to display an image may be 27V, but the present disclosure is not limited thereto. The light emitting circuit320may be positioned between the control voltage Vc and the low-potential voltage EVSS and may control the operation of the light emitting element ED according to the driving voltage Vd at the output node of the amplification circuit330. When the light emitting element ED is turned on, the driving current Id may flow through the light emitting circuit320. The low-potential voltage EVSS may be a ground voltage or a voltage higher or lower than the ground voltage. The low-potential voltage EVSS may be varied depending on the driving state. For example, the low-potential voltage EVSS at the time of display driving and the low-potential voltage EVSS at the time of sensing driving may be set to differ from each other. The amplification circuit330may compare the control voltage Vc and the data voltage Vdata to generate a driving voltage Vd for controlling the operation of the light emitting circuit320. For example, the amplification circuit330may be formed of an operational amplifier that has an inverting input terminal to which the control voltage Vc is applied and a non-inverting input terminal (+) to which the output voltage from the input circuit340is applied. The resistance value of the light emitting circuit320may be reduced in inverse proportion to the driving voltage Vd of the amplification circuit330. When the control voltage Vc is larger than the data voltage Vdata, the driving voltage Vd corresponding to the output node of the amplification circuit330may be reduced. 
Accordingly, when the control voltage Vc and the data voltage Vdata have the same level, the operation of the amplification circuit330may be stopped, and the control voltage Vc may remain at the same level as the data voltage Vdata. The input circuit340may determine the time when the data voltage Vdata is applied to the non-inverting input terminal (+) of the amplification circuit330by the scan signal SCAN. In other words, the example subpixel circuit300of the present disclosure may be controlled to allow the control voltage Vc to remain at the level proportional to the data voltage Vdata, so that the driving current Id flowing through the light emitting element ED is proportional to the level of the data voltage Vdata. As a result, regardless of degradation of the light emitting element ED or the characteristic value(s) of the driving transistor, a current proportional to the data voltage Vdata may flow through the light emitting element ED, keeping the luminance of the display device100constant. FIG.8is a diagram illustrating a detailed configuration of a subpixel circuit according to example embodiments of the present disclosure. As shown inFIG.8, a subpixel circuit300according to example embodiments of the present disclosure may include a reference circuit310, a light emitting circuit320, an amplification circuit330, and an input circuit340. Described below is the example subpixel circuit300to which the n-th scan signal SCAN(n) is applied among the plurality of subpixels constituting the display panel110, for example. The reference circuit310may include a reference transistor Tref having a drain node and a gate node, at which the control voltage Vc may be provided, and a source node to which a high-potential voltage EVDD may be applied. The light emitting circuit320may include a light emitting element ED having a cathode electrode, to which a low-potential voltage EVSS may be applied, and a driving transistor Td having a drain node connected to the anode electrode of the light emitting element ED, a source node to which the control voltage Vc may be applied, and a gate node to which the driving voltage Vd of the amplification circuit330may be applied. The reference transistor Tref may be turned on while the high-potential voltage EVDD is applied to the source node and, when the driving transistor Td is turned on by the driving voltage Vd of the amplification circuit330, the driving current Id may flow through the light emitting circuit320. In this case, when the control voltage Vc and the data voltage Vdata have the same level of potential, the entire reference current Iref flowing through the reference circuit310may flow through the light emitting circuit320, and the driving current Id may have the same value as the reference current Iref. The amplification circuit330may include a control transistor Tc, a reset transistor Trst, and a first capacitor C1. The control transistor Tc may have a gate node, to which the control voltage Vc may be applied, and a drain node connected to the gate node of the driving transistor Td. The reset transistor Trst may have a source node to receive a reset voltage Vrst, a gate node to which the (n−1)-th scan signal SCAN(n−1) may be applied, and a drain node shared with the control transistor Tc. The first capacitor C1may be connected to the drain node of the control transistor Tc to transfer a power voltage Vp for driving the driving transistor Td. The reset voltage Vrst may be applied at a voltage level configured to turn off the driving transistor Td. 
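For intuition only, the negative feedback described above for the amplification circuit (the driving voltage Vd falls while the control voltage Vc exceeds the data voltage Vdata, and the resulting current pulls Vc back down until they equalize) can be mimicked with a toy discrete-time loop in Python; the gain and step count are arbitrary and merely show that Vc settles toward Vdata:

def settle_control_voltage(vdata: float, vc0: float, gain: float = 0.5, steps: int = 40) -> float:
    # Each iteration reduces the error (Vc - Vdata), as the amplification
    # circuit and light emitting circuit do in continuous time.
    vc = vc0
    for _ in range(steps):
        vc -= gain * (vc - vdata)
    return vc

print(round(settle_control_voltage(vdata=18.0, vc0=27.0), 6))  # ~18.0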
The power voltage Vp may be applied at a level capable of driving the driving transistor Td at a certain point in time, and the level may be changed by the charge stored in the first capacitor C1. In other words, the power voltage Vp may not continuously maintain a constant level of voltage. The input circuit340may include a switching transistor Tsw and a second capacitor C2. The switching transistor Tsw may have a gate node to which the n-th scan signal SCAN(n) may be applied, a source node to which the data voltage Vdata may be applied, and a drain node connected to the source node of the control transistor Tc. The second capacitor C2may be connected between the drain node of the switching transistor Tsw and the low-potential voltage EVSS. Accordingly, the input circuit340may supply the data voltage Vdata to the amplification circuit330by the n-th scan signal SCAN(n). The second capacitor C2may serve to stably transfer the data voltage Vdata. The transistors Td, Tref, Tc, Trst, and Tsw constituting the example subpixel circuit300may be P-type transistors or N-type transistors. The P-type transistor is relatively more reliable than the N-type transistor. In the case of the P-type transistor, since the driving transistor Td may be fixed to the high-potential voltage EVDD during the period when the light emitting element ED emits light, the current flowing through the light emitting element ED may be supplied stably without significant fluctuation. When operating in the saturation area, the P-type transistor may flow a constant current regardless of a change in the threshold voltage, providing relatively high reliability. On the other hand, since the N-type transistor uses electrons, not holes, as carriers, it has higher mobility than the P-type transistor so that the switching speed may be increased. The N-type transistor may be an oxide transistor formed of an oxide semiconductor (e.g., a transistor having a channel formed from an oxide semiconductor, such as indium, gallium, zinc oxide, or IGZO). The P-type transistor may be a silicon transistor formed from a semiconductor, such as silicon (e.g., a transistor having a polysilicon channel formed by a low temperature process referred to as LTPS or low temperature polysilicon). Described here is an example in which the transistors Td, Tref, Tc, Trst, and Tsw constituting the subpixel circuit300are P-type transistors. The terms “source node” and “drain node” for the transistors may be interchangeably used depending on the input voltage. FIG.9is an example signal waveform view illustrating operations of a subpixel circuit according to example embodiments of the present disclosure. With reference toFIG.9, an operation for the subpixel circuit300driven by the n-th scan signal SCAN(n) in the display device100according to example embodiments of the present disclosure is described below. If the reset transistor Trst is turned on by the (n−1)-th scan signal SCAN(n−1) prior to the n-th scan signal SCAN(n), the reset voltage Vrst may be applied to the gate node of the driving transistor Td to turn off the driving transistor Td. The power voltage Vp may increase to the level of the reset voltage Vrst. Thereafter, if the n-th scan signal SCAN(n) is applied to turn on the switching transistor Tsw, the data voltage Vdata may be applied to the second capacitor C2. In this case, the power voltage Vp may decrease with a constant slope. 
If the power voltage Vp reaches the threshold voltage level of the driving transistor Td, the driving transistor Td may be turned on, and the reference current Iref flowing through the reference circuit310may be transferred to the light emitting circuit320through the driving transistor Td. The control voltage Vc corresponding to the output voltage of the reference circuit310may be decreased by the reference current Iref and the driving current Id flowing from the reference circuit310through the light emitting circuit320. If the control voltage Vc decreases and reaches the sum Vdata+Vth(Tc) of the data voltage Vdata and the threshold voltage Vth(Tc) of the control transistor Tc, the control transistor Tc may be turned on. If the control transistor Tc is turned on, charges stored in the first capacitor C1may move to the second capacitor C2, so that the driving current Id flowing through the driving transistor Td may decrease. Accordingly, the control voltage Vc may increase, and the control transistor Tc may be turned off. As the control transistor Tc repeats being turned on and off for short periods of time, the control voltage Vc may maintain the level of the sum Vdata+Vth(Tc) of the data voltage Vdata and the threshold voltage Vth(Tc) of the control transistor Tc. In this state, the reference current Iref flowing through the reference transistor Tref may be expressed as follows in the saturation area: Iref=K*[Vc−Vth(Tref)]^2=K*[Vdata+Vth(Tc)−Vth(Tref)]^2 Here, K=Cox*(W/L)*μ, where W and L respectively denote the channel width and length of the reference transistor Tref, Cox denotes the capacitance of the gate insulation film, and μ denotes the mobility of the reference transistor Tref. In this case, if the deposition conditions for the control transistor Tc and the reference transistor Tref positioned adjacent to each other are maintained the same, the threshold voltage Vth(Tc) of the control transistor Tc and the threshold voltage Vth(Tref) of the reference transistor Tref may have the same value. In other words, the control transistor Tc and the reference transistor Tref may be formed to have the same threshold voltage Vth by maintaining the thickness and composition ratio of the gate node, the source node, the drain node, and the insulation film positioned between them under the same conditions in the process of depositing the control transistor Tc and the reference transistor Tref. If the threshold voltage Vth(Tc) of the control transistor Tc and the threshold voltage Vth(Tref) of the reference transistor Tref have the same value, the reference current Iref flowing through the reference transistor Tref may be expressed as: Iref=K*Vdata^2 In other words, as the driving current Id flowing through the light emitting element ED and the reference current Iref flowing through the reference transistor Tref are each proportional to the data voltage Vdata, the driving current Id for driving the light emitting element ED may be adjusted by the data voltage Vdata regardless of the characteristics of the light emitting element ED or the characteristic values of the driving transistor Td. On the other hand, if the driving transistor Td is an oxide transistor, the threshold voltage Vth may be shifted by positive bias temperature stress (PBTS).
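Since the matched thresholds cancel, the reference current depends only on the data voltage. A numerical check of this cancellation under the same square-law model (the value of K is arbitrary and for illustration only):

K = 1e-4  # K = Cox*(W/L)*mu, illustrative value only

def i_ref(vdata: float, vth_tc: float, vth_tref: float) -> float:
    # Iref = K*[Vdata + Vth(Tc) - Vth(Tref)]^2 in the saturation area.
    return K * (vdata + vth_tc - vth_tref) ** 2

# Matched transistors drift together, so Iref stays at K*Vdata^2:
for vth in (-1.0, -1.5, -2.0):
    print(i_ref(18.0, vth, vth))  # ~0.0324 (= 1e-4 * 18**2) each time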
But in this case, it is possible to minimize changes in threshold voltage Vth by increasing the magnitude of the high-potential voltage EVDD to increase the driving current Id flowing through the light emitting element ED and decreasing the gate-source node voltage of the driving transistor Td. For example, the high-potential voltage EVDD may be set to 28V or higher to reduce the shift of the threshold voltage Vth of the driving transistor Td due to positive bias temperature stress (PBTS). As a result, the example subpixel circuit300of the present disclosure may keep the driving current Id flowing through the light emitting element ED proportional to the level of the data voltage Vdata by allowing the control voltage Vc corresponding to the output voltage of the reference circuit310to remain at the level corresponding to the sum Vdata+Vth(Tc) of the threshold voltage Vth(Tc) of the control transistor Tc and the data voltage Vdata. Accordingly, in the example subpixel circuit300of the present disclosure, a current proportional to the data voltage Vdata may flow through the light emitting element ED regardless of deterioration of the light emitting element ED or the characteristic value of the driving transistor Td. Thus, there may be provided a display panel110and a display device100having uniform luminance. FIG.10is a signal waveform view illustrating a variation in a current flowing through a reference circuit depending on a data voltage in a subpixel circuit according to example embodiments of the present disclosure. As illustrated inFIG.10, the subpixel circuit300according to example embodiments of the present disclosure may be controlled so that the driving current Id flowing through the light emitting circuit320and the reference current Iref flowing through the reference circuit310are proportional to the level of the data voltage Vdata by allowing the control voltage Vc at the output node of the reference circuit310to remain at the level corresponding to the sum Vdata+Vth(Tc) of the threshold voltage Vth(Tc) of the control transistor Tc and the data voltage Vdata. For example, as the control voltage Vc remains at the level corresponding to the sum Vdata+Vth(Tc) of the threshold voltage Vth(Tc) of the control transistor Tc and the data voltage Vdata, the driving current Id flowing through the light emitting circuit320and the reference current Iref flowing through the reference circuit310may maintain the same value. In this case, it may be identified that, when the data voltage Vdata is sequentially changed to the levels of 22V, 21V, 20V, 19V, and 18V, the driving current Id flowing through the light emitting circuit320and the reference current Iref flowing through the reference circuit310each have a value substantially proportional to the data voltage Vdata. FIGS.11A,11B, and11Care signal waveform views illustrating variations in a current and voltage of a subpixel circuit when a driving transistor has a different threshold voltage in a subpixel circuit according to example embodiments of the present disclosure. As illustrated inFIGS.11A,11B, and11C, in the subpixel circuit300according to example embodiments of the present disclosure, a characteristic value, such as the threshold voltage of the driving transistor Td, may be changed as the driving time increases.
In consideration of this context, in a case where the threshold voltage of the driving transistor Td has a reference voltage and is increased by 1V from the reference voltage, variations in the driving voltage Vd corresponding to the output voltage of the amplification circuit330, the control voltage Vc corresponding to the output voltage of the reference circuit310, and the driving current Id flowing through the light emitting circuit320were measured. It could be identified that, when the threshold voltage of the driving transistor Td increased, the level of the driving voltage Vd corresponding to the output voltage of the amplification circuit330was varied (case ofFIG.11A). However, although the threshold voltage of the driving transistor Td increases, the control voltage Vc corresponding to the output voltage of the reference circuit310constantly remains at the level corresponding to the sum Vdata+Vth(Tc) of the threshold voltage Vth(Tc) of the control transistor Tc and the data voltage Vdata (case ofFIG.11B). As a result, the driving current Id flowing through the light emitting circuit320and the reference current Iref flowing through the reference circuit310may maintain a constant value although the threshold voltage of the driving transistor Td is changed (case ofFIG.11C). As such, since the driving current Id flowing through the light emitting element ED has a value proportional to the data voltage Vdata regardless of the deterioration of the light emitting element ED or the characteristic value of the driving transistor Td in the subpixel circuit300of the disclosure, the display device100may maintain uniform luminance although the driving time increases. In the example subpixel circuit300of the present disclosure, the amplification circuit330may alternatively reset the driving transistor Td by controlling the power voltage Vp, instead of implementing the reset transistor Trst. FIG.12is a diagram illustrating a detailed configuration of another subpixel circuit according to example embodiments of the present disclosure. As shown inFIG.12, a subpixel circuit300according to example embodiments of the present disclosure may include a reference circuit310, a light emitting circuit320, an amplification circuit330, and an input circuit340. Described below is an example in which the n-th scan signal SCAN(n) is applied among the plurality of subpixels constituting the display panel110. The reference circuit310may include a reference transistor Tref having a drain node and a gate node, at which the control voltage Vc may be provided, and a source node to which a high-potential voltage EVDD may be applied. The light emitting circuit320may include a light emitting element ED having a cathode electrode, to which a low-potential voltage EVSS may be applied, and a driving transistor Td having a drain node connected to the anode electrode of the light emitting element ED, a source node to which the control voltage Vc may be applied, and a gate node to which the driving voltage Vd of the amplification circuit330may be applied. The reference transistor Tref may be turned on by the high-potential voltage EVDD and, when the driving transistor Td is turned on by the driving voltage Vd of the amplification circuit330, the driving current Id may flow through the light emitting circuit320. 
In this case, when the control voltage Vc and the data voltage Vdata have the same level of potential, the entire reference current Iref flowing through the reference circuit310may flow through the light emitting circuit320, and the driving current Id may have the same value as the reference current Iref. The amplification circuit330may include a control transistor Tc and a first capacitor C1. The control transistor Tc may have a gate node, to which the control voltage Vc may be applied, and a drain node connected to the gate node of the driving transistor Td. The first capacitor C1may be connected to the drain node of the control transistor Tc to transfer a power voltage Vp for driving the driving transistor Td. The power voltage Vp may have a level capable of driving the driving transistor Td. The input circuit340may include a switching transistor Tsw and a second capacitor C2. The switching transistor Tsw may have a gate node to which the n-th scan signal SCAN(n) may be applied, a source node to which the data voltage Vdata may be applied, and a drain node connected to the source node of the control transistor Tc. The second capacitor C2may be connected between the drain node of the switching transistor Tsw and the low-potential voltage EVSS. Accordingly, the input circuit340may supply the data voltage Vdata to the amplification circuit330by the n-th scan signal SCAN(n). The second capacitor C2may serve to stably transfer the data voltage Vdata. The transistors Td, Tref, Tc, and Tsw constituting the example subpixel circuit300may be P-type transistors or N-type transistors. The P-type transistor is relatively more reliable than the N-type transistor. In the case of the P-type transistor, since the driving transistor Td may be fixed to the high-potential voltage EVDD during the period when the light emitting element ED emits light, the current flowing through the light emitting element ED may be supplied stably without significant fluctuation. When operating in the saturation area, the P-type transistor may flow a constant current regardless of a change in the threshold voltage, providing relatively high reliability. On the other hand, since the N-type transistor uses electrons, not holes, as carriers, it has higher mobility than the P-type transistor, so that the switching speed may be increased. The N-type transistor may be an oxide transistor formed of an oxide semiconductor (e.g., a transistor having a channel formed from an oxide semiconductor, such as indium, gallium, zinc oxide, or IGZO). The P-type transistor may be a silicon transistor formed from a semiconductor, such as silicon (e.g., a transistor having a polysilicon channel formed by a low temperature process referred to as LTPS or low temperature polysilicon). Described here is an example in which the transistors Td, Tref, Tc, and Tsw constituting the subpixel circuit300are P-type transistors. The terms “source node” and “drain node” for the transistors may be interchangeably used depending on the input voltage. FIG.13is an example signal waveform view illustrating operations of another subpixel circuit according to example embodiments of the present disclosure. Described below are operations of the subpixel circuit300according to example embodiments of the present disclosure, with reference toFIG.13. The power voltage Vp may be applied in the form of a pulse, from the power management circuit150, according to one or more timing signals. 
If the power voltage Vp is applied at a high level before the n-th scan signal SCAN(n) is applied, the driving transistor Td may be turned off by the power voltage Vp. Thereafter, if the n-th scan signal SCAN(n) is applied to turn on the switching transistor Tsw, the data voltage Vdata may be applied to the second capacitor C2. After the n-th scan signal SCAN(n) is applied, the power voltage Vp may be switched to a low level. If the power voltage Vp reaches the threshold voltage level of the driving transistor Td, the driving transistor Td may be turned on, and the reference current Iref flowing through the reference circuit310may be transferred to the light emitting circuit320through the driving transistor Td. The control voltage Vc corresponding to the output voltage of the reference circuit310may be decreased by the reference current Iref and the driving current Id flowing from the reference circuit310through the light emitting circuit320. If the control voltage Vc reaches the sum Vdata+Vth(Tc) of the data voltage Vdata and the threshold voltage Vth(Tc) of the control transistor Tc, the control transistor Tc may be turned on. If the control transistor Tc is turned on, charges stored in the first capacitor C1may move to the second capacitor C2, so that the driving current Id flowing through the driving transistor Td may decrease. Accordingly, the control voltage Vc may increase, and the control transistor Tc may be turned off. As the control transistor Tc repeats being turned on and off for short periods of time, the control voltage Vc may maintain the level of the sum Vdata+Vth(Tc) of the data voltage Vdata and the threshold voltage Vth(Tc) of the control transistor Tc. In this case, if the deposition conditions for the control transistor Tc and the reference transistor Tref positioned adjacent to each other are the same, the threshold voltage Vth(Tc) of the control transistor Tc and the threshold voltage Vth(Tref) of the reference transistor Tref may have the same value. In other words, the control transistor Tc and the reference transistor Tref may be formed to have the same threshold voltage Vth by maintaining the thickness and composition ratio of the gate node, the source node, the drain node, and the insulation film positioned between them under the same conditions in the process of depositing the control transistor Tc and the reference transistor Tref. If the threshold voltage Vth(Tc) of the control transistor Tc and the threshold voltage Vth(Tref) of the reference transistor Tref have the same value, the reference current Iref flowing through the reference transistor Tref may be expressed as: Iref=K*Vdata^2 In other words, as the driving current Id flowing through the light emitting element ED and the reference current Iref flowing through the reference transistor Tref are each proportional to the data voltage Vdata, the driving current Id for driving the light emitting element ED may be adjusted by the data voltage Vdata regardless of the characteristics of the light emitting element ED or the characteristic values of the driving transistor Td.
As a result, the example subpixel circuit300of the present disclosure may control to allow the driving current Id flowing through the light emitting element ED to be proportional to the level of the data voltage Vdata by allowing the control voltage Vc corresponding to the output voltage of the reference circuit310to remain at the level corresponding to the sum Vdata+Vth(Tc) of the threshold voltage Vth(Tc) of the control transistor Tc and the data voltage Vdata. Accordingly, in the example subpixel circuit300of the present disclosure, a current proportional to the data voltage Vdata may flow through the light emitting element ED regardless of deterioration of the light emitting element ED or the characteristic value of the driving transistor Td. Thus, there may be provided a display panel110and a display device100having uniform luminance. The foregoing example embodiments are briefly described below. A subpixel circuit for operating at least one of a plurality of subpixels disposed on a display panel may include: a reference circuit configured to receive a high-potential voltage and to output a control voltage for controlling a driving current flowing through a light emitting element; a light emitting circuit including the light emitting element, the light emitting circuit being configured to receive the control voltage and a low-potential voltage and to control the light emitting element based on a driving voltage; an amplification circuit configured to compare the control voltage and a data voltage to generate the driving voltage for controlling the light emitting circuit; and an input circuit configured to receive the data voltage and a first scan signal and to control a timing of applying the data voltage to the amplification circuit based on the first scan signal. In some embodiments, the reference circuit may include a reference transistor having a drain node and a gate node to provide the control voltage and a source node to receive the high-potential voltage. In some embodiments, the light emitting circuit may include: the light emitting element having a cathode electrode to receive the low-potential voltage; and a driving transistor having a drain node connected to an anode electrode of the light emitting element and a gate node to receive the driving voltage. In some embodiments, the amplification circuit may include an operational amplifier having an inverting input terminal to receive the control voltage, a non-inverting input terminal to receive an output voltage of the input circuit, and an output terminal to output the driving voltage. In some embodiments, the amplification circuit may include: a control transistor having a gate node to receive the control voltage and a drain node to provide the driving voltage to the light emitting circuit; and a first capacitor connected to the drain node of the control transistor to transfer an input power voltage. In some embodiments, the reference circuit may include a reference transistor having a drain node and a gate node configured to provide the control voltage and a source node configured to receive the high-potential voltage; and the control transistor and the reference transistor may have a same threshold voltage. In some embodiments, the control transistor and the reference transistor may have at least one of a same thickness, a same composition ratio, and a same structure of a gate node, a source node, a drain node, and an insulation film positioned between the gate node and the source and drain nodes. 
In some embodiments, the amplification circuit may further include a reset transistor having a source node to receive a reset voltage, a gate node to receive a second scan signal prior to the input circuit receiving the first scan signal, and a drain node shared with the control transistor. In some embodiments, the driving transistor may be configured to be reset by the second scan signal and be turned on by the first scan signal. In some embodiments, with the control voltage at a level corresponding to a sum of the data voltage and a threshold voltage of the control transistor, the driving current flowing through the light emitting circuit and a reference current flowing through the reference circuit may have a same value. In some embodiments, the input circuit may include: a switching transistor having a gate node to receive the first scan signal, a source node to receive the data voltage, and a drain node connected to the amplification circuit; and a second capacitor connected between the drain node of the switching transistor and the low-potential voltage. In some embodiments, the light emitting circuit may include a driving transistor having a drain node connected to an anode electrode of the light emitting element and a gate node to receive the driving voltage; and the driving transistor may be configured to be reset by an input power voltage prior to the input circuit receiving the first scan signal and be turned on by the first scan signal. In some embodiments, the light emitting circuit, the reference circuit, the amplification circuit, and the input circuit may include P-type transistors. In some embodiments, the driving current may be proportional to the data voltage. In some embodiments, a display panel may include any of the above embodiments of the subpixel circuit. A display device may include: a display panel having a plurality of subpixels; a gate driving circuit configured to supply a plurality of scan signals to the display panel respectively through a plurality of gate lines; a data driving circuit configured to supply a plurality of data voltages to the display panel respectively through a plurality of data lines; and a timing controller configured to drive the gate driving circuit and the data driving circuit. Here, at least one of the subpixels may include: a reference circuit configured to receive a high-potential voltage and to output a control voltage for controlling a driving current flowing through a light emitting element; a light emitting circuit including the light emitting element, the light emitting circuit being configured to receive the control voltage and a low-potential voltage and to control the light emitting element based on a driving voltage; an amplification circuit configured to compare the control voltage and a data voltage to generate the driving voltage for controlling the light emitting circuit; and an input circuit configured to receive the data voltage and a first scan signal and to control a timing of applying the data voltage to the amplification circuit based on the first scan signal. In some embodiments, the amplification circuit may include: a control transistor having a gate node to receive the control voltage and a drain node to provide the driving voltage to the light emitting circuit; and a first capacitor connected to the drain node of the control transistor to transfer an input power voltage. 
In some embodiments, the reference circuit may include a reference transistor having a drain node and a gate node configured to provide the control voltage and a source node configured to receive the high-potential voltage; and the control transistor and the reference transistor may have a same threshold voltage. In some embodiments, the amplification circuit may further include a reset transistor having a source node to receive a reset voltage, a gate node to receive a second scan signal prior to the input circuit receiving the first scan signal, and a drain node shared with the control transistor; and the driving transistor may be configured to be reset by the second scan signal and be turned on by the first scan signal. In some embodiments, the driving current is proportional to the data voltage. The above description has been presented to enable any person skilled in the art to make and use the various possible embodiments of the present disclosure. Although the example embodiments of the present disclosure have been described in more detail with reference to the accompanying drawings, the present disclosure is not limited thereto and may be embodied in many different forms without departing from the technical concept of the present disclosure. Therefore, the example embodiments disclosed in the present disclosure are provided for illustrative purposes only and are not intended to limit the technical concept of the present disclosure. Therefore, it should be understood that the above-described example embodiments are illustrative in all aspects and do not limit the present disclosure. It will be apparent to those skilled in the art that various modifications and variations can be made in the present disclosure without departing from the spirit or scope of the disclosures. Thus, it is intended that the present disclosure cover such modifications and variations of this disclosure, provided that they come within the scope of the appended claims and their equivalents. | 73,858 |
11862090 | DETAILED DESCRIPTION Since the shapes, sizes, proportions, angles, numbers, etc., disclosed in the drawings for describing the embodiments of the present invention are illustrative, the present invention is not limited to the shown details. The same reference numerals throughout the disclosure correspond to the same elements. Also, throughout the description of the present invention, the detailed description of known technologies incorporated herein will be omitted when it may make the subject matter of the present invention unclear. When terms such as “includes”, “has”, “composed”, etc., are used in the present disclosure, other parts can be added unless the term “only” is used. A component represented in a singular form includes the expression of the plural form thereof unless otherwise explicitly mentioned. In construing components, error ranges are construed as being included unless otherwise explicitly mentioned. In describing positional relationships, when the positional relationship of two parts is described with, for example, “on”, “over”, “under”, “next to”, etc., one or more other parts may be positioned between the two parts as long as the term “directly” or “immediately” is not used. While terms such as the first and the second, etc., can be used to describe various components, the components are not limited by these terms. The terms are used only for distinguishing one component from other components. Therefore, the first component to be described below may be the second component within the spirit of the present invention. Hereinafter, various embodiments of the present invention will be described in detail with reference to the accompanying drawings. The component names used in the following description are selected in consideration of making it easier to write the specification and may be different from the component names of an actual product. FIG. 1 shows a schematic configuration of a display device according to embodiments of the present disclosure. Referring to FIG. 1, the display device 100 according to embodiments of the present disclosure may include a display panel 110 in which a plurality of sub-pixels SP are arranged, a gate driving circuit 120 for driving the display panel 110, a data driving circuit 130, and a controller 140. In the display panel 110, a plurality of gate lines GL and a plurality of data lines DL are disposed, and a sub-pixel SP is disposed in a region defined by the intersection of a gate line GL and a data line DL. The gate driving circuit 120 is controlled by the controller 140. The gate driving circuit 120 sequentially outputs scan signals to the plurality of gate lines GL disposed on the display panel 110 and controls the drive timing of the plurality of sub-pixels SP. In some cases, the gate driving circuit 120 may output a scan signal for controlling the drive timing of the sub-pixel SP and a light emission control signal for controlling a light emission timing of the sub-pixel SP. In this case, a circuit for outputting the scan signal and a circuit for outputting the light emission control signal may be implemented as separate circuits or as a single circuit. The gate driving circuit 120 may include one or more gate driver integrated circuits (GDIC) and may be located only on one side or on both sides of the display panel 110 depending on a driving method thereof.
Each gate driver integrated circuit (GDIC) may be connected to a bonding pad of the display panel 110 by a tape automated bonding (TAB) method, a chip on glass (COG) method, or a chip on polyimide (COP) method, or may be implemented in a gate in panel (GIP) type and disposed directly on the display panel 110. In some cases, each gate driver integrated circuit may be integrated and disposed on the display panel 110. In addition, each gate driver integrated circuit (GDIC) may be implemented by a chip on film (COF) method in which the GDIC is mounted on a film connected to the display panel 110. The data driving circuit 130 receives image data from the controller 140 and converts the image data into a data voltage in analog form. The data voltage is output to each data line DL in accordance with the timing at which the scan signal is applied through the gate line GL, so that each sub-pixel SP represents brightness according to the image data. The data driving circuit 130 may include one or more source driver integrated circuits (SDIC). Each source driver integrated circuit (SDIC) may include a shift register, a latch circuit, a digital to analog converter (DAC), an output buffer, and the like. Each source driver integrated circuit (SDIC) may be connected to a bonding pad of the display panel 110 by the tape automated bonding (TAB) method, the chip on glass (COG) method, or the chip on polyimide (COP) method, or may be directly disposed on the display panel 110, or, in some cases, may be integrated and disposed on the display panel 110. Also, each source driver integrated circuit (SDIC) may be implemented in a chip on film (COF) method. In this case, each source driver integrated circuit (SDIC) may be mounted on a film connected to the display panel 110 and may be electrically connected to the display panel 110 through wires on the film. The controller 140 supplies various control signals to the gate driving circuit 120 and the data driving circuit 130 and controls the operations of the gate driving circuit 120 and the data driving circuit 130. The controller 140 may be mounted on a printed circuit board, a flexible printed circuit, etc., and may be electrically connected to the gate driving circuit 120 and the data driving circuit 130 through the printed circuit board, the flexible printed circuit, etc. The controller 140 causes the gate driving circuit 120 to output a scan signal according to the timing generated in each frame, converts image data received from the outside in accordance with the data signal format used by the data driving circuit 130, and outputs the converted image data to the data driving circuit 130. The controller 140 receives, together with the image data, various timing signals, including a vertical synchronization signal, a horizontal synchronization signal, an input data enable signal, and a clock signal, from the outside (e.g., a host system). The controller 140 may generate various control signals by using the various timing signals received from the outside and may output them to the gate driving circuit 120 and the data driving circuit 130. For example, in order to control the gate driving circuit 120, the controller 140 outputs various gate control signals (GCS) including a gate start pulse (GSP), a gate shift clock (GSC), a gate output enable signal (GOE), etc. Here, the gate start pulse (GSP) controls the operation start timing of the one or more gate driver integrated circuits (GDIC) which constitute the gate driving circuit 120.
The gate shift clock (GSC) is a clock signal which is commonly input to the one or more gate driver integrated circuits (GDIC). The gate shift clock (GSC) controls the shift timing of the scan signal. The gate output enable signal (GOE) designates timing information of the one or more gate driver integrated circuits (GDIC). Also, in order to control the data driving circuit 130, the controller 140 outputs various data control signals (DCS) including a source start pulse (SSP), a source sampling clock (SSC), a source output enable signal (SOE), etc. Here, the source start pulse (SSP) controls the data sampling start timing of the one or more source driver integrated circuits (SDIC) which constitute the data driving circuit 130. The source sampling clock (SSC) is a clock signal which controls the sampling timing of data in each of the source driver integrated circuits (SDIC). The source output enable signal (SOE) controls the output timing of the data driving circuit 130. The display device may further include a power management integrated circuit (not shown) which supplies various voltages or currents to the display panel 110, the gate driving circuit 120, the data driving circuit 130, etc., or controls the various voltages or currents to be supplied. Each subpixel SP may be defined by the intersection of a gate line GL and a data line DL, and a liquid crystal or a light emitting device EL may be disposed therein depending on the type of the display device. The light emitting device EL may be composed of an organic light emitting diode. The organic light emitting diode includes an anode electrode, a cathode electrode, and an organic compound layer (HIL, HTL, EML, ETL, and EIL) formed between the anode electrode and the cathode electrode. The organic compound layer includes a hole injection layer (HIL), a hole transport layer (HTL), an emitting material layer (EML), an electron transport layer (ETL), and an electron injection layer (EIL). When a driving voltage is applied to the anode electrode and the cathode electrode, holes that have passed through the hole transport layer (HTL) and electrons that have passed through the electron transport layer (ETL) move to the emitting material layer (EML) and form excitons; as a result, the emitting material layer (EML) produces visible light. Each of FIGS. 2A and 2B is a view showing an example of a sub-pixel structure according to an embodiment of the present disclosure. Referring to FIG. 2A, one subpixel includes a switching transistor SW, a driving transistor DT, a compensation circuit CC, and an organic light emitting diode EL. The organic light emitting diode EL operates to emit light in accordance with a driving current generated by the driving transistor DT. The switching transistor SW and the driving transistor DT are three-terminal elements and include a source electrode, a drain electrode, and a gate electrode. Hereinafter, the source electrode will be described as a first electrode, and the drain electrode will be described as a second electrode. The switching transistor SW performs a switching operation such that a data signal supplied through the data line DL in response to a gate signal supplied through the gate line GL is stored as a data voltage in a capacitor Cst. The driving transistor DT operates such that the driving current flows between a high potential power supply voltage VDD and a low potential power supply voltage VSS in accordance with the data voltage stored in the capacitor Cst.
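As a rough behavioral sketch of this switch-and-store operation, the following model mimics how SW samples the data line into Cst and how DT then conducts a current set by the held voltage. The class, the square-law current model, and every numeric value are illustrative assumptions, not taken from the patent:

```python
# Minimal behavioral sketch (illustrative only) of the 2T structure of
# FIG. 2A: SW copies the data line onto Cst while the gate signal is
# asserted; DT then conducts a current set by the stored voltage.

class SubpixelModel:
    def __init__(self, beta=1e-6, vth=0.7, vdd=5.0):  # assumed parameters
        self.beta, self.vth, self.vdd = beta, vth, vdd
        self.cst_voltage = 0.0           # voltage currently held on Cst

    def scan(self, gate_on, vdata):
        if gate_on:                      # SW closed: Cst tracks the data line
            self.cst_voltage = vdata

    def drive_current(self):
        """DT current from VDD toward VSS for the stored data voltage."""
        overdrive = self.vdd - self.cst_voltage - self.vth
        return 0.5 * self.beta * overdrive ** 2 if overdrive > 0 else 0.0

px = SubpixelModel()
px.scan(gate_on=True, vdata=2.5)         # scan: write the data voltage
print(f"{px.drive_current():.3e} A")     # emission: current held afterwards
```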
The compensation circuit CC compensates for the threshold voltage Vth of the driving transistor DT, etc. Meanwhile, according to various embodiments, the capacitor Cst connected to the switching transistor SW or the driving transistor DT may be located within the compensation circuit CC. The compensation circuit CC is composed of one or more thin film transistors and a capacitor. The compensation circuit CC may be configured in a wide variety of ways according to the compensation method. Also, as shown in FIG. 2B, when the compensation circuit CC is included, the subpixel may further include a signal line, a power line, etc., for driving a compensation thin film transistor and for supplying a specific signal or electric power. FIG. 3 shows an example of the drive timing of the sub-pixel shown in FIGS. 2A and 2B. One frame period for displaying an image may be divided into a refresh period and a holding period in accordance with a synchronization signal SYNC. The display device according to the embodiment may operate in a low-speed driving mode and a high-speed driving mode. In the low-speed driving mode, the display device controls the holding period to be longer per unit time, making the one-frame period longer. When the display device operates at a low speed, power consumption can be reduced. In the high-speed driving mode, the display device controls the holding period to be shorter per unit time, making the one-frame period shorter in comparison to the low-speed driving mode. High-speed driving can smoothly represent high-speed images with large image changes. The refresh period may be subdivided into an initialization period, a sampling period, a programming period, and a light emission period. During the initialization period, the data voltage written to the light emitting device EL is initialized by applying an initialization voltage to the subpixel SP. During the sampling period, the threshold voltage Vth of the driving transistor is stored in the capacitor connected to the driving transistor. During the programming period, the data voltage is applied to the subpixel SP, and thus the data voltage is stored in the capacitor connected to the driving transistor. The sampling period and the programming period are conceptually distinguished. The sampling period and the programming period are separated from each other according to the subpixel structure, so that the operations in the two periods may be performed sequentially or at the same time. In the subpixel structure described in the embodiment of the present disclosure, the operations of the sampling period and the operations of the programming period may be performed simultaneously. During the holding period, the data voltage is not supplied through the data lines connected to the light emitting devices EL; instead, the light emitting devices emit light by using the data voltage stored in a refresh frame as it is. Thus, the previously programmed data voltage is maintained during the holding period (a numerical sketch of this frame-period arithmetic appears after this passage). <Comparison Example> FIGS. 4 to 7 show comparison examples to be compared with the present disclosure. FIG. 4 is a circuit diagram of the sub-pixel according to a comparison example. FIG. 5 is a drive timing diagram in the refresh period according to the comparison example. FIG. 6 is a drive timing diagram in the holding period according to the comparison example. FIG. 7 is a view for describing a bezel area of the display panel according to the comparison example.
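As a quick numerical companion to the refresh/holding split described above, the sketch below shows how stretching the holding period lowers the effective frame rate. The 8 ms refresh duration and the holding times are made-up placeholder values, not figures from the patent:

```python
# Rough sketch of the variable-refresh idea: one frame = refresh period +
# holding period, and the low-speed mode stretches the holding period to
# lower the frame rate and save power. All durations are illustrative.

REFRESH_MS = 8.0  # assumed duration of the refresh period, ms

def frame_rate_hz(holding_ms):
    """Frames per second for one refresh period plus the given holding time."""
    return 1000.0 / (REFRESH_MS + holding_ms)

print(f"high-speed mode (short holding): {frame_rate_hz(0.33):6.1f} Hz")
print(f"low-speed  mode (long  holding): {frame_rate_hz(92.0):6.1f} Hz")
```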
A pixel circuit according to the comparison example includes eight transistors and two capacitors. The driving transistor DT supplies the driving current to the light emitting device EL. The driving transistor DT includes a first electrode connected to a first node N1, a gate electrode connected to a second node N2, and a second electrode connected to a third node N3. A first transistor T1 includes a first electrode connected to a fifth node N5, a second electrode connected to the first node N1, and a gate electrode connected to a third light emission control signal EM3. When the third light emission control signal EM3 is low, the first transistor T1 is turned on and electrically connects the first node N1 and the fifth node N5. A second transistor T2 includes a first electrode connected to the second node N2, a second electrode connected to a data line that supplies a data voltage VDATA, and a gate electrode connected to a second scan signal SC2. When the second scan signal SC2 is high, the second transistor T2 is turned on and supplies the data voltage VDATA to the second node N2. A third transistor T3 includes a first electrode connected to a power line that supplies the high potential power supply voltage VDD, a second electrode connected to the first node N1, and a gate electrode connected to a first light emission control signal EM1. When the first light emission control signal EM1 is low, the third transistor T3 is turned on and supplies the high potential power supply voltage VDD to the first node N1. A fourth transistor T4 includes a first electrode connected to the third node N3, a second electrode connected to a fourth node N4, and a gate electrode connected to a second light emission control signal EM2. When the second light emission control signal EM2 is low, the fourth transistor T4 is turned on and electrically connects the third node N3 and the fourth node N4. A fifth transistor T5 includes a first electrode connected to the second node N2, a second electrode connected to the initialization voltage VINI, and a gate electrode connected to a first scan signal SC1. When the first scan signal SC1 is high, the fifth transistor T5 is turned on and supplies the initialization voltage VINI to the second node N2. A sixth transistor T6 includes a first electrode connected to an anode reset voltage VAR, a second electrode connected to the fourth node N4, and a gate electrode connected to a fourth scan signal SC4. When the fourth scan signal SC4 is low, the sixth transistor T6 is turned on and supplies the anode reset voltage VAR to the fourth node N4. A seventh transistor T7 includes a first electrode connected to a bias voltage VOBS, a second electrode connected to the first node N1, and a gate electrode connected to a third scan signal SC3. When the third scan signal SC3 is low, the seventh transistor T7 is turned on and supplies the bias voltage VOBS to the first node N1. The light emitting device EL includes an anode electrode connected to the fourth node N4 and a cathode electrode connected to the low potential power supply voltage VSS. The light emitting device EL receives a driving current from the driving transistor DT and emits light. A first capacitor C1 is connected between the high potential power supply voltage VDD and the fifth node N5. A second capacitor C2 is connected between the fifth node N5 and the second node N2. The second capacitor C2 functions as a storage capacitor for maintaining a voltage signal in the pixel.
The second and fifth transistors T2 and T5 may be composed of an oxide semiconductor transistor which uses an oxide semiconductor material as an active layer. The driving in the refresh period of the comparison example will be described with reference to FIG. 5. Table 1 shows the switching operations of the first to seventh transistors T1 to T7 in first to fifth periods A-R1 to A-R5. In Table 1 below, ON represents that the corresponding transistor is turned on, whereas OFF represents that the corresponding transistor is turned off.

TABLE 1
        T1    T2    T3    T4    T5    T6    T7
A-R1    OFF   OFF   OFF   OFF   OFF   OFF   ON
A-R2    ON    OFF   ON    OFF   ON    OFF   OFF
A-R3    ON    OFF   OFF   ON    ON    ON    OFF
A-R4    ON    ON    OFF   OFF   OFF   OFF   OFF
A-R5    OFF   OFF   OFF   OFF   OFF   OFF   ON

In the first and fifth periods A-R1 and A-R5, the seventh transistor T7 is turned on, the bias voltage VOBS is applied to the first node N1, and all remaining transistors are turned off. The first and fifth periods A-R1 and A-R5 are periods in which a hysteresis of the driving transistor DT is reduced by directly applying the bias voltage VOBS to the driving transistor DT. The second period A-R2 is an initialization period in which the first, third, and fifth transistors T1, T3, and T5 are turned on and a voltage corresponding to the difference between the initialization voltage VINI and the high potential power supply voltage VDD is stored in the second capacitor C2. All other remaining transistors are turned off during the second period A-R2. The third period A-R3 is a sampling period in which the first, fourth, fifth, and sixth transistors T1, T4, T5, and T6 are turned on and the threshold voltage Vth of the driving transistor DT is sampled. Also, in the third period A-R3, the sixth transistor T6 is turned on and the anode reset voltage VAR is applied to the fourth node N4, so that the anode electrode of the light emitting device EL connected to the fourth node N4 is reset to the anode reset voltage VAR. The fourth period A-R4 is a programming period in which the first and second transistors T1 and T2 are turned on and the data voltage VDATA is supplied to the second node N2. All other remaining transistors are turned off during the fourth period A-R4. The driving in the holding period of the comparison example will be described with reference to FIG. 6. Table 2 shows the switching operations of the first to seventh transistors T1 to T7 in first to third periods A-H1 to A-H3.

TABLE 2
        T1    T2    T3    T4    T5    T6    T7
A-H1    OFF   OFF   OFF   OFF   OFF   OFF   ON
A-H2    ON    OFF   ON    ON    OFF   ON    OFF
A-H3    OFF   OFF   OFF   OFF   OFF   OFF   ON

In the first and third periods A-H1 and A-H3, the seventh transistor T7 is turned on, the bias voltage VOBS is applied to the first node N1, and all other remaining transistors are turned off. The first and third periods A-H1 and A-H3 are periods in which the hysteresis of the driving transistor DT is reduced by directly applying the bias voltage VOBS to the driving transistor DT. In the second period A-H2, the first, third, fourth, and sixth transistors T1, T3, T4, and T6 are turned on and all remaining transistors are turned off. The second period A-H2 is a period for resetting the anode electrode voltage of the light emitting device EL. Since the sixth transistor T6 is turned on in the second period, the anode reset voltage VAR is applied to the fourth node N4, and the anode electrode of the light emitting device EL connected to the fourth node N4 is reset to the anode reset voltage VAR. These two schedules are restated as data in the sketch following this passage.
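Restating Tables 1 and 2 as data makes the schedules easy to query. The helper below is only an illustrative convenience, not part of the patent; the ON/OFF values are copied from the tables above:

```python
# The comparison example's drive schedule (Tables 1 and 2) as data: each
# period maps to the states of T1..T7, copied from the tables above.

REFRESH = {                      # Table 1: refresh period
    "A-R1": "OFF OFF OFF OFF OFF OFF ON",
    "A-R2": "ON  OFF ON  OFF ON  OFF OFF",
    "A-R3": "ON  OFF OFF ON  ON  ON  OFF",
    "A-R4": "ON  ON  OFF OFF OFF OFF OFF",
    "A-R5": "OFF OFF OFF OFF OFF OFF ON",
}
HOLDING = {                      # Table 2: holding period
    "A-H1": "OFF OFF OFF OFF OFF OFF ON",
    "A-H2": "ON  OFF ON  ON  OFF ON  OFF",
    "A-H3": "OFF OFF OFF OFF OFF OFF ON",
}

def on_transistors(schedule, period):
    """List the transistors that are turned on in the given period."""
    states = schedule[period].split()
    return [f"T{i + 1}" for i, s in enumerate(states) if s == "ON"]

print(on_transistors(REFRESH, "A-R3"))  # sampling:    ['T1', 'T4', 'T5', 'T6']
print(on_transistors(HOLDING, "A-H2"))  # anode reset: ['T1', 'T3', 'T4', 'T6']
```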
As such, the pixel circuit according to the comparison example may operate by a variable refresh rate (VRR) driving method in which the driving frequency is varied according to the displayed image. FIG. 7 is a view for describing the bezel area of the display panel according to the comparison example. The display panel 110 may be divided into a display area DA in which pixels are disposed and a bezel area in which the GIP driving circuit is disposed. The GIP driving circuit may be disposed on both the left and right edges of the display panel 110, and the pixel circuit may be disposed at the center of the display panel 110. That is, the bezel area may be located on both the left and right sides of the display panel 110, and the display area DA may be located at the center of the display panel 110. The pixel circuit according to the comparison example requires various control signals, such as the first to fourth scan signals SC1 to SC4 and the first to third light emission control signals EM1 to EM3, in order to control the first to seventh transistors T1 to T7. Therefore, the driving circuit for driving the pixel circuit according to the comparison example is complex. As shown in FIG. 7, the GIP driving circuit includes seven stages in order to supply the first to fourth scan signals SC1 to SC4 and the first to third light emission control signals EM1 to EM3 to the pixel circuit. A plurality of the stages constituting the GIP driving circuit may be disposed in a left bezel area BZ1L and a right bezel area BZ1R. The stage for supplying the second scan signal SC2 may be disposed in the left bezel area BZ1L and the right bezel area BZ1R of the display panel 110, respectively, so as to supply the second scan signal SC2 in a double-feeding manner. In the display device including the pixel circuit of the comparison example, in the case of the GIP (gate in panel) model in which the driving circuit is built into the display panel 110, the bezel area increases, and thus it is difficult to form a slim bezel. <Embodiment of the Present Disclosure> FIG. 8 is a circuit diagram of the sub-pixel according to an embodiment of the present disclosure. FIG. 9 is a drive timing diagram in the refresh period according to the embodiment of the present disclosure. FIG. 10 is a drive timing diagram in the holding period according to the embodiment of the present disclosure. FIG. 11 shows operation states of a first period and a fourth period of the refresh period and of a first period of the holding period of the pixel circuit according to the embodiment of the present disclosure. FIG. 12 shows an operation state of a second period of the refresh period of the pixel circuit according to the embodiment of the present disclosure. FIG. 13 shows an operation state of a third period of the refresh period of the pixel circuit according to the embodiment of the present disclosure. FIG. 14 is a view for describing the bezel area of the display panel according to the embodiment of the present disclosure. The display device according to the embodiment of the present disclosure includes a pixel circuit in which the switching TFT is formed of an oxide semiconductor TFT and the driving TFT is formed of an LTPS (low temperature polycrystalline silicon) TFT. However, in the display device of the present disclosure, the switching TFT is not limited to the oxide semiconductor TFT, and the driving TFT is not limited to the LTPS TFT. Also, they may be variously formed as multi-type TFTs.
Also, in the display device, the pixel circuit may include one type of TFT instead of multi-type TFTs. Since the oxide semiconductor material has a low off-current, it may be suitable for the switching TFT, which has a short turn-on time and maintains a long turn-off time. The oxide semiconductor TFT has a better voltage holding characteristic than that of the LTPS TFT. First, the circuit diagram of the sub-pixel according to the embodiment of the present disclosure will be described with reference to FIG. 8. The pixel circuit shown in FIG. 8 is a pixel circuit arranged in an n-th row among a plurality of pixel circuits arranged in the form of a matrix on the display panel 110. The pixel circuit according to the embodiment includes eight transistors and one capacitor. A first transistor T1, a second transistor T2, and a fifth transistor T5 may be composed of the oxide semiconductor transistor which uses an oxide semiconductor material as an active layer in one embodiment. In the pixel circuit arranged in the n-th row shown in FIG. 8, the fifth transistor T5 among the first to seventh transistors T1 to T7 receives a scan signal SC from the gate line of an (n−k)-th row (k is a natural number less than n). Thus, the fifth transistor T5 receives a scan signal SC from the gate line of another row of pixel circuits. The other transistors T1 to T4 and T6 to T7 receive the scan signal SC and the light emission control signal EM from the gate line of the n-th row. In other words, the gate line of the n-th row supplies the scan signal SC and the light emission control signal EM to the first to fourth and sixth to seventh transistors T1 to T4 and T6 to T7 constituting the pixel circuit arranged in the n-th row, and the gate line of the (n−k)-th row supplies the scan signal to the fifth transistor T5 constituting the pixel circuit arranged in the n-th row. The driving transistor DT supplies a driving current to the light emitting device EL. The driving transistor DT includes the first electrode connected to the first node N1, the gate electrode connected to the second node N2, and the second electrode connected to the third node N3. The first transistor T1 includes a first electrode connected to the second node N2, a second electrode connected to the third node N3, and a gate electrode connected to the first scan signal SC1. The first transistor T1 is turned on when the first scan signal SC1 is high to electrically connect the second node N2 and the third node N3. The second transistor T2 includes a first electrode connected to the first node N1, a second electrode connected to a data line that supplies the data voltage VDATA, and a gate electrode connected to the first scan signal SC1. When the first scan signal SC1 is high, the second transistor T2 is turned on and supplies the data voltage VDATA to the first node N1. The third transistor T3 includes a first electrode connected to a power line that supplies the high potential power supply voltage VDD, a second electrode connected to the first node N1, and a gate electrode connected to the light emission control signal EM. When the light emission control signal EM is low, the third transistor T3 is turned on and supplies the high potential power supply voltage VDD to the first node N1. The fourth transistor T4 includes a first electrode connected to the third node N3, a second electrode connected to the fourth node N4, and a gate electrode connected to the light emission control signal EM.
When the light emission control signal EM is low, the fourth transistor T4 is turned on and electrically connects the third node N3 and the fourth node N4. The fifth transistor T5 includes a first electrode connected to the second node N2, a second electrode connected to a voltage line that supplies the initialization voltage VINI, and a gate electrode connected to the first scan signal SC1. The first transistor T1 and the second transistor T2 receive the first scan signal SC1 from the gate line of the n-th row. Compared with this, the fifth transistor T5 receives the first scan signal SC1 from the gate line of the (n−k)-th row. When the first scan signal SC1 is high, the fifth transistor T5 is turned on and supplies the initialization voltage VINI to the second node N2. The sixth transistor T6 includes a first electrode connected to a voltage line that supplies the anode reset voltage VAR, a second electrode connected to the fourth node N4, and a gate electrode connected to a third scan signal SC3. When the third scan signal SC3 is low, the sixth transistor T6 is turned on and supplies the anode reset voltage VAR to the fourth node N4. The seventh transistor T7 includes a first electrode connected to a voltage line that supplies the bias voltage VOBS, a second electrode connected to the first node N1, and a gate electrode connected to the third scan signal SC3. When the third scan signal SC3 is low, the seventh transistor T7 is turned on and supplies the bias voltage VOBS to the first node N1. The light emitting device EL includes an anode electrode connected to the fourth node N4 and a cathode electrode connected to the low potential power supply voltage VSS. The light emitting device EL receives a driving current from the driving transistor DT and emits light. The storage capacitor CST is connected between the high potential power supply voltage and the second node N2. The storage capacitor CST maintains the data voltage VDATA signal in the pixel. The driving in the refresh period of the embodiment will be described with reference to FIG. 9. Table 3 shows the switching operations of the first to seventh transistors T1 to T7 in first to fourth periods B-R1 to B-R4.

TABLE 3
        T1    T2    T3    T4    T5    T6    T7
B-R1    OFF   OFF   OFF   OFF   OFF   ON    ON
B-R2    OFF   OFF   OFF   OFF   ON    OFF   OFF
B-R3    ON    ON    OFF   OFF   OFF   OFF   OFF
B-R4    OFF   OFF   OFF   OFF   OFF   ON    ON

In the first and fourth periods B-R1 and B-R4, only the sixth transistor T6 and the seventh transistor T7 are turned on and all remaining transistors are turned off. The bias voltage VOBS is applied to the first node N1, and the anode reset voltage VAR is applied to the fourth node N4. The first and fourth periods B-R1 and B-R4 are periods in which a hysteresis of the driving transistor DT is reduced by directly applying the bias voltage VOBS to the driving transistor DT. Also, the first and fourth periods B-R1 and B-R4 are periods in which the anode electrode of the light emitting device EL connected to the fourth node N4 is reset to the anode reset voltage VAR. The operation state of the pixel circuit in the first and fourth periods B-R1 and B-R4 is shown in FIG. 11. The second period B-R2 is an initialization period. In the second period B-R2, only the fifth transistor T5 is turned on, the initialization voltage VINI is applied to the second node N2, and a voltage corresponding to the difference between the initialization voltage VINI and the high potential driving voltage is stored in the storage capacitor CST. All other remaining transistors are turned off during the second period B-R2.
The operation state of the pixel circuit in the second period B-R2 is shown in FIG. 12. The third period B-R3 is a programming period in which the first transistor T1 and the second transistor T2 are turned on. In the third period B-R3, the data voltage VDATA is applied to the first node N1, and a voltage obtained by subtracting the threshold voltage Vth of the driving transistor DT from the data voltage VDATA, that is to say, “VDATA−Vth”, is applied to the second node N2. In the comparison example, the sampling of the threshold voltage Vth of the driving transistor DT and the programming of the data voltage VDATA are performed separately, in the third period A-R3 and in the fourth period A-R4. Compared with this, in the embodiment, the sampling of the threshold voltage of the driving transistor DT and the programming of the data voltage VDATA are performed simultaneously in the third period B-R3. The operation state of the pixel circuit of the embodiment in the third period B-R3 is shown in FIG. 13. In one embodiment, a certain interval (e.g., a predetermined time interval) should be left between the second period B-R2 and the third period B-R3 such that the circuit operations in the initialization period and the programming period do not interfere with each other. The second period B-R2 is a period in which the scan signal applied to the (n−k)-th row is high, and the third period B-R3 is a period in which the scan signal applied to the n-th row is high. The interval between the second period B-R2 and the third period B-R3 can be increased by increasing the value of k. In other words, the fifth transistor constituting the pixel circuit of the n-th row should receive the first scan signal from the gate line of the (n−k)-th row, which is farther away than the (n−1)-th row, rather than from the gate line of the adjacent (n−1)-th row. That is, the value of k is at least 2 in one embodiment. Meanwhile, when a pixel is connected to a gate line located farther away, the area occupied by the connecting wiring between the pixel and the gate line increases. The increase of the area occupied by the connecting wiring reduces the aperture ratio of the display panel, which is not desirable. The inventors of the present disclosure have confirmed, through collective consideration of these points, that a value of k of 2 is appropriate (the sketch following this passage makes the wiring rule explicit). The driving in the holding period of the embodiment will be described with reference to FIG. 10. Table 4 shows the switching operations of the first to seventh transistors T1 to T7 in the first period.

TABLE 4
        T1    T2    T3    T4    T5    T6    T7
B-H1    OFF   OFF   OFF   OFF   OFF   ON    ON

In the first period B-H1, only the sixth transistor T6 and the seventh transistor T7 are turned on and all remaining transistors are turned off. The bias voltage VOBS is applied to the first node N1, and the anode reset voltage VAR is applied to the fourth node N4. The first period is a period in which the hysteresis of the driving transistor DT is reduced by directly applying the bias voltage VOBS to the driving transistor DT. Also, the first period is a period in which the anode electrode of the light emitting device EL connected to the fourth node N4 is reset to the anode reset voltage VAR. The operation of the pixel circuit in the first period B-H1 is, as shown in FIG. 11, the same as the operation of the pixel circuit in the first period B-R1 and the fourth period B-R4 of the refresh period.
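The (n−k)-row wiring rule can be made explicit with a small helper. The sketch below is not from the patent; the function name and the 1-indexed row convention are assumptions, and k = 2 follows the value the text settles on:

```python
# Sketch of the wiring rule described above: in the row-n pixel circuit,
# the fifth transistor T5 takes its first scan signal from the gate line of
# row n-k, while the other transistors use the row-n gate line.

K = 2  # value the embodiment settles on

def scan_sources(n, k=K):
    """Gate-line rows feeding the row-n pixel circuit (rows are 1-indexed)."""
    if n - k < 1:
        raise ValueError("row n-k does not exist; the top rows need dummy stages")
    return {"other_transistors_row": n, "T5_row": n - k}

print(scan_sources(10))  # {'other_transistors_row': 10, 'T5_row': 8}
```

Raising k widens the gap between the initialization pulse (row n−k high) and the programming pulse (row n high), at the cost of longer connecting wiring and a lower aperture ratio, which is the trade-off the paragraph above resolves at k = 2.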
As such, the pixel circuit according to the embodiment may operate by a variable refresh rate (VRR) driving method in which the driving frequency is varied according to the displayed image. The pixel circuit according to the embodiment requires the first and third scan signals SC1 and SC3 and the light emission control signal EM in order to control the first to seventh transistors T1 to T7. The pixel circuit of the comparison example described above requires at least seven control signals, namely the first to fourth scan signals SC1 to SC4 and the first to third emission control signals EM1 to EM3. Compared with this, the pixel circuit according to the embodiment requires a total of three control signals, i.e., the first and third scan signals SC1 and SC3 and the light emission control signal EM. That is to say, the pixel circuit according to the embodiment requires fewer control signals than the pixel circuit according to the comparison example. Therefore, the driving circuit for driving the pixel circuit according to the embodiment is less complex than that of the comparison example. FIG. 14 is a view for describing the bezel area of the display panel 110 according to the embodiment. The display panel 110 may be divided into a display area DA in which pixels are disposed and bezel areas BZ2L and BZ2R in which the GIP driving circuit is disposed. The GIP driving circuit may be disposed on both the left and right edges of the display panel 110, and the pixel circuit may be disposed at the center of the display panel 110. That is, the bezel areas BZ2L and BZ2R may be located on the left side (e.g., a first side) and the right side (e.g., a second side) of the display panel 110, respectively, and the display area DA may be located at the center of the display panel 110. As shown in FIG. 14, the GIP driving circuit according to the embodiment includes three stages in order to supply the first and third scan signals SC1 and SC3 and the light emission control signal EM to the pixel circuit. A plurality of stages constituting the GIP driving circuit may be disposed in the left bezel area BZ2L and the right bezel area BZ2R. The stage for supplying the first scan signal SC1 may be disposed in the left bezel area BZ2L and the right bezel area BZ2R of the display panel 110, respectively, so as to supply the first scan signal SC1 in a double-feeding manner. Through a comparison between the display panel 110 according to the embodiment and the display panel according to the comparison example of FIG. 7, it can be found that the bezel area BZ2L/BZ2R of FIG. 14 is reduced compared to the bezel area BZ1L/BZ1R of FIG. 7. As described above, since the driving of the pixel circuit according to the embodiment is simple, the complexity of the driving circuit can be reduced, and furthermore a slim bezel of the display device can be achieved. While the embodiment of the present invention has been described with reference to the accompanying drawings, it can be understood by those skilled in the art that the present invention can be embodied in other specific forms without departing from its spirit or essential characteristics. Therefore, the foregoing embodiments and advantages are merely exemplary and are not to be construed as limiting the present invention. The present teaching can be readily applied to other types of apparatuses. The description of the foregoing embodiments is intended to be illustrative, and not to limit the scope of the claims.
Many alternatives, modifications, and variations will be apparent to those skilled in the art. In the claims, means-plus-function clauses are intended to cover the structures described herein as performing the recited function and not only structural equivalents but also equivalent structures. | 38,620 |
11862091 | DETAILED DESCRIPTION FIG. 1 is a schematic diagram of a pixel circuit 10 of a display panel. The display panel may be an organic light emitting diode (OLED) panel or a micro-OLED panel. The pixel circuit 10 includes a driving transistor MDRV, a data enable transistor MDEN, an emission control transistor MEM, a storage capacitor C1 and an OLED L1. The driving transistor MDRV is configured to control the OLED L1 to emit light. The data enable transistor MDEN may serve as a switch for receiving input data Vdata. The input data Vdata that arrives at the gate terminal of the driving transistor MDRV may determine the current magnitude flowing through the OLED L1, thereby determining the brightness of the OLED L1. The emission control transistor MEM may serve as a switch for controlling the emission of the OLED L1. The pixel circuit 10 may be operated by receiving a power supply voltage VDD, a data enable signal T_DEN and an emission control signal T_EM. More specifically, the data enable transistor MDEN is controlled by the data enable signal T_DEN, and the emission control transistor MEM is controlled by the emission control signal T_EM. The operations of the pixel circuit 10 include two phases: a scan phase and an emission phase. In the scan phase, the data enable transistor MDEN is turned on and the emission control transistor MEM is turned off. The input data Vdata may be forwarded to the gate terminal of the driving transistor MDRV and stored in the storage capacitor C1. In the emission phase following the scan phase, the emission control transistor MEM is turned on. A driving current Idrv1, which is generated by the driving transistor MDRV based on the input data Vdata, may flow through the OLED L1 to determine the brightness emitted by the OLED L1. In the driving transistor MDRV, the magnitude of the driving current Idrv1 may be determined based on the correspondence of the driving current Idrv1 and the source-to-gate voltage Vsg of the driving transistor MDRV. Based on the mobility of the driving transistor MDRV, the relationship of the driving current Idrv1 and the source-to-gate voltage Vsg may follow a square law or an exponential law. For example, if the pixel circuit 10 is implemented with a thin-film transistor (TFT) process, the mobility is lower and the driving current Idrv1 output by the driving transistor MDRV may be relatively low, and it is more likely that the driving transistor MDRV operates in the saturation region and follows the square law. If the pixel circuit 10 is implemented with a complementary metal-oxide semiconductor (CMOS) process, as in the silicon-based implementation of the micro-OLED panel, the mobility is higher than in the TFT process. Therefore, in order to achieve identical current magnitudes, the driving transistor MDRV may operate in the sub-threshold region and follow the exponential law. No matter whether the driving transistor MDRV operates based on the square law or the exponential law, the driving current Idrv1 and the source-to-gate voltage Vsg have a one-to-one correspondence, so that the driving current Idrv1 may be determined according to the source-to-gate voltage Vsg, which is further determined according to the input data Vdata.
For the sake of brevity, the formula of the square law is described hereinafter, as shown below:

Idrv1 = ½β[Vsg − Vthp]²   (1)

where β represents the gain factor of the driving transistor MDRV and is determined according to the mobility, normalized oxide capacitance, and width/length ratio of the transistor; and Vthp is the threshold voltage of the driving transistor MDRV. Since the source voltage of the driving transistor MDRV equals the power supply voltage VDD and the gate voltage of the driving transistor MDRV equals the input data Vdata, Equation (1) may be rewritten as:

Idrv1 = ½β[VDD − Vdata − Vthp]²   (2)

Note that the threshold voltage Vthp is included in the formula for calculating the driving current Idrv1. In the display panel, the threshold voltage Vthp of different pixels may not be uniform due to process and/or device variations. The mismatch and offset of the threshold voltage Vthp may generate uneven brightness between the pixels, thereby generating the Mura effect (a numerical illustration follows this passage). Therefore, the present invention provides a novel pixel circuit with appropriate controls that allows the Mura effect caused by the offsets of the threshold voltage Vthp to be minimized. FIGS. 2A-2C are schematic diagrams of a pixel circuit 20 of a display panel according to an embodiment of the present invention. The pixel circuit 20 includes a driving transistor MDRV, five control transistors M1-M5, a storage capacitor C2 and a light emitting device L2, realizing a 6T1C structure. The driving transistor MDRV is configured to control the light emitting device L2 to emit light; that is, the driving transistor MDRV may generate a driving current Idrv2 according to the received input data Vdata and output the driving current Idrv2 to drive the light emitting device L2 to emit light. The storage capacitor C2, which may be coupled between the gate terminal of the driving transistor MDRV and a power supply terminal that supplies the power supply voltage VDD, is configured to store the input data Vdata forwarded to the gate terminal of the driving transistor MDRV, similarly to the storage capacitor C1 of the pixel circuit 10. The control transistors M1-M5 may be deployed and controlled appropriately to cancel the influence of the threshold voltage Vthp of the driving transistor MDRV on the driving current Idrv2. In detail, the control transistor M1 is coupled to the upper terminal of the driving transistor MDRV, where the source terminal of the control transistor M1 may be coupled to a power supply terminal to receive the power supply voltage VDD, the drain terminal of the control transistor M1 may be coupled to the upper terminal of the driving transistor MDRV, and the gate terminal of the control transistor M1 may receive an emission control signal T_EM2. The control transistor M1 may serve as a switch for controlling the pixel circuit 20 to receive the power supply voltage VDD. The control transistor M2 is coupled between the upper terminal and the gate terminal of the driving transistor MDRV, to serve as a switch for initialization and data reception. In detail, a first terminal of the control transistor M2 may be coupled to the upper terminal of the driving transistor MDRV, a second terminal of the control transistor M2 may be coupled to the gate terminal of the driving transistor MDRV, and the gate terminal of the control transistor M2 may receive an offset control signal T_AZ.
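A short numerical sketch makes the Mura mechanism of Equation (2) concrete. The square-law model is the one quoted above, but β, VDD, Vdata and the Vthp spread below are made-up example values:

```python
# Numerical sketch of Equation (2): with the basic circuit of FIG. 1, a
# pixel-to-pixel offset in Vthp shifts the driving current and hence the
# brightness -- the Mura effect the following circuits address.

BETA = 1e-6    # gain factor, A/V^2 (illustrative)
VDD = 5.0      # power supply voltage, V (illustrative)
VDATA = 2.5    # input data voltage, V (illustrative)

def idrv1(vthp):
    """Equation (2): Idrv1 = (1/2) * beta * (VDD - Vdata - Vthp)^2."""
    return 0.5 * BETA * (VDD - VDATA - vthp) ** 2

for vthp in (0.6, 0.7, 0.8):   # +/-0.1 V threshold spread across pixels
    print(f"Vthp = {vthp:.1f} V -> Idrv1 = {idrv1(vthp):.3e} A")
# The three currents differ, so pixels given the same Vdata differ in
# brightness.
```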
The control transistor M2 may be used to conduct the gate terminal and the upper terminal of the driving transistor MDRV, to form a diode-connected structure when the input data Vdata is received, so as to obtain the information of the threshold voltage Vthp at the gate terminal of the driving transistor MDRV. The control transistor M3 is coupled between the lower terminal and the gate terminal of the driving transistor MDRV, to serve as a switch for initialization. In detail, a first terminal of the control transistor M3 may be coupled to the lower terminal of the driving transistor MDRV, a second terminal of the control transistor M3 may be coupled to the gate terminal of the driving transistor MDRV, and the gate terminal of the control transistor M3 may receive an initialization control signal T_INI. The control transistor M3 may be used to initialize the driving transistor MDRV to a lower voltage in each operation cycle, allowing the input data Vdata to be successfully received by the driving transistor MDRV after initialization. The control transistor M4 is coupled to the lower terminal of the driving transistor MDRV, to serve as a switch for controlling display data reception. In detail, a first terminal of the control transistor M4 may be coupled to a data input terminal of the pixel circuit 20 for receiving the input data Vdata, a second terminal of the control transistor M4 may be coupled to the lower terminal of the driving transistor MDRV, and the gate terminal of the control transistor M4 may receive a data enable signal T_DEN. The control transistor M4 may be used to control the pixel circuit 20 to receive the input data Vdata. The control transistor M5 is coupled between the lower terminal of the driving transistor MDRV and the light emitting device L2, to serve as a switch for controlling light emission of the pixel circuit 20. In detail, a first terminal of the control transistor M5 may be coupled to the lower terminal of the driving transistor MDRV, a second terminal of the control transistor M5 may be coupled to the light emitting device L2, and the gate terminal of the control transistor M5 may receive an emission control signal T_EM1. The control transistor M5 may be used to control the currents generated by the driving transistor MDRV to flow to the light emitting device L2. The light emitting device L2 includes a first terminal coupled to the control transistor M5 and a second terminal coupled to a reference voltage terminal to receive a ground or negative voltage. The light emitting device L2, which is configured to emit light as driven by the driving current Idrv2 received from the driving transistor MDRV, may be any device capable of emitting light by receiving currents, such as an OLED. The operations of the pixel circuit 20 include three phases: a precharge phase, a scan phase and an emission phase. FIGS. 2A-2C illustrate the circuit structures and related waveforms of the control signals in the pixel circuit 20, where FIG. 2A shows the operations of the precharge phase, FIG. 2B shows the operations of the scan phase, and FIG. 2C shows the operations of the emission phase. Note that in the pixel circuit 20, the driving transistor MDRV and the control transistors M1-M5 are all PMOS transistors, and thus a signal at a low level may turn on the corresponding transistor and a signal at a high level may turn it off. As shown in FIG. 2A, in the precharge phase (also called the initial phase), the control transistors M2, M3, M4 and M5 are turned on, and the control transistor M1 is turned off.
An initial voltage Vini may be received from the data input terminal. Since the control transistors M2, M3 and M4 are all turned on, the source, gate and drain terminals of the driving transistor MDRV are initialized or reset to the initial voltage Vini. Since the control transistor M5 is turned on, the anode of the light emitting device L2 is also initialized or reset to the initial voltage Vini. The control transistor M1 is turned off to prevent a current conducting path through the driving transistor MDRV and the light emitting device L2 from generating unnecessary current consumption. In the precharge phase, the light emitting device L2 should not emit light; hence, the value of the initial voltage Vini should be low enough to keep the current flowing through the light emitting device L2 lower than a specific threshold, so that the light emitting device L2 is prevented from emitting unwanted light. The value of the initial voltage Vini received by the driving transistor MDRV should also be low enough to ensure that the input data Vdata can be successfully input to the driving transistor MDRV in the next phase. Otherwise, if the level of the initial voltage Vini is excessively high, the initial voltage Vini at the gate terminal of the driving transistor MDRV may turn off the driving transistor MDRV. For example, the initial voltage Vini should be smaller than the minimal data voltage by a difference greater than the threshold voltage Vthp, i.e., it should satisfy Vdata − Vini > Vthp. In an embodiment, the minimal input data Vdata may be 4V, and the initial voltage Vini may be equal to 2V to achieve the purposes of turning on the driving transistor MDRV and turning off the light emitting device L2 (a small validity check follows this passage). As shown in FIG. 2B, in the scan phase, the control transistors M2 and M4 are turned on, and the control transistors M1, M3 and M5 are turned off. The input data Vdata may be received from the data input terminal. In this embodiment, the pixel circuit 20 may receive the initial voltage Vini from the data input terminal in the precharge phase, and receive the input data Vdata from the data input terminal in the scan phase. In other words, the initial voltage Vini and the input data Vdata are received from the same terminal, so that the deployment of signal/data lines on the display panel may be simplified. In the scan phase, the driving transistor MDRV receives the input data Vdata through the control transistor M4, and the gate terminal of the driving transistor MDRV starts to be charged. At this moment, the lower terminal of the driving transistor MDRV may be regarded as the source terminal, which receives the input data Vdata to generate the gate voltage Vdata − Vthp′, with the source-to-gate voltage Vsg of the driving transistor MDRV equal to Vthp′, where Vthp′ is the threshold voltage of the driving transistor MDRV in consideration of the body effect (slightly different from the intrinsic threshold voltage Vthp without the body effect). The charges corresponding to the gate voltage Vdata − Vthp′ will be stored in the storage capacitor C2. Since the control transistor M2 is turned on, the upper terminal (which may be regarded as the drain terminal in this phase) of the driving transistor MDRV may also be charged to the voltage Vdata − Vthp′. After the gate voltage of the driving transistor MDRV reaches Vdata − Vthp′, the driving transistor MDRV may become cut off and stop charging at the end of the scan phase. As mentioned above, the driving transistor MDRV is initialized or reset to the initial voltage Vini in the precharge phase prior to the scan phase.
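The precharge constraint just quoted (Vdata − Vini > Vthp, with Vini also low enough to keep L2 dark) can be wrapped in a small check. The |Vthp| and EL turn-on values below are assumed placeholders; the 4 V / 2 V figures follow the example in the text:

```python
# Sanity check for the precharge voltage: Vini must leave enough headroom
# below the minimal data voltage to turn on the driving transistor, while
# staying below the light emitting device's turn-on point.

VTHP = 0.7          # |threshold| of the driving transistor, assumed
EL_TURN_ON = 2.5    # anode voltage below which L2 stays dark, assumed

def vini_is_valid(vini, vdata_min=4.0):
    turns_on_driver = (vdata_min - vini) > VTHP   # Vdata - Vini > Vthp
    keeps_el_dark = vini < EL_TURN_ON             # no unwanted emission
    return turns_on_driver and keeps_el_dark

print(vini_is_valid(2.0))   # True: the 2 V example value works
print(vini_is_valid(3.8))   # False: too high, the driver may stay off
```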
At the start of the scan phase, the initial voltage Vini is low enough to ensure that the driving transistor MDRV is turned on when the input data Vdata arrives. In the scan phase, the control transistor M2 is turned on by the offset control signal T_AZ, and thus the control transistor M2 and the driving transistor MDRV may form a diode-connected structure. The diode-connected structure allows the driving transistor MDRV to generate the gate voltage Vdata − Vthp′ containing the information of the threshold voltage Vthp′, which may be stored in the storage capacitor C2 at the end of the scan phase. As shown in FIG. 2C, in the emission phase, the control transistors M1 and M5 are turned on, and the control transistors M2, M3 and M4 are turned off. The turned-on control transistor M1 may charge the upper terminal of the driving transistor MDRV to the power supply voltage VDD. In this phase, the upper terminal of the driving transistor MDRV may be regarded as the source terminal of the driving transistor MDRV, since this upper terminal receives the power supply voltage VDD, which may be higher than the voltage at the lower terminal of the driving transistor MDRV. The voltage at the lower terminal of the driving transistor MDRV is equal to the anode voltage of the light emitting device L2, VEM, which is a voltage generated by the light emitting device L2 under the driving current Idrv2. At this moment, the source-to-gate voltage Vsg of the driving transistor MDRV may be equal to VDD − (Vdata − Vthp′), which is used to determine the magnitude of the driving current Idrv2 to be used for driving the light emitting device L2. The control transistor M5 is also turned on to pass the driving current Idrv2 to the light emitting device L2, allowing the light emitting device L2 to emit light. Similarly, the operations of the driving transistor MDRV in the pixel circuit 20 may follow the square law or the exponential law based on the implementations and the corresponding device mobility. Taking the square law as an example, the driving current Idrv2 may be calculated as follows:

$$I_{drv2} = \frac{1}{2}\beta\left[V_{sg} - V_{thp}\right]^2;\quad(3)$$

$$I_{drv2} = \frac{1}{2}\beta\left[V_{DD} - (V_{data} - V_{thp}') - V_{thp}\right]^2;\quad(4)$$

where the definition of the parameter β is identical to that described in Equation (1), and will not be repeated herein. Since the threshold voltage Vthp′ includes the body effect, where the drain voltage of the driving transistor MDRV is equal to Vdata, Equation (4) may be rewritten as:

$$I_{drv2} = \frac{1}{2}\beta\left[V_{DD} - \left(V_{data} - \left(V_{thp} + \gamma\left(\sqrt{2\Phi_F + V_{DD} - V_{data}} - \sqrt{2\Phi_F}\right)\right)\right) - V_{thp}\right]^2.\quad(5)$$

In Equation (5), the threshold voltage Vthp may be canceled out to become:

$$I_{drv2} = \frac{1}{2}\beta\left[V_{DD} - V_{data} + \gamma\left(\sqrt{2\Phi_F + V_{DD} - V_{data}} - \sqrt{2\Phi_F}\right)\right]^2;\quad(6)$$

where γ is the body effect parameter, and 2ΦF is the surface potential. As can be seen, the formula for calculating the driving current Idrv2 only includes a signal-dependent term consisting of the input data Vdata, and does not depend on the threshold voltage Vthp, which means that the offset of the threshold voltage Vthp between pixels will not influence the current magnitude and the brightness of the light emitting device L2. Other parameters such as β or γ may not generate significant mismatch or offset that needs to be canceled. As a result, the problem of brightness non-uniformity may be solved. During the operations of the pixel circuit 20, the control transistors M1-M5 are controlled by the initialization control signal T_INI, the data enable signal T_DEN, the offset control signal T_AZ, and the emission control signals T_EM1 and T_EM2.
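To illustrate the cancellation in Equation (6) numerically, the sketch below evaluates Idrv2 for several per-pixel threshold offsets and shows the current is unchanged, since Vthp cancels. All component values (β, γ, 2ΦF, VDD, Vdata) are illustrative assumptions, not values from the patent.

```python
import math

def idrv2(v_dd, v_data, beta, gamma, two_phi_f, v_thp):
    """Driving current of pixel circuit 20 in the emission phase.

    Implements Equation (5): the stored gate voltage Vdata - Vthp'
    already contains the (body-effect) threshold, so Vthp cancels and
    the result matches Equation (6) regardless of the pixel's Vthp.
    """
    v_thp_prime = v_thp + gamma * (math.sqrt(two_phi_f + v_dd - v_data)
                                   - math.sqrt(two_phi_f))
    v_sg = v_dd - (v_data - v_thp_prime)
    return 0.5 * beta * (v_sg - v_thp) ** 2

# Illustrative values: VDD = 5 V, Vdata = 4 V, beta = 1e-6 A/V^2,
# gamma = 0.4 V^0.5, 2*Phi_F = 0.7 V.
for v_thp in (0.6, 0.7, 0.8):  # per-pixel threshold spread
    print(v_thp, idrv2(5.0, 4.0, 1e-6, 0.4, 0.7, v_thp))
# All three printed currents are identical: the Vthp offset cancels out.
```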
Note that the waveforms of these signals shown in FIGS. 2A-2C are merely an example for illustrating a method of handling the operations by allocating the turn-on time and turn-off time of several control signals. For example, at the start of the emission phase, the data enable signal T_DEN turns off the control transistor M4 slightly earlier than the offset control signal T_AZ turns off the control transistor M2, as shown in FIGS. 2A-2C. Alternatively, the data enable signal T_DEN and the offset control signal T_AZ may toggle simultaneously, or the offset control signal T_AZ may toggle earlier. In another embodiment, the control transistors M2 and M4 may be controlled by the same control signal. In addition, at the start of the emission phase, the emission control signal T_EM2 turns on the control transistor M1 slightly earlier than the emission control signal T_EM1 turns on the control transistor M5, as shown in FIGS. 2A-2C. In another embodiment, the toggling order of these control signals may also be modified. Also note that the structure of the pixel circuit 20 shown in FIGS. 2A-2C is merely an example, and may be modified or adjusted to improve the performance. FIGS. 3A-3C are schematic diagrams of a pixel circuit 30 of a display panel according to an embodiment of the present invention. The structure of the pixel circuit 30 is similar to the structure of the pixel circuit 20, so signals and elements having similar functions are denoted by the same symbols. The difference between the pixel circuit 30 and the pixel circuit 20 is that the pixel circuit 30 further includes a coupling capacitor C3, which is coupled to the anode of the light emitting device L2. Similarly, FIGS. 3A-3C illustrate the circuit structures and related waveforms of the control signals in the pixel circuit 30 for the precharge phase, the scan phase and the emission phase, respectively, and the operations are similar to those illustrated in FIGS. 2A-2C. In this embodiment, the control transistor M5 is turned off in the precharge phase. Therefore, the initial voltage Vini is only used to initialize the driving transistor MDRV, while the light emitting device L2 is initialized or reset through the coupling capacitor C3. The coupling capacitor C3 may couple an emission off signal T_EM1′ to the anode of the light emitting device L2 to perform the initialization. At the start of the precharge phase, the emission off signal T_EM1′ has a falling voltage ΔV, which is coupled to the anode of the light emitting device L2 to prevent the light emitting device L2 from emitting light in the precharge phase (and also in the scan phase). In an embodiment, the emission off signal T_EM1′ may be an inverse signal of the emission control signal T_EM1 that controls the control transistor M5. In the pixel circuit 20, the light emitting device L2 is initialized by using the initial voltage Vini. The initial voltage Vini is forwarded through PMOS switches, and may not be easily forwarded if the value of the initial voltage Vini approaches 0 V. In contrast, in the pixel circuit 30, the light emitting device L2 is initialized by coupling the falling voltage ΔV to its anode, allowing the anode voltage to decrease to an extremely low level, even lower than 0 V. This ensures that the light emitting device L2 will not emit unwanted light in the precharge phase and the scan phase.
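The anode step produced by the falling edge ΔV can be estimated with a capacitive divider. The patent only states that ΔV is coupled through C3 to the anode; the divider model below (C3 against an assumed anode/LED capacitance C_led) and all numeric values are illustrative assumptions.

```python
def anode_step(delta_v: float, c3: float, c_led: float) -> float:
    """Voltage step coupled onto the LED anode by a falling edge of
    T_EM1' through the coupling capacitor C3 (simple divider model:
    the anode node is assumed purely capacitive during precharge)."""
    return delta_v * c3 / (c3 + c_led)

# Illustrative: a -5 V edge, C3 = 100 fF, LED/anode capacitance 50 fF.
step = anode_step(-5.0, 100e-15, 50e-15)
print(step)  # about -3.33 V: enough to pull an anode near 0 V below 0 V
```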
As mentioned above, the pixel circuit of the present invention is applicable to a micro-OLED panel, which is implemented with the CMOS process having a higher mobility; hence, a general current operation range of the light emitting diode (e.g., between 10 pA and 5 nA) may be generated by using small data voltages, which are within an input voltage range of 200 mV. This input voltage range is quite small, and cannot easily be used to generate a desired gamma curve. In order to increase the input voltage range, an improvement of the pixel circuit 20 may be applied. Please refer to FIGS. 4A-4C, which are schematic diagrams of a pixel circuit 40 of a display panel according to an embodiment of the present invention. The structure of the pixel circuit 40 is similar to the structure of the pixel circuit 20, so signals and elements having similar functions are denoted by the same symbols. The difference between the pixel circuit 40 and the pixel circuit 20 is that the pixel circuit 40 further includes control transistors M6 and M7 and a capacitor C4 coupled to the gate terminal of the driving transistor MDRV. Similarly, FIGS. 4A-4C illustrate the circuit structures and related waveforms of the control signals in the pixel circuit 40 for the precharge phase, the scan phase and the emission phase, respectively, and the operations are similar to those illustrated in FIGS. 2A-2C. In this embodiment, a first terminal of the capacitor C4 is coupled to the gate terminal of the driving transistor MDRV, and a second terminal of the capacitor C4 is coupled to the control transistors M6 and M7. The control transistor M6 may serve as a switch for receiving a positive reference voltage Vref, where a first terminal of the control transistor M6 may be coupled to the capacitor C4, a second terminal of the control transistor M6 may be coupled to a positive reference voltage terminal for receiving the positive reference voltage Vref, and the gate terminal of the control transistor M6 may receive the emission control signal T_EM1. The positive reference voltage Vref may be any appropriate positive voltage. In an embodiment, the positive reference voltage Vref is equal to or less than the power supply voltage VDD. The control transistor M7 may serve as a switch for receiving the input data Vdata, where a first terminal of the control transistor M7 may be coupled to the capacitor C4, a second terminal of the control transistor M7 may be coupled to the data input terminal for receiving the input data Vdata, and the gate terminal of the control transistor M7 may receive a sampling control signal T_SMP. Another capacitor C5, which is further illustrated in the pixel circuit 40, may be an actually deployed capacitor or a parasitic capacitor. If the capacitor C5 is actually deployed, it may be coupled between the gate terminal of the driving transistor MDRV and a power supply terminal for receiving the power supply voltage VDD, as shown in FIGS. 4A-4C. The control method of the pixel circuit 40 is similar to the control method of the pixel circuit 20, except that the pixel circuit 40 receives an additional sampling control signal T_SMP, which may turn on the control transistor M7 in the scan phase and turn off the control transistor M7 in the precharge phase and the emission phase. In the scan phase, the control transistor M7 is turned on and the control transistor M6 is turned off, and the second terminal of the capacitor C4 receives the input data Vdata through the control transistor M7.
Meanwhile, the first terminal of the capacitor C4 receives the gate voltage of the driving transistor MDRV, which is equal to Vdata − Vthp′, and thus the information of the threshold voltage Vthp′ is stored. Subsequently, in the emission phase, the control transistor M6 is turned on and the control transistor M7 is turned off, and the second terminal of the capacitor C4 receives the positive reference voltage Vref. The voltage difference Vref − Vdata at the second terminal of the capacitor C4 is coupled to its first terminal, to move the gate voltage of the driving transistor MDRV to $V_{data} - V_{thp}' + \frac{C4}{C4+C5}(V_{ref} - V_{data})$. In a similar manner, the driving current Idrv3 for driving the light emitting device L2 of the pixel circuit 40 in the emission phase may be calculated based on the source-to-gate voltage Vsg of the driving transistor MDRV, which is expressed as:

$$I_{drv3} = \frac{1}{2}\beta\left[V_{DD} - V_{data} + \gamma\left(\sqrt{2\Phi_F + V_{DD} - V_{data}} - \sqrt{2\Phi_F}\right) - \frac{C4}{C4+C5}\left(V_{ref} - V_{data}\right)\right]^2;\quad(7)$$

where the threshold voltage Vthp is perfectly canceled. Equation (7) may further be rearranged as:

$$I_{drv3} = \frac{1}{2}\beta\left[V_{DD} - \frac{C5}{C4+C5}V_{data} + \gamma\left(\sqrt{2\Phi_F + V_{DD} - V_{data}} - \sqrt{2\Phi_F}\right) - \frac{C4}{C4+C5}V_{ref}\right]^2.\quad(8)$$

In an embodiment, the positive reference voltage Vref may be equal to the power supply voltage VDD, and thus Equation (8) may further be simplified as:

$$I_{drv3} = \frac{1}{2}\beta\left[\frac{C5}{C4+C5}\left(V_{DD} - V_{data}\right) + \gamma\left(\sqrt{2\Phi_F + V_{DD} - V_{data}} - \sqrt{2\Phi_F}\right)\right]^2.\quad(9)$$

As can be seen in Equation (8) or (9), the input data Vdata is multiplied by a ratio C5/(C4+C5), which means that the same current range may be generated by a larger input data range. It should be noted that the input data range becomes larger if the value of the capacitor C5 is smaller; hence, it is preferable to take the parasitic capacitor at the gate terminal of the driving transistor MDRV to realize the capacitor C5. As a result, based on the structure of the pixel circuit 40, a larger variation of the input data Vdata may be applied to generate a target current range for driving the light emitting device L2, thereby increasing the input data range. The increased input data range facilitates the settings of the gamma curve, so as to realize a satisfactory visual effect. Please refer to FIGS. 5A-5C, which are schematic diagrams of another pixel circuit 50 of a display panel according to an embodiment of the present invention. The structure of the pixel circuit 50 is similar to the structure of the pixel circuit 40, so signals and elements having similar functions are denoted by the same symbols. The difference between the pixel circuit 50 and the pixel circuit 40 is that, in the pixel circuit 50, the control transistor M4 for receiving the initial voltage Vini and the input data Vdata is coupled to the upper terminal of the driving transistor MDRV. Correspondingly, the control transistor M2, which is coupled between the upper terminal (which may be the source terminal) and the gate terminal of the driving transistor MDRV, is controlled by the initialization control signal T_INI, and the control transistor M3, which is coupled between the lower terminal (which may be the drain terminal) and the gate terminal of the driving transistor MDRV, is controlled by the offset control signal T_AZ. That is, the roles played by the control transistor M2 and the control transistor M3 are exchanged. In such a situation, in the scan phase, the control transistor M3 is conducted with the driving transistor MDRV to form the diode-connected structure, while the control transistor M2 is off. Similarly, the capacitors C4 and C5 are used for dividing the voltage of the input data Vdata to increase the input voltage range.
In this embodiment, the capacitor C4 is coupled between the upper terminal and the gate terminal of the driving transistor MDRV, and the capacitor C5 is coupled between the gate terminal of the driving transistor MDRV and a power supply terminal that supplies the power supply voltage VDD. The capacitor C5 may be an actually deployed capacitor or a parasitic capacitor. In the pixel circuit 50, the formula for calculating the driving current Idrv3 for driving the light emitting device L2 may also follow Equation (8) or (9). The structure of the pixel circuit 50 may achieve the effect of increasing the input voltage range by using only 6 transistors, which may be simpler and more cost-saving than the 8-transistor structure of the pixel circuit 40. Similarly, the waveforms of the control signals in the pixel circuit 50 for the precharge phase, the scan phase and the emission phase are shown in FIGS. 5A-5C, respectively. Other operations of the pixel circuit 50 are similar to those described in the above paragraphs, and will not be repeated herein. Please note that the present invention aims at providing a novel pixel circuit for canceling the offset generated from the threshold voltage of the driving transistor. Those skilled in the art may make modifications and alterations accordingly. For example, in the above embodiments, the transistors in the pixel circuit are PMOS transistors; but in other embodiments, similar implementations may be realized by using NMOS transistors, where the control signals and the initial voltage may be modified accordingly. In addition, the driving transistor may be operated in the saturation region to follow the abovementioned equations, or in the sub-threshold region or linear region to follow another formula based on the exponential law, depending on the application of the display panel. The threshold voltage may be canceled in a similar manner when the exponential law is applied. Further, the pixel circuit of the present invention may be applied to any self-luminous panel, including, but not limited to, an OLED panel, a mini-LED panel, a micro-LED panel, and a micro-OLED panel. To sum up, the present invention provides a pixel circuit for canceling the offset generated from the threshold voltage of the driving transistor. In the scan phase, the driving transistor may receive the input data through a lower terminal that is coupled to the light emitting device, and the input data with the information of the threshold voltage is received at the gate terminal of the driving transistor and stored in the storage capacitor, and then canceled in the emission phase. In an embodiment, the light emitting device may be initialized by using a coupling capacitor, which couples a falling voltage to prevent the light emitting device from emitting unwanted light in the precharge phase and the scan phase. In an embodiment, a capacitor is coupled between the data input terminal and the gate terminal of the driving transistor, to divide the input data by a ratio when the input data is applied to generate the driving current in the emission phase, so as to increase the input voltage range. In another embodiment, the driving transistor may receive the input data through an upper terminal, i.e., the source terminal, with a capacitor coupled between the upper terminal and the gate terminal of the driving transistor, so as to achieve the effect of increasing the input voltage range with a fewer number of transistors and a simpler circuit structure.
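As a numerical illustration of the input-range expansion of Equations (8) and (9): the sketch below compares the data-voltage swing needed to cover a given current range with and without the C4/C5 divider. It ignores the body-effect term, and the capacitor ratio, β, and target currents are illustrative assumptions only.

```python
import math

def required_swing(i_min, i_max, beta, ratio):
    """Data-voltage swing needed to cover [i_min, i_max] under the
    square law of Equation (9), ignoring the body-effect term:
    I = 0.5*beta*[ratio*(VDD - Vdata)]^2, so
    (VDD - Vdata) = sqrt(2*I/beta)/ratio and the swing scales as 1/ratio."""
    overdrive = lambda i: math.sqrt(2.0 * i / beta) / ratio
    return overdrive(i_max) - overdrive(i_min)

beta = 1e-6                   # A/V^2, illustrative
i_min, i_max = 10e-12, 5e-9   # 10 pA to 5 nA, the range cited in the text

# Without the divider (ratio = 1) versus with C4 = 4*C5, i.e. C5/(C4+C5) = 0.2:
print(required_swing(i_min, i_max, beta, 1.0))  # ~0.096 V, roughly 100 mV
print(required_swing(i_min, i_max, beta, 0.2))  # ~0.478 V: a 5x larger input range
```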
Those skilled in the art will readily observe that numerous modifications and alterations of the device and method may be made while retaining the teachings of the invention. Accordingly, the above disclosure should be construed as limited only by the metes and bounds of the appended claims. | 31,374 |
11862092 | DETAILED DESCRIPTION OF EMBODIMENTS A detailed description is provided below with reference to the accompanying drawings to clearly and completely explain how the technical solutions in the embodiments of the present application solve the problems of conventional techniques. Unless otherwise defined, all technical and scientific terms used herein have the meaning commonly understood by those skilled in the art. The terminology used in this specification is only for the purpose of describing specific embodiments, and cannot be regarded as a limitation to the present application. Based on the embodiments of the present application, all other embodiments obtained by those skilled in the art without creative work are within the protection scope of the present application. Please refer to FIG. 1 and FIG. 2. FIG. 1 is a schematic view illustrating circuit connection of a display device according to one embodiment of the present application, and FIG. 2 is an equivalent circuit view illustrating resistors in a working state of an operational amplifier of the present application. As shown in FIG. 1, a display device 1 is provided according to one embodiment of the present application. The display device 1 comprises: a pixel circuit 10 and at least one operational amplifier (OP) 11. The pixel circuit 10 comprises at least one pixel unit 101, and the pixel unit 101 comprises a driving transistor M0, a storage capacitor C0, and a light-emitting device D1. The operational amplifier 11 and the driving transistor M0 constitute a negative feedback loop for controlling a voltage at an input terminal of the light-emitting device D1 to be the same as a data voltage Vdata input externally. The operational amplifier 11 is also configured to charge the storage capacitor C0, so that after the storage capacitor C0 is charged, the light-emitting device D1 is driven by the driving transistor M0 to emit light. A charging time for the operational amplifier 11 to charge the storage capacitor C0 is less than a predetermined value; that is, the operational amplifier 11 can output a high voltage, and the output voltage can quickly charge the storage capacitor C0, which improves the charging efficiency, so that the storage capacitor C0 can be fully charged quickly, and the charging time is reduced. Specifically, in the present embodiment, a gate of the driving transistor M0 is electrically connected to a first common terminal A, a drain of the driving transistor M0 is electrically connected to a first power source VDD, and a source of the driving transistor M0 is electrically connected to a second common terminal D. A first terminal of the storage capacitor C0 is electrically connected to the first common terminal A, and a second terminal of the storage capacitor C0 is electrically connected to a second power source VSS. An anode of the light-emitting device D1 serves as its input terminal and is electrically connected to the second common terminal D, and a cathode of the light-emitting device is electrically connected to the second power source VSS. A non-inverting input terminal P of the operational amplifier 11 is electrically connected to the second common terminal D; an inverting input terminal N of the operational amplifier 11 is configured to receive the data voltage Vdata and is coupled to an output terminal OUT of the operational amplifier; and the output terminal OUT of the operational amplifier is coupled to the first common terminal A.
In the present embodiment, the driving transistor M0 is an N-type thin film transistor (TFT), and the light-emitting device D1 is an organic light-emitting diode (OLED). It should be noted that, in other embodiments, the light-emitting device can also be another device that can be excited to emit light by using electroluminescent materials. In the present embodiment, the operational amplifier 11 further comprises a first power terminal and a second power terminal. The first power terminal is configured to connect to a power source VGH which supplies power (a high voltage) to the operational amplifier 11; the second power terminal is configured to connect to a power source VGL which supplies power (a low voltage) to the operational amplifier 11. This way, the operational amplifier 11, after receiving signals of the non-inverting input terminal P and the inverting input terminal N, calculates and outputs a voltage signal between a voltage value of the power source VGH and a voltage value of the power source VGL, so that Vg can be adjusted by negative feedback within a sufficient range. In other embodiments, the operational amplifier 11 can also use the same power sources as the pixel circuit, that is, the first power source VDD and the second power source VSS. Preferably, the display device 1 further comprises: a first resistor R1 and a second resistor R2. The inverting input terminal N of the operational amplifier 11 receives the data voltage Vdata via the first resistor R1 and is electrically connected to the output terminal OUT via the second resistor R2. As shown in FIG. 2, the first resistor R1 and the second resistor R2 can both be in the form of a thin film transistor (TFT). A gate of the TFT as the first resistor R1 is electrically connected to the second power terminal of the operational amplifier 11 to receive the power source VGL, a first electrode of the TFT as the first resistor R1 is connected to the inverting input terminal N of the operational amplifier 11, and a second electrode of the TFT as the first resistor R1 is connected to the data voltage Vdata. A gate of the TFT as the second resistor R2 is electrically connected to the second power terminal of the operational amplifier 11 to receive the power source VGL, a first electrode of the TFT as the second resistor R2 is connected to the output terminal OUT of the operational amplifier 11, and a second electrode of the TFT as the second resistor R2 is connected to the inverting input terminal N of the operational amplifier 11. When the gate of the TFT receives the power source VGL, the TFT is turned on. The TFT can be used as a resistor owing to its characteristics. Referring to FIG. 1, according to the connection manner of the driving transistor M0, the storage capacitor C0, and the operational amplifier 11 in the drawing, the voltage range of the output terminal of the operational amplifier 11 is within (VGL, VGH). According to the working principle of the TFT, it can be known that a current flowing through the light-emitting device D1 is equal to a current flowing through the driving transistor M0, namely:

$$I_{M0} = I_{OLED} = \frac{1}{2}\mu C_{ox}\frac{W}{L}\left(V_{DD} - V_A - V_{th}\right)^2$$

where IM0 is the current flowing through the driving transistor M0, IOLED is the current flowing through the light-emitting device D1, μ is the carrier mobility of the driving transistor M0, Cox is its gate capacitance per unit area, W/L is the aspect ratio of the driving transistor M0, VDD is the voltage at the drain of the driving transistor M0, VA is the voltage at the first common terminal A, and Vth is the threshold voltage of the driving transistor M0.
As the voltage VA at the first common terminal A increases, the current IOLED flowing through the light-emitting device D1 decreases, and the voltage at the second common terminal D decreases. As the voltage VA decreases, the current IOLED increases, and the voltage at the second common terminal D increases. As the data voltage Vdata is input, the voltage at the inverting input terminal N of the operational amplifier 11 at this time is Vdata, and the voltage at the non-inverting input terminal P of the operational amplifier 11 is the voltage at the anode of the light-emitting device D1 (that is, the voltage at the second common terminal D). According to the working principle of the operational amplifier, the output is:

$$V_{out} = u\left(V_P - V_N\right)$$

where u is the amplification factor of the operational amplifier 11, namely u = (R1 + R2)/R1; VP is the non-inverting voltage at the non-inverting input terminal P of the operational amplifier 11, and VN is the inverting voltage at the inverting input terminal N of the operational amplifier 11. According to the working principle of the operational amplifier, when the non-inverting voltage VP is greater than the inverting voltage VN, it can be known from the above that the output voltage Vout of the operational amplifier 11 is high, the voltage at the first common terminal A increases, and the voltage at the second common terminal D decreases. As a result, the non-inverting voltage VP decreases until it is the same as the inverting voltage VN, and at this time the output of the operational amplifier 11 is stabilized. When the non-inverting voltage VP is less than the inverting voltage VN, it can be known from the above that the output voltage Vout of the operational amplifier 11 is low, the voltage at the first common terminal A decreases, and the voltage at the second common terminal D increases. As a result, the non-inverting voltage VP increases until it is the same as the inverting voltage VN, and at this time the output of the operational amplifier 11 is stabilized. The output voltage Vout output by the operational amplifier 11 can quickly charge the storage capacitor C0 and improve its charging efficiency. Through the negative feedback of the operational amplifier circuit, the data voltage Vdata becomes the same as the voltage at the second common terminal D. That is to say, the voltage at the second common terminal D corresponding to the current flowing through the light-emitting device D1 at different gray levels is the data voltage. It can be known that, in each pixel unit 101, the current flowing through the light-emitting device D1 is relevant to the voltage at the second common terminal D, and is irrelevant to the driving transistor M0. The driving transistor M0 provides the driving current for the light-emitting device D1 to emit light. Preferably, the operational amplifier 11 further disconnects a path between the operational amplifier 11 and the pixel unit 101 in response to a switch control signal SW, to ensure that the storage capacitor C0 supplies a stable voltage for the driving transistor M0 operating in a saturation region. Therefore, the driving transistor M0 can supply the driving current for the light emission of the light-emitting device D1. Through switch control, the operational amplifier 11 is disconnected after the operation is completed, thus reducing its influence on the light emission of the pixel circuit, and better controlling the operation of the pixel display device.
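The settling behavior of this feedback loop can be illustrated with a short discrete-time sketch: the op-amp output nudges node A until the anode voltage at node D equals Vdata. The linear A-to-D relation (D falling as A rises, per the text) and every constant below are illustrative assumptions, not the patent's circuit values.

```python
def settle(v_data, u=100.0, vgl=0.0, vgh=15.0, k0=12.0, k1=1.0,
           alpha=0.005, tol=1e-6, max_iter=10000):
    """Discrete-time relaxation of the loop: the op-amp drives node A
    until the anode voltage at node D (non-inverting input P) equals
    Vdata (inverting input N). V_D = k0 - k1*V_A stands in for the
    pixel's A-to-D relation described in the text (A up => D down)."""
    v_a = vgl
    v_d = k0 - k1 * v_a
    for _ in range(max_iter):
        v_d = k0 - k1 * v_a                # anode voltage seen at input P
        err = v_d - v_data                 # V_P - V_N
        v_a = min(vgh, max(vgl, v_a + alpha * u * err))  # output within (VGL, VGH)
        if abs(err) < tol:
            break
    return v_a, v_d

v_a, v_d = settle(v_data=4.0)
print(round(v_d, 4))  # 4.0: node D has settled to the data voltage
```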
Specifically, the display device 1 further comprises a switch S1; the switch S1 is electrically connected between the output terminal OUT of the operational amplifier 11 and the first common terminal A, and a control terminal of the switch S1 is configured to receive the switch control signal SW. The switch S1 may be a switching element, a triode, a metal oxide semiconductor (MOS) transistor, or another element that can realize a switch function. The present application utilizes the negative feedback of the operational amplifier, so that the current flowing through the light-emitting device in each pixel is irrelevant to the driving thin film transistor, thereby solving the problem of non-uniformity of the display panel due to different threshold voltages and channel electron mobilities resulting from immature thin film transistor manufacturing processes. Accordingly, the uniformity of the panel is improved. In addition, the output voltage of the operational amplifier can quickly charge the storage capacitor, improve the charging efficiency, and reduce the charging time. At the same time, under the control of the switch, the operational amplifier is turned off when the operations are completed, thus affecting the light emission of the pixel circuit less and controlling the operations of the pixel display device better. Please refer to FIGS. 3 and 4. FIG. 3 is a schematic view illustrating the display device according to one embodiment of the present application. FIG. 4 is a schematic view illustrating a switch control signal waveform of the embodiment shown in FIG. 3. As shown in FIG. 3, the present application provides a display device 1. The display device 1 comprises a pixel circuit 10 and a plurality of operational amplifiers 11 (OP1 to OPn). The pixel circuit 10 comprises a plurality of pixel units 101 arranged in an array. The pixel unit 101 comprises a driving transistor M0, a storage capacitor C0, and a light-emitting device D1. For details of the circuit, please refer to FIG. 1. The operational amplifier 11 and the driving transistor M0 constitute a negative feedback loop for controlling a voltage at an input terminal of the light-emitting device D1 to be the same as a data voltage input externally. The operational amplifier 11 is further configured to charge the storage capacitor C0, so that after the storage capacitor C0 is charged, the light-emitting device D1 is driven by the driving transistor M0 to emit light, wherein a charging time for the operational amplifier 11 to charge the storage capacitor C0 is less than a predetermined value. In the present embodiment, the pixel units 101 in a same column are electrically connected to the same operational amplifier 11. In detail, a gate of each driving transistor M0 of the pixel units 101 in the same column is electrically connected to a first common terminal A and connected to an output terminal of the same operational amplifier 11. A source of each driving transistor M0 of the pixel units 101 in each column is electrically connected to a second common terminal D and connected to a non-inverting input terminal P of the same operational amplifier 11. The inverting input terminal N of each of the operational amplifiers 11 is electrically connected to a data line to receive the corresponding data voltage Vdata. The pixel units 101 in a same row disconnect paths between the pixel units 101 and the corresponding operational amplifiers 11 (OP1 to OPn) in response to the same switch control signal SW.
As shown in FIG. 3, SW<1> to SW<N> are the switch control signals for disconnecting the connection between the operational amplifiers 11 (OP1 to OPn) and the pixel units 101 after the corresponding operational amplifiers 11 (OP1 to OPn) are stabilized, so that the operational amplifiers 11 (OP1 to OPn) affect the light emission of the pixel units 101 less. The pixel units 101 in the same row respond to a same pixel turn-on signal GOA. As shown in FIG. 3, GOA<1> to GOA<N> are the pixel turn-on signals that can control the pixel units 101 in the corresponding rows. Specifically, the display device 1 comprises a display region 1a and a non-display region 1b surrounding the display region 1a; the pixel circuit 10 is disposed in the display region 1a, and the operational amplifiers 11 (OP1 to OPn) are disposed in the non-display region 1b. Specifically, the operational amplifiers 11 (OP1 to OPn) are arranged between an external driving IC and the pixel circuit 10. The external driving IC inputs the data voltage Vdata through the data line to supply a current, and the current is amplified through the operational amplifier 11 (OP1 to OPn) and stored in the storage capacitor C0. The corresponding operational amplifier 11 (OP1 to OPn) is disconnected after the storage capacitor C0 is stabilized, so that the driving transistor M0 drives the corresponding light-emitting device D1 to emit light. Because the operational amplifier 11 can output a higher voltage, the storage capacitor C0 is quickly charged, and the charging time is reduced. For example, to better control the operation of the pixel display device, it is necessary to disconnect the operational amplifier 11 after the operation is completed, to ensure that the storage capacitor supplies a stable voltage to the driving transistor M0 operating in a saturation region. The switch control signal SW controls whether the operational amplifier 11 operates in the circuit of the display device 1, as shown in FIG. 4 illustrating the switch control signal waveform. It can be known from FIG. 4 that when the switch control signal SW is at a low level, the operational amplifier 11 is in a working state, and the storage capacitor C0 is charged at this time. When the switch control signal SW is at a high level, the operational amplifier 11 is disconnected. At this time, the storage capacitor C0 ensures that the driving transistor M0 operates in the saturation region, so that the driving transistor M0 can supply a driving current for the light-emitting device D1 to emit light. In the present application, driving the light-emitting device to emit light by a current can well compensate for the influence of the threshold voltage and channel electron mobility of the driving TFT on the light-emitting current of the light-emitting device, so that the uniformity of the display panel is improved. Furthermore, the output voltage of the operational amplifier can quickly charge the storage capacitor, improve the charging efficiency, and reduce the charging time; at the same time, through the switch control, the operational amplifier is disconnected after the operations are completed, so the operational amplifier affects the light emission of the pixel circuit less and the operations of the pixel display device are controlled better. In order to simplify the present disclosure, the foregoing embodiments only disclose the components and configurations of some specific examples, as well as the working principle of the present application.
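The SW timing of FIG. 4 (low = op-amp connected and charging C0, high = disconnected and holding) can be sketched as a simple charge-and-hold loop. The function name, the first-order charging model, and all constants are illustrative assumptions.

```python
def pixel_frame(v_target, sw_waveform, v_c0=0.0, charge_step=0.5):
    """Track the storage-capacitor voltage over one SW waveform.

    SW low (0): the op-amp drives C0 toward the settled gate voltage
    v_target (first-order step model). SW high (1): the op-amp is
    disconnected and C0 holds its voltage, so the driving transistor
    stays in saturation and keeps supplying the LED current."""
    history = []
    for sw in sw_waveform:
        if sw == 0:  # charging phase
            v_c0 += charge_step * (v_target - v_c0)
        # sw == 1: hold phase, v_c0 unchanged
        history.append(round(v_c0, 3))
    return history

# SW low for 6 samples (charge), then high for 4 samples (hold/emit).
print(pixel_frame(8.0, [0] * 6 + [1] * 4))
# -> the voltage climbs toward 8 V, then holds constant while the pixel emits
```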
The above embodiments only describe some ways to realize the present application, and are only used to help understand the technical solutions and main ideas of the present application. The description of each embodiment has its own emphasis. For those that are not described in detail in one embodiment, reference may be made to related descriptions of other embodiments. Certainly, the above are only preferable embodiments of the present application, and cannot be used to limit the protection scope of the present application. Those of ordinary skill in the art should understand that they can still modify the technical solutions described in the foregoing embodiments, or make equivalent replacements to some of the technical features; and these modifications or replacements should be deemed to be within the protection scope of the technical solutions of the embodiments of the present application. It should be noted that modifications and improvements can be made by those of ordinary skill in the art without departing from the concept of the present invention. Such modifications and improvements should fall within the protection scope of the present invention. Therefore, the protection scope of the present invention should be defined by the appended claims. | 18,551 |
11862093 | DESCRIPTION OF THE EMBODIMENTS Like features have been designated by like references in the various figures. In particular, the structural and/or functional features that are common among the various embodiments may have the same references and may have identical structural, dimensional and material properties. For the sake of clarity, only the steps and elements that are useful for an understanding of the embodiments described herein have been illustrated and described in detail. Unless indicated otherwise, when reference is made to two elements connected together, this signifies a direct connection without any intermediate elements other than conductors, and when reference is made to two elements coupled together, this signifies that these two elements can be connected or they can be coupled via one or more other elements. Further, a signal which alternates between a first constant state, for example, a low state, noted "0", and a second constant state, for example, a high state, noted "1", is called a "binary signal". The high and low states of different binary signals of a same electronic circuit may be different. In practice, the binary signals may correspond to voltages or to currents which may not be perfectly constant in the high or low state. In the following disclosure, unless otherwise specified, when reference is made to absolute positional qualifiers, such as the terms "front", "back", "top", "bottom", "left", "right", etc., or to relative positional qualifiers, such as the terms "above", "below", "upper", "lower", etc., or to qualifiers of orientation, such as "horizontal", "vertical", etc., reference is made to the orientation shown in the figures. Unless specified otherwise, the expressions "around", "approximately", "substantially" and "in the order of" signify within 10%, and preferably within 5%. A pixel of an image corresponds to the unit element of the image displayed by the display screen. When the display screen is a color image display screen, it generally comprises, for the display of each image pixel, at least three emission and/or light intensity regulation components, also called display sub-pixels, which each emit a light radiation substantially in a single color (for example, red, green, or blue). The superposition of the radiations emitted by the three display sub-pixels provides the observer with the colored sensation corresponding to the pixel of the displayed image. In this case, the assembly formed by the three display sub-pixels used for the display of a pixel of an image is called a display pixel of the display screen. The display of a video on a display screen comprises the display of successive images on the display screen, an image being also called a frame, at a display frequency, also called refreshment frequency, which generally varies between 50 Hz and 240 Hz. FIGS. 1A to 1D illustrate successive steps of a method of displaying a frame on a display screen 10 of a display device 5. Display screen 10 comprises an array of display pixels Pixi,j arranged in M rows and in N columns, i being an integer varying from 1 to M and j being an integer varying from 1 to N. As an example, M is an integer which varies from 1 to 2,000 and N is an integer which varies from 1 to 4,000. As an example, in FIGS. 1A to 1D, M is equal to 5 and N is equal to 12. Display device 5 further comprises a selection circuit SEL which is coupled to the display pixels Pixi,j of each row by at least one row electrode WLi, i varying from 1 to M.
Display device 5 further comprises a data circuit COL coupled to the display pixels Pixi,j of each column by a column electrode BLj, j varying from 1 to N. Data circuit COL may comprise a shift register 20 comprising N memory cells 22j, j varying from 1 to N, and a buffer memory 30 comprising N memory cells 32j, j varying from 1 to N. Data circuit COL receives digital image signals DATA containing the information relative to the image pixels to be displayed. Each memory cell 22j and each memory cell 32j may store the digital image signals containing the information relative to a single display pixel. Selection circuit SEL and data circuit COL receive synchronization signals SYNC, for example, binary signals. A first synchronization signal may indicate, for each image pixel, the end of the transmission of the digital image signals DATA relative to this image pixel. A second synchronization signal may indicate, for each row of the frame to be displayed, the end of the transmission of the digital image signals DATA relative to this row. A third synchronization signal may indicate, for each frame to be displayed, the end of the transmission of the digital image signals DATA relative to this frame. FIG. 2 shows an embodiment of a display pixel Pixi,j comprising a display circuit DISP with light-emitting diodes LED, and a control circuit COM, coupled to row electrode WLi and to column electrode BLj. Control circuit COM is configured to control the light-emitting diodes LED of display circuit DISP from the digital or analog image signals received from column electrode BLj when it receives an activation signal from row electrode WLi. Display screen 10 and display pixels Pixi,j may have the structures described in document WO 2019/016481 or WO 2019/016482. Considering FIGS. 1A to 1D again, the digital image signals DATA relative to the image pixels to be displayed on the first row of display screen 10 are supplied in series to shift register 20, by first memory cell 221, the delivery of the digital signals relative to a new image pixel to memory cell 221 causing the shifting of the digital signals stored in memory cell 22j to the next memory cell 22j+1. FIG. 1A schematically shows the digital image signals relative to an image pixel stored in the first memory cell 221 of shift register 20. In FIG. 1B, all the digital image signals relative to the display pixels to be displayed on the first row of display screen 10 have been delivered in series to shift register 20, and the digital image signals stored in each memory cell 22j of shift register 20 have been loaded into the memory cell 32j of buffer memory 30. Further, the display pixels Pix1,j, j varying from 1 to N, of the first row have been activated by selection circuit SEL. FIG. 1C shows the display pixels Pix1,j, j varying from 1 to N, of the first row displaying the image pixels corresponding to the digital image signals stored in buffer memory 30 and transmitted, in digital or analog form, to display pixels Pix1,j by column electrodes BL1 to BLN. Preferably, display pixels Pix1,j keep on displaying the image pixels relative to the digital signals that they have received as long as they are not selected again by selection circuit SEL. Simultaneously, digital image signals relative to the display pixels to be displayed on the second row of display screen 10 are delivered in series to shift register 20.
In FIG. 1D, all the digital image signals relative to the display pixels to be displayed on the second row of display screen 10 have been delivered in series to shift register 20, and the digital image signals stored in shift register 20 have been loaded into buffer memory 30. The display pixels Pix2,j, with j varying from 1 to N, of the second row are then selected by selection circuit SEL. The previously-described steps are repeated until the Mth row of display screen 10. Selection circuit SEL then receives a synchronization signal SYNC indicating the frame end, and then selects again the first row of display screen 10 for the display of the next frame. FIG. 3 illustrates an embodiment of a method of displaying an image IM of decreased dimensions, called reduced image IM hereafter, on display screen 10 in a low-power mode. Image IM is called reduced since the number of image pixels of image IM is smaller than the number of display pixels of display screen 10. More particularly, the number of rows of image pixels of image IM is smaller than the number M of rows of display screen 10 and/or the number of columns of image pixels of image IM is smaller than the number N of columns of display screen 10. According to an embodiment, selection circuit SEL is controlled to start the display of reduced image IM at a row of number K, indicated by arrow F1 in FIG. 3, other than the first row of display screen 10, and/or data circuit COL only delivers digital image signals from a column of number L, indicated by arrow F2 in FIG. 3, different from the first column of display screen 10. This advantageously makes it possible to decrease the number of display pixels of the display screen to be activated for the display of an image in the low-power mode. FIG. 4 partially and schematically shows an embodiment of a display device 40 with a low-power mode in the case where the image signals delivered to display pixels Pixi,j are digital signals, and FIG. 5 partially and schematically shows a variant of display device 40 with a low-power mode in the case where the image signals delivered to display pixels Pixi,j are analog signals. Display device 40 comprises all the elements of the display device 5 shown in FIG. 1A. In the case where the image signals delivered to display pixels Pixi,j are digital signals (FIG. 4), each memory cell 32j, j varying from 1 to N, of buffer memory 30 may be directly coupled to column electrode BLj. In the case where the image signals delivered to display pixels Pixi,j are analog signals (FIG. 5), each memory cell 32j, j varying from 1 to N, of buffer memory 30 may be coupled to column electrode BLj via a digital-to-analog converter 41j (DAC). Display device 40 further comprises a memory 42, also called a register, and a routing circuit 44 receiving as an input digital image signals DATA and delivering digital image signals DATA to one of memory cells 221 to 22N of shift register 20 according to the signal stored in memory 42. According to an embodiment, memory 42 comprises N bits, B1 to BN, a single bit Bj of memory 42 being at "1", all the other bits of memory 42 being at "0", and the rank j of the memory cell 22j having the digital image signals DATA delivered thereto is the same as the rank of the bit Bj of memory 42 which is at "1". Display device 40 further comprises a module 46 configured to receive a signal SCOL representative of the first column of display screen 10 from which reduced image IM is to be displayed, and configured to store in memory 42 a signal representative of signal SCOL. According to an embodiment, routing circuit 44 comprises N switches SW1 to SWN.
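The row-by-row loading of FIGS. 1A-1D can be summarized in a short behavioral sketch: serially shift one row's data into the register, copy it in parallel to the buffer, select the row, and repeat for all M rows. This is an illustrative model, not the circuit itself; the function names are hypothetical.

```python
def select_row(i, row_data):
    """Stand-in for SEL activating row i while column electrodes BL_j
    drive row_data from the buffer memory cells 32_j."""
    print(f"row {i + 1}: {row_data}")

def display_frame(frame):
    """Model of the FIG. 1A-1D sequence: serial shift-in of one row's
    DATA through cell 22_1, parallel load into buffer memory 30, then
    row selection. `frame` is an M x N list of pixel data words."""
    for i, row in enumerate(frame):
        shift_register = []
        # Column N's word is delivered first, so that after N shifts
        # cell 22_j (index j-1) holds the word for column j.
        for value in reversed(row):
            shift_register.insert(0, value)  # cell 22_1 receives; others shift
        buffer_memory = list(shift_register)  # parallel load into cells 32_j
        select_row(i, buffer_memory)

# 3 x 4 toy frame
display_frame([[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12]])
# row 1: [1, 2, 3, 4] ... each row is shown only after its serial load completes
```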
Each switch SWj, j varying from 1 to N, couples an input node IN, receiving digital image signals DATA, to a terminal of switch SWj, the other terminal of switch SWj being coupled to memory cell 22j. Each switch SWj, j varying from 1 to N, is controlled by a control signal ENj delivered from the bit Bj stored in memory 42. According to an embodiment, when bit Bj is at "1", signal ENj controls the turning-on of switch SWj, and when bit Bj is at "0", signal ENj controls the turning-off of switch SWj. A single one of bits B1 to BN is at "1", so that a single one of switches SW1 to SWN is on. In FIG. 4, routing circuit 44 is described with N switches SW1 to SWN. However, routing circuit 44 may comprise less than N switches. As a variant, routing circuit 44 is configured to deliver digital image signals DATA only to one of memory cells 22j, j varying from 1 to Q, Q being an integer smaller than N. According to another embodiment, memory 42 comprises a number nbits of bits such that N is smaller than 2 raised to the power nbits, for example, 16 bits, and the rank j of the memory cell 22j having digital image signals DATA provided thereto is stored in memory 42. Control signals ENj, j varying from 1 to N, are then delivered by logic circuits, not shown, based on the data stored in memory 42, so that switch SWj is on and all the other switches of routing circuit 44 are off. Display device 40 further comprises a memory 48, also called a register, selection circuit SEL being configured to select first, for the display of the first row of a new frame, the row of display screen 10 according to the signal stored in memory 48. According to an embodiment, memory 48 comprises M bits, B′1 to B′M, a single bit B′i of memory 48 being at "1", all the other bits of memory 48 being at "0", and the rank i of the row which is selected first is the same as the index of the bit B′i of memory 48 which is at "1". In the same way as for memory 42, in another embodiment, memory 48 contains the rank i of the row which is to be selected first. Selection circuit SEL comprises a module 50 configured to receive a signal of indication of the first row SROW of display screen 10 to be selected, and configured to store in memory 48 a signal representative of signal SROW. In a normal operating mode, where each displayed frame has the same dimensions as display screen 10, that is, the same number of image pixel rows as the number of display pixel rows of display screen 10 and the same number of image pixel columns as the number of display pixel columns of display screen 10, signal SROW indicates that the row to be selected for the display of the first row of a new frame is the first row of display screen 10, and signal SCOL indicates that the column of display screen 10 from which each new frame is to be displayed is the first column of display screen 10. In the low-power mode, where each displayed frame has dimensions smaller than those of display screen 10, the row of display screen 10, designated by signal SROW, to be selected for the display of the first row of the frame may be different from the first row of display screen 10, and the column of display screen 10, designated by signal SCOL, from which the frame should be displayed may be different from the first column of display screen 10. FIG. 6 partially and schematically shows a more detailed embodiment of a portion of the shift register 20, of the routing circuit 44, and of the memory 42 shown in FIG. 4 or 5.
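A behavioral sketch of the routing just described: the one-hot register 42 closes a single switch SWj, so serial DATA enters the shift register at cell 22j, which places a reduced image's first column at column j. The model, names, and blank value 0 are illustrative assumptions.

```python
def route_data(data_stream, b_bits, n_cols):
    """One-hot routing model of circuit 44: bit Bj = 1 closes switch SWj,
    so serial DATA enters the shift register at cell 22_j instead of 22_1.
    Cells to the left of the entry point keep a default (blank) value,
    which places the reduced image starting at column j."""
    assert b_bits.count(1) == 1, "memory 42 must hold a single 1"
    entry = b_bits.index(1)            # rank j of the closed switch, 0-based
    register = [0] * n_cols            # 0 = blank / unused column
    for value in data_stream:
        # shift only the cells from the entry point onward
        register[entry + 1:] = register[entry:-1]
        register[entry] = value
    return register

# A reduced-image row of 3 words entering at column 5 of a 12-column screen
# (memory 42 one-hot with B5 = 1, i.e. 0-based index 4):
b = [0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0]
print(route_data([30, 20, 10], b, 12))
# -> [0, 0, 0, 0, 10, 20, 30, 0, 0, 0, 0, 0]
```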
In this embodiment, each memory cell 22j, j varying from 1 to N, corresponds to a D-type flip-flop, three memory cells 22j−1, 22j, and 22j+1 being shown as an example in FIG. 6. Each flip-flop 22j comprises a data input D, two set inputs R and S, two complementary outputs, a single output Q being shown, and is clocked by a clock signal CLK. The D input of memory cell 22j is coupled to the Q output of memory cell 22j−1. Further, each memory cell Bj, with j varying from 1 to N, corresponds to a D-type flip-flop, three memory cells Bj−1, Bj, and Bj+1 being shown as an example in FIG. 6. Each flip-flop Bj comprises a D data input, two set inputs R and S, two complementary outputs, a single output Q being shown, and is clocked by a clock signal CLK′. The D input of memory cell Bj is coupled to the Q output of memory cell Bj−1. The truth table [Table 1] of each memory cell 22j and Bj is the following:

TABLE 1
S    R    D    Qn+1
0    0    0    0
0    0    1    1
1    0    x    1
0    1    x    0
1    1    NA   NA

Each switch SWj may be controlled by a signal ENj and is configured to couple input node IN to the D input of flip-flop 22j when signal ENj is at "1". Signal ENj is delivered by the Q output of memory cell Bj. A reset signal Reset is delivered to the R input of each memory cell Bj, j varying from 2 to N, and to the S input of memory cell B1 (shown as an example by memory cell Bj−1 in FIG. 6). This enables, during a reset step, the display of an image to start at the first column of the display screen by default. The information of the column of screen 10 from which each new frame should be displayed in the low-power mode is loaded into memory 42 via a LOAD input coupled to the D input of first memory cell B1. In the embodiment previously described in relation with FIG. 6, memory cells 22j and Bj are formed by D flip-flops. However, memory cells 22j and Bj may be formed with other types of flip-flops or of logic latches. In the previously-described embodiments, in the low-power mode, a single reduced image is displayed on display screen 10. According to an embodiment, two or more than two reduced images, each having dimensions smaller than the dimensions of display screen 10, may be displayed on display screen 10 in the low-power mode. FIG. 7 shows an embodiment of a circuit 51 for delivering signals SROW and SCOL in the case where P reduced images are to be displayed on the display screen in the low-power mode, P being an integer greater than or equal to 2, for example, varying from 2 to 10. Circuit 51 comprises a memory 52 having, for the kth reduced image with k varying from 1 to P, data representative of the integral number Nbk of rows of the reduced image stored therein. Memory 52 delivers a signal Nb equal to one of values Nbk. Circuit 51 comprises a memory 54 having, for the kth reduced image with k varying from 1 to P, data representative of the first row Lk of display screen 10 at which the first row of the reduced image should be displayed stored therein. Memory 54 delivers signal SROW equal to one of values Lk. Circuit 51 comprises a memory 56 having, for the kth reduced image with k varying from 1 to P, data representative of the first column Ck of display screen 10 at which the first column of the reduced image should be displayed stored therein. Memory 56 delivers signal SCOL equal to one of values Ck. According to an embodiment, each memory 52, 54, and 56 is controlled by a signal Shift_en. According to an embodiment, signal Shift_en is a binary signal.
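The Table 1 behavior, together with the daisy-chained Bj register and its asymmetric reset (S tied to B1, R to the others), can be modeled directly. A minimal sketch; the signal names follow FIG. 6 and the Python structure is an illustrative assumption.

```python
def ff_next(s, r, d, q):
    """Next-state Q(n+1) of one S/R/D flip-flop per Table 1."""
    if s and r:
        raise ValueError("S = R = 1 is not allowed (NA in Table 1)")
    if s:
        return 1   # set
    if r:
        return 0   # reset
    return d       # normal D behavior on the clock edge

def reset_b_register(n):
    """Reset step: S is tied to B1 and R to B2..BN, so the one-hot
    register points at the first column by default."""
    return [ff_next(1, 0, 0, 0)] + [ff_next(0, 1, 0, 0) for _ in range(n - 1)]

def shift_b_register(b, load_bit):
    """One CLK' edge: each Bj samples the Q of Bj-1; B1 samples LOAD."""
    return [load_bit] + b[:-1]

b = reset_b_register(8)
print(b)                      # [1, 0, 0, 0, 0, 0, 0, 0]: EN1 active
b = shift_b_register(b, 0)    # clocking in 0s moves the single 1 rightward
b = shift_b_register(b, 0)
print(b)                      # [0, 0, 1, 0, 0, 0, 0, 0]: DATA now enters at cell 22_3
```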
As an example, when signal Shift_en does not vary, the signals SROW, SCOL, and Nb delivered by memories 52, 54, and 56 are not modified, and when signal Shift_en switches from "0" to "1", the signals SROW, SCOL, and Nb delivered by memories 52, 54, and 56 are modified. As an example, when memory 52 delivers signal Nb equal to value Nbk, with k smaller than P, it may deliver signal Nb equal to value Nbk+1 on reception of a pulse of signal Shift_en. Further, when memory 54 delivers signal SROW equal to value Lk, with k smaller than P, it may deliver signal SROW equal to value Lk+1 on reception of a pulse of signal Shift_en. Further, when memory 56 delivers signal SCOL equal to value Ck, with k smaller than P, it may deliver signal SCOL equal to value Ck+1 on reception of a pulse of signal Shift_en. According to an embodiment, circuit 51 further comprises a counter 58 which increments a signal CPT, and a module 60 receiving signals CPT and Nb, delivering signal Shift_en, and delivering a reset signal resetn to counter 58. Counter 58 increments signal CPT each time it receives an end-of-frame synchronization signal SYNC. According to an embodiment, module 60 is configured to compare the signal CPT with the number Nb supplied by memory 52, and is configured to emit a pulse of signal Shift_en when signal CPT is equal to number Nb and to reset counter 58. Signals Nb, SROW and SCOL are thus modified for each new reduced image to be displayed. Various embodiments and variants have been described. Those skilled in the art will understand that certain features of these various embodiments and variants may be combined, and other variants will occur to those skilled in the art. Finally, the practical implementation of the described embodiments and variants is within the abilities of those skilled in the art based on the functional indications given hereabove. | 18,965 |
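The counter/comparator behavior of circuit 51 can be sketched as follows; the class name and the per-image parameter values are illustrative assumptions.

```python
class Circuit51:
    """Model of FIG. 7: cycle through P reduced images' parameters.

    `params` holds (Nb_k, L_k, C_k) per image. CPT increments on each
    SYNC; when CPT equals Nb, module 60 pulses Shift_en (advancing
    memories 52/54/56 to the next image) and resets the counter."""
    def __init__(self, params):
        self.params = params   # list of (Nb, SROW, SCOL)
        self.k = 0
        self.cpt = 0

    def on_sync(self):
        self.cpt += 1
        nb, _, _ = self.params[self.k]
        if self.cpt == nb:                            # module 60 comparison
            self.k = (self.k + 1) % len(self.params)  # Shift_en pulse
            self.cpt = 0                              # resetn
        return self.params[self.k]

c = Circuit51([(2, 10, 5), (3, 40, 20)])   # two reduced images
for _ in range(6):
    print(c.on_sync())   # parameters switch after 2, then 3, SYNC events
```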
11862094 | DETAILED DESCRIPTION Exemplary embodiments of the present disclosure will hereinafter be described in detail with reference to the accompanying drawings. The same or like reference indicia may be used to designate the same or like features throughout the drawings, and repeated descriptions thereof may be omitted. FIG. 1 illustrates a display device according to an exemplary embodiment of the present disclosure. Referring to FIG. 1, a display device 1000 may include a pixel unit 100, a scan driver 200, a data driver 300, a sensing circuit 400, a power management driver 500, and a timing controller 600. The display device 1000 may be a flat panel display device, a flexible display device, a curved display device, a foldable display device, a bendable display device, or a stretchable display device. Also, the display device may be applied to a transparent display device, a head-mounted display device, a wearable display device, or the like. Further, the display device 1000 may be applied to various electronic devices, such as a smartphone, a tablet, a smart pad, a television (TV), or a monitor. The display device 1000 may be implemented as an organic light-emitting display device. The configuration shown and described is only an example, and the configuration of the display device 1000 is not limited thereto. For example, the display device 1000 may be a self-emissive display device including an inorganic light-emitting element, a liquid crystal display device, or the like. In an embodiment, the display device 1000 may be driven while the period thereof is divided into a display period during which an image is displayed and a sensing period during which the characteristics of driving transistors included in respective pixels PX are sensed. The sensed characteristics, in turn, may be used to detect faults, pixel conditions, and/or to adaptively adjust display output characteristics. The pixel unit 100 may include pixels PX disposed to be coupled to data lines DL1 to DLm (where m is a natural number), scan lines SL1 to SLn (where n is a natural number), control lines CL1 to CLn, and sensing lines SSL1 to SSLm. The pixels PX may receive a voltage of a first driving power VDD through a first driving power terminal (indicated as PT1 in FIGS. 2A and 2B) of the power management driver 500, and a voltage of a second driving power VSS through a second driving power terminal (indicated as PT2 in FIGS. 2A and 2B) of the power management driver 500. Although, in FIG. 1, n scan lines SL1 to SLn are illustrated, the present disclosure is not limited thereto. For example, in accordance with the circuit structure of each pixel PX, one or more control lines, scan lines, emission control lines, sensing lines, or the like may be additionally formed in the pixel unit 100. In an embodiment, the transistors included in each pixel PX may be N-type oxide thin film transistors (TFTs). For example, such an oxide TFT may be a low-temperature polycrystalline oxide (LTPO) TFT. However, this is merely exemplary, and the transistors are not limited thereto. For example, an active pattern or semiconductor layer included in each transistor may include an inorganic semiconductor such as amorphous silicon or polysilicon, an organic semiconductor, or the like. At least one of the transistors included in the display device 1000 may be replaced with a P-type transistor.
The timing controller 600 may generate a data driving control signal DCS, a scan driving control signal SCS, and a power driving control signal PCS in response to externally supplied synchronization signals. The data driving control signal DCS generated by the timing controller 600 may be supplied to the data driver 300, the scan driving control signal SCS may be supplied to the scan driver 200, and the power driving control signal PCS may be supplied to the power management driver 500. Further, the timing controller 600 may supply image data DATA, in which externally supplied input image data is realigned, to the data driver 300. The data driving control signal DCS may include a source start signal and clock signals. The source start signal may control a time point at which the sampling of data starts. The clock signals may be used to control a sampling operation. The scan driving control signal SCS may include a scan start signal, a control start signal, and clock signals. The scan start signal may control the timing of scan signals. The control start signal may control the timing of control signals. The clock signals may be used to shift the scan start signal and/or the control start signal. The power driving control signal PCS may control the supply of the voltage of the first driving power VSS to the first driving power terminal and the voltage of the second driving power VDD to the second driving power terminal, respectively, including the actual signal levels of the voltages. In an embodiment, the power driving control signal PCS may include a sensing control signal SCTL for controlling the actual voltage level of the first driving power VSS at the first driving power terminal. The timing controller 600 may further control the operation of the sensing circuit 400. For example, the timing controller 600 may control the timing at which a reference voltage is supplied to the pixels PX through the sensing lines SSL1 to SSLm and/or the timing at which currents generated in the pixels PX are sensed through the sensing lines SSL1 to SSLm. The scan driver 200 may receive the scan driving control signal SCS from the timing controller 600. The scan driver 200, having received the scan driving control signal SCS, may supply the scan signals to the scan lines SL1 to SLn, and may supply control signals to the control lines CL1 to CLn. For example, the scan driver 200 may sequentially supply the scan signals to the scan lines SL1 to SLn. When the scan signals are sequentially supplied to the scan lines SL1 to SLn, the pixels PX may be selected on a horizontal line basis. For this operation, each scan signal may be set to a gate-on voltage, such as a logic high level, so that a transistor included in the corresponding pixel PX is turned on. Similarly, the scan driver 200 may sequentially supply the control signals to the control lines CL1 to CLn. The control signals may be used to sense or extract driving currents flowing through the pixels, which may be based on currents flowing through the corresponding driving transistors. The timings and waveforms at which the scan signals and the control signals are supplied may be set differently depending on the display period and the sensing period. Although, in FIG. 1, a single scan driver 200 is illustrated as outputting both scan signals and control signals, the present disclosure is not limited thereto. For example, the scan driver 200 may include a first scan driver which supplies scan signals to the pixel unit 100 and a second scan driver which supplies control signals to the pixel unit 100.
The data driver 300 may receive the data driving control signal DCS from the timing controller 600. The data driver 300 may supply data signals, such as sensing data signals, for detecting pixel characteristics to the pixel unit 100 during the sensing period. The data driver 300 may supply data signals for displaying an image to the pixel unit 100 based on the image data DATA during the display period. The sensing circuit 400 may generate compensation values for compensating for the characteristic values of the pixels PX based on sensing values, such as sensing currents, provided from the sensing lines SSL1 to SSLm. In detail, the sensing circuit 400 may calculate or sense the amount of degradation of the driving transistor included in each pixel PX and/or the amount of degradation of the light-emitting element using sensing values, such as sensing currents, provided from the sensing lines SSL1 to SSLm. For example, the sensing circuit 400 may detect and compensate for the change in the characteristics of the light-emitting element occurring due to a change in the threshold voltage of the driving transistor included in each pixel PX, a change in the mobility of the driving transistor, and the degradation of the driving transistor. In an embodiment, the sensing circuit 400 may supply a predetermined reference voltage to the pixels PX through the sensing lines SSL1 to SSLm and receive currents or voltages extracted from the pixels PX during the sensing period. The extracted currents or voltages may correspond to the sensing values, and the sensing circuit 400 may detect the change in the characteristics of the driving transistors based on the sensing values. The sensing circuit 400 may calculate compensation values for compensating for the input image data based on the detected characteristic change. The compensation values may be provided to the timing controller 600 or the data driver 300. During the display period, the sensing circuit 400 may supply a predetermined reference voltage for displaying an image to the pixel unit 100 through the sensing lines SSL1 to SSLm. Although, in FIG. 1, the sensing circuit 400 is illustrated as being a component separate from the timing controller 600, at least some of the components of the sensing circuit 400 may be included in the timing controller 600. For example, the sensing circuit 400 and the timing controller 600 may be formed in a single driver Integrated Circuit (IC). Furthermore, the data driver 300 may also be included in the timing controller 600. Therefore, at least some of the sensing circuit 400, the data driver 300, and the timing controller 600 may be formed in a single driver IC. The power management driver 500 may supply the voltage of the first driving power VSS and the voltage of the second driving power VDD to the pixel unit 100 in response to the power driving control signal PCS. In an embodiment, the voltage at the first driving power terminal (the voltage of the first driving power VSS) may determine a cathode voltage of the light-emitting element, and the voltage at the second driving power terminal (the voltage of the second driving power VDD) may determine a drain voltage of the driving transistor. In an embodiment, the power management driver 500 may supply a first voltage to the first driving power terminal during the sensing period and a second voltage to the first driving power terminal during the display period. The second voltage V2 may be a ground voltage GND, for example.
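For illustration only, the external-compensation idea described above can be sketched in a few lines of Python. The square-law transistor model, the helper name compensate, and the numeric values below are assumptions of this sketch, not taken from the disclosure; they merely show how a measured threshold-voltage shift and mobility ratio could be folded back into a data voltage.

def compensate(v_data_nominal, vth_shift, mobility_ratio):
    """Return an adjusted data voltage for one pixel.

    v_data_nominal -- overdrive voltage programmed into a fresh (reference) pixel
    vth_shift      -- measured change in the driving transistor's threshold voltage (V)
    mobility_ratio -- measured mobility divided by the reference mobility
    """
    # Assuming I = 0.5 * k * mobility * (Vdata - Vth)^2, the original current is
    # restored by scaling the overdrive by 1/sqrt(mobility_ratio) and then
    # re-applying the shifted threshold.
    return v_data_nominal / mobility_ratio ** 0.5 + vth_shift

# Example: a pixel whose Vth rose by 0.2 V and whose mobility fell to 90 %.
print(compensate(3.0, vth_shift=0.2, mobility_ratio=0.9))  # about 3.36 V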
The voltage of the first driving power terminal may be supplied to the pixels PX through a first power line PL1, and the voltage of the second driving power terminal may be supplied to the pixels PX through a second power line PL2. In an embodiment, the first power line PL1 may be provided on a front surface of the pixel unit 100 in the form of a common electrode. Such a first power line PL1 may have a strong possibility of being short-circuited to, or brought into contact with, another line or conductive element due to a crack in or deformation of a display panel including the pixel unit 100, causing a short-circuit or a reduced-impedance fault. Through a short-circuit fault (a "short") on the first power line PL1, an overcurrent may occur, and thus a risk of heat generation may arise. Although the present disclosure shows detection of short-circuit and/or reduced-impedance faults for illustrative purposes, it is not limited thereto. For example, open-circuit and/or increased-impedance faults may similarly be detected. FIG. 2A illustrates an example of a pixel included in the display device of FIG. 1, and FIG. 2B illustrates another example of a pixel included in the display device of FIG. 1. In FIGS. 2A and 2B, for the convenience of description, a pixel which is located on an i-th horizontal line and is coupled to a j-th data line DLj is illustrated. Referring to FIGS. 2A and 2B, a pixel PXij may include a light-emitting element LD, a first pixel transistor T1 (a driving transistor), a second pixel transistor T2, a third pixel transistor T3 or T3′, and a storage capacitor Cst. A first electrode of the light-emitting element LD, which may be an anode or a cathode without limitation, is coupled to a second node N2, and a second electrode of the light-emitting element LD, which may be the other of the anode or cathode without limitation, is coupled to the first driving power terminal, which supplies the voltage of the first driving power VSS. The light-emitting element LD may generate light with predetermined luminance in accordance with the amount of current supplied from the first pixel transistor T1, i.e., the driving transistor. A first electrode of the first pixel transistor T1 may be coupled to a second driving power terminal PT2 to which a voltage of a second driving power VDD is supplied, and a second electrode thereof may be coupled to the first electrode of the light-emitting element LD. A gate electrode of the first pixel transistor T1 may be coupled to a first node N1. The first pixel transistor T1 controls the amount of current flowing into the light-emitting element LD in accordance with the voltage of the first node N1. A first electrode of the second pixel transistor T2 may be coupled to the data line DLj, and a second electrode thereof may be coupled to the first node N1. A gate electrode of the second pixel transistor T2 may be coupled to a scan line SLi. When a scan signal is supplied through the scan line SLi, the second pixel transistor T2 may be turned on, and may then transfer a data signal from the data line DLj to the first node N1. In an embodiment, as illustrated in FIG. 2A, the third pixel transistor T3 may be coupled between a sensing line SSLj and the second electrode, i.e., the second node N2, of the first pixel transistor T1. A gate electrode of the third pixel transistor T3 may be coupled to a control line CLi.
When a control signal is supplied through the control line CLi, the third pixel transistor T3 may be turned on, and may then electrically couple the sensing line SSLj and the second node N2, i.e., the second electrode of the first pixel transistor T1, to each other. In an embodiment, when the third pixel transistor T3 is turned on, a reference voltage may be supplied to the second node N2 through the sensing line SSLj. In an embodiment, when the third pixel transistor T3 is turned on, a current generated in the first pixel transistor T1 may be supplied to a sensing circuit, such as the sensing circuit 400 of FIG. 1, through the sensing line SSLj. In another embodiment, as illustrated in FIG. 2B, a third pixel transistor T3′ may be coupled between the data line DLj and the second electrode, i.e., the second node N2, of the first pixel transistor T1. A gate electrode of the third pixel transistor T3′ may be coupled to the control line CLi. When a control signal is supplied through the control line CLi, the third pixel transistor T3′ may be turned on, and may then electrically couple the data line DLj and the second node N2, i.e., the second electrode of the first pixel transistor T1, to each other. In an embodiment, when the third pixel transistor T3′ is turned on, a reference voltage may be supplied to the second node N2 through the data line DLj. In an embodiment, when the third pixel transistor T3′ is turned on, a current generated in the first pixel transistor T1 may be supplied to a sensing circuit (e.g., 400 of FIG. 1) through the data line DLj. In this way, the pixel PXij of FIG. 2B may receive the data signal through the data line DLj or transfer the current sensed from the pixel PXij to the sensing circuit (e.g., 400 of FIG. 1) through the data line DLj, in a time-division manner. The storage capacitor Cst may be coupled between the first node N1 and the second node N2. The storage capacitor Cst may store a voltage corresponding to a voltage difference between the first node N1 and the second node N2. In an embodiment of the present disclosure, the circuit structure of the pixel PXij is not limited by FIG. 2A or 2B. For example, the light-emitting element LD may be interposed between the second driving power terminal PT2 and the first electrode of the first pixel transistor T1. Further, although in FIG. 2A and FIG. 2B the pixel transistors T1 to T3 are illustrated as being NMOS transistors, the present disclosure is not limited thereto. For example, at least one of the pixel transistors T1 to T3 may be implemented as a PMOS transistor. FIG. 3 illustrates an example of an operation of the display device of FIG. 1, and FIG. 4 illustrates an example of the operation of the pixel of FIG. 2A or FIG. 2B. Referring to FIGS. 1 to 4, the display device 1000 may be driven so that the period thereof is divided into a display period DP during which an image is displayed and a sensing period SP during which the characteristics of a first pixel transistor T1 included in each pixel PX are sensed. In an embodiment, during the sensing period SP, image data may be compensated for based on the sensed characteristic information. During the display period DP, a predetermined reference voltage, which is a constant voltage, may be supplied to the sensing lines SSL1 to SSLm. During the display period DP, the scan driver 200 may sequentially supply scan signals to the scan lines SL1 to SLn. Also, during the display period DP, the scan driver 200 may sequentially supply control signals to the control lines CL1 to CLn.
For an i-th horizontal line, a scan signal and a control signal may be supplied at substantially the same time. Therefore, the second pixel transistor T2 and the third pixel transistor T3 may be simultaneously turned on or off. When the second pixel transistor T2 is turned on, a data signal DS corresponding to image data may be supplied from a respective data line DLj to the first node N1. When the third pixel transistor T3 is turned on, the reference voltage may be supplied to the second node N2. Therefore, the storage capacitor Cst may store a voltage corresponding to a voltage difference between the data signal DS and the reference voltage. Here, since the reference voltage is set to a constant voltage, the voltage stored in the storage capacitor Cst may be stably determined by the data signal DS. When the supply of the scan signal and the control signal to the i-th scan line SLi and the i-th control line CLi is stopped, the second pixel transistor T2 and the third pixel transistor T3 may be turned off. Thereafter, the first pixel transistor T1 may control the amount of current (driving current) supplied to the light-emitting element LD in accordance with the voltage stored in the storage capacitor Cst. Therefore, the light-emitting element LD may emit light with luminance corresponding to the driving current of the first pixel transistor T1. The power management driver 500 may output a voltage of the first driving power VSS to the first driving power terminal PT1. In an embodiment, during the display period DP, the power management driver 500 may output a second voltage V2 of the first driving power VSS to the first driving power terminal PT1. During the display period DP, the voltage of the first driving power terminal PT1 may be output in the form of a constant voltage. The second voltage V2, which is applied for image display, may have a voltage level sufficiently different from a first voltage V1. For example, the second voltage V2 may be a ground voltage. In an embodiment, during the sensing period SP, the scan driver 200 may sequentially supply scan signals to the scan lines SL1 to SLn. Also, during the sensing period SP, the scan driver 200 may sequentially supply control signals to the control lines CL1 to CLn. In an embodiment, the length of the control signals supplied during the sensing period SP may be longer than that of the control signals supplied during the display period DP. Also, during the sensing period SP, a part of the control signal supplied to an i-th control line CLi may overlap a scan signal supplied to an i-th scan line SLi. For example, the control signal supplied to the i-th control line CLi starts to be supplied simultaneously with the scan signal supplied to the i-th scan line SLi, and may be supplied for a time longer than that of the scan signal. When the scan signal and the control signal are simultaneously supplied, the second and third pixel transistors T2 and T3 are turned on. When the second pixel transistor T2 is turned on, a sensing data signal SGV may be supplied from the respective data line DLj to the first node N1. Simultaneously with the supply of the sensing data signal, a reference voltage may be supplied to the second node N2 by the turn-on operation of the third pixel transistor T3. Therefore, the storage capacitor Cst may store a voltage corresponding to a voltage difference between the sensing data signal SGV and the reference voltage. Thereafter, when the supply of the scan signal is stopped, the second pixel transistor T2 may be turned off.
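As a purely illustrative aid to the display-period emission mechanism described above, the following Python sketch models the driving current of T1 with an idealized square-law NMOS equation; the constants K_N and V_TH are assumed values, not parameters from the disclosure.

# Sketch of the pixel's emission behaviour after programming, assuming an
# idealized square-law saturation model for T1. Values are hypothetical.

K_N = 1e-4   # transconductance parameter (A/V^2), assumed
V_TH = 1.0   # threshold voltage of T1 (V), assumed

def driving_current(v_data, v_ref):
    """Current T1 feeds into the light-emitting element LD.

    The storage capacitor Cst holds Vgs = v_data - v_ref after the scan
    and control signals are released, so the current depends on the
    programmed difference rather than on later drift of the source node.
    """
    v_gs = v_data - v_ref
    v_ov = max(v_gs - V_TH, 0.0)     # overdrive; 0 -> T1 off, LD dark
    return 0.5 * K_N * v_ov ** 2     # saturation-region drain current

print(driving_current(v_data=4.0, v_ref=1.0))  # 2e-4 A for 2 V of overdrive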
When the second pixel transistor T2 is turned off, the first node N1 may float. Accordingly, the voltage of the second node N2 may rise, and thus a sensing current may be generated through the first pixel transistor T1. The sensing current may be supplied to the sensing circuit (e.g., 400 of FIG. 1). In an embodiment, during the sensing period SP, the power management driver 500 may output a first voltage V1 of the first driving power VSS to the first driving power terminal PT1 so that characteristics can be calculated. For example, the first voltage V1 may be higher than the reference voltage (e.g., the voltage supplied to the second node N2 through the sensing line SSLj). Further, the first voltage V1 may be set to a voltage higher than the second voltage V2. In other words, the first voltage V1 may be set to a voltage higher than the voltage of the second node N2 so that the light-emitting element LD does not emit light. Accordingly, during the sensing period SP, a sensing current may flow to the sensing circuit 400 along the sensing line SSLj without flowing through the light-emitting element LD. In an embodiment, a transition period TP may be inserted between the display period DP and the sensing period SP. During the transition period TP, the power management driver 500 may be controlled such that the timing at which the first voltage V1 is output to the first driving power terminal PT1 does not overlap the timing at which the second voltage V2 is output. The timing diagram of FIG. 4 indicates signals supplied to the pixel PXij of FIG. 2A or 2B during the display period DP, the transition period TP, and the sensing period SP, and shows an operation scheme that is substantially the same as that described above with reference to FIG. 3. Thus, repeated descriptions thereof may be omitted. FIG. 5 illustrates a power management driver according to an exemplary embodiment of the present disclosure. Referring to FIGS. 1 and 5, the power management driver 500 may include a first power supply 520, a second power supply 540, a controller 560, and a short detector 580. In an embodiment, the power management driver 500 may be mounted on the display device 1000 in the form of a driver IC. However, this is merely exemplary, and at least some of the components of the power management driver 500 may be directly formed on a display panel or may be included in the timing controller 600. The first power supply 520 may supply the voltage of the first driving power terminal PT1 to the first power line PL1. In an embodiment, the first power line PL1 may be coupled to a cathode electrode of a light-emitting element LD included in a pixel PXij. The first power supply 520 may supply the first voltage V1 of the first driving power terminal PT1 to the pixel PXij through the first power line PL1 during a sensing period in response to a first enable signal EN1. Also, the first power supply 520 may supply the second voltage V2 of the first driving power terminal PT1 to the pixel PXij through the first power line PL1 during a display period in response to a second enable signal EN2. In an embodiment, the first power supply 520 may convert input power VIN, supplied from an external power source (e.g., a battery or the like), into first driving power having the first voltage V1 or the second voltage V2. For example, the first power supply 520 may have the structure of a boost converter or an inverting buck-boost converter.
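Returning to the sensing-period condition described above (V1 set above the second-node voltage so that the light-emitting element LD stays dark), the reason can be illustrated with a one-line diode-conduction check. The turn-on voltage below is an assumed illustrative value, not taken from the disclosure.

# Why raising VSS to V1 keeps the LED dark during sensing: a diode conducts
# only when its forward voltage exceeds its turn-on voltage.

V_F_ON = 2.5  # assumed turn-on (forward) voltage of the light-emitting element

def led_emits(v_anode, v_cathode):
    return (v_anode - v_cathode) > V_F_ON

print(led_emits(v_anode=1.0, v_cathode=3.0))  # sensing: VSS = V1 = 3 V -> False (dark)
print(led_emits(v_anode=4.0, v_cathode=0.0))  # display drive: VSS = GND -> True (emits)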
In an embodiment, the first power supply 520 may sequentially control the output of the first voltage V1 and the output of the second voltage V2 in response to the first and second enable signals EN1 and EN2. This operation may be described in greater detail later with reference to FIG. 6. The second power supply 540 may supply the voltage of the second driving power terminal PT2 (the voltage of the second driving power VDD) to the pixel PXij through a second power line PL2 in response to a third enable signal EN3. In an embodiment, the second power line PL2 may be coupled to a drain electrode of a first pixel transistor T1 (or a driving transistor) of the pixel PXij. The second driving power terminal PT2 may have a high-potential Direct Current (DC) voltage. For example, the voltage of the second driving power terminal PT2 may be higher than the first voltage V1 and the second voltage V2. However, this is merely exemplary, and the voltage of the second driving power terminal PT2 may be higher than the second voltage V2, but may be lower than or equal to the first voltage V1. The second power supply 540 may convert the input power VIN supplied from an external power source (e.g., a battery or the like) into the voltage of the second driving power terminal PT2. For example, the second power supply 540 may have the structure of a boost converter. In an embodiment, the second driving power terminal PT2 may supply a voltage having a constant magnitude to the second power line PL2, regardless of a sensing period, a transition period, and a display period. However, this is merely exemplary, and the voltage level of the second driving power terminal PT2 may change if necessary. The controller 560 may control the timings at which the first voltage V1 and the second voltage V2 are respectively output during the transition period in response to the sensing control signal SCTL. In an embodiment, the controller 560 may generate the first enable signal EN1 by delaying the sensing control signal SCTL, and may generate the second enable signal EN2 by inverting and delaying the sensing control signal SCTL. Also, the controller 560 may control the detection operation and/or the protection operation of the short detector 580. For example, the controller 560 may limit the output of a protection signal (or a shutdown signal) based on short detection during a masking period existing in an initial stage of the sensing period. In accordance with an embodiment, the controller 560 may analyze a glitch (or noise) in a detected value, which is detected by the short detector 580, and may then determine whether to stop the driving of the power management driver 500. For example, when the time during which noise in the detected value is detected is longer than a predetermined reference time (e.g., a noise ignorance time), the controller 560 may control the power management driver 500 so that the power management driver 500 is not shut down. Furthermore, when the time during which noise is detected is shorter than or equal to the reference time, the power management driver 500 may be controlled such that the noise is ignored when power shutdown is controlled. The short detector 580 may detect a short in the first power line PL1 based on current flowing through an output terminal (e.g., toward the first power line PL1) during the sensing period.
Since a current (e.g., a sensing current) generated in the first pixel transistor T1 of the pixel PXij flows into the sensing line SSLj through the third pixel transistor T3 during the sensing period, a current path to the normal first power line PL1 is not formed, or alternatively, a very small amount of current flows through the first power line PL1. However, when the first power line PL1 is in contact with or is shorted to other lines, a current path may be formed through a short point. For example, a line for transferring a logic high level (e.g., about 25 V) of a scan signal or the like may be shorted to the first power line PL1. In this case, since the logic high level is higher than the output voltage of the first power supply 520, a negative current sinking from the first power line PL1 to the first power supply 520 may be detected. In contrast, a line for transferring a logic low level (e.g., about −10 V) of a scan signal or the like may be shorted to the first power line PL1. In this case, since the logic low level is lower than the output voltage of the first power supply 520, a positive current that flows from the output terminal of the first power supply 520 into the first power line PL1 may be detected. The short detector 580 may extract such a negative current and a positive current, and may then output a protection signal for protecting the power management driver 500 and the display device 1000 based on the result of a comparison between the extracted values and a reference value. Based on the protection signal, the driving of the power management driver 500 and/or the display device 1000 may be stopped or shut down. FIG. 6 illustrates an example of the power management driver of FIG. 5. For convenience of description, FIG. 6 illustrates an embodiment in which some components of the first power supply 520 and the controller 560 are embodied. Referring to FIGS. 5 and 6, the power management driver 500 may include a first power supply 520, a second power supply 540, a controller 560, and a short detector 580. The first power supply 520 may include a voltage determiner 525, a first switch 201, and a second switch 202. The first power supply 520 may supply a first voltage V1 to a first power line PL1 during a sensing period, and may supply a second voltage V2 to the first power line PL1 during a display period. The voltage determiner 525 may determine the first voltage V1 based on input power VIN. In an embodiment, the voltage determiner 525 may include a digital-to-analog converter (DAC) 522 and a voltage output circuit 524. However, this is merely exemplary, and the voltage determiner 525 may further include an additional boost converter component for generating the voltage of the first driving power terminal (i.e., the voltage of the first driving power VSS). The DAC 522 may output the first voltage V1 having a voltage level corresponding to a driving condition based on the voltage of the input power VIN. For example, the first voltage V1, which is an analog output, may be determined based on an 8-bit digital input value. The voltage output circuit 524 may temporarily store the first voltage V1, and then output the first voltage V1 to an output terminal OT. Although FIG. 6 illustrates a buffer configuration which is operated by DC driving power VCC and outputs an input voltage, and a configuration in which the first switch SW1 is coupled, the present disclosure is not limited thereto.
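The sign-based classification of shorts described above can be summarized, for illustration only, in the following Python sketch; the thresholds and the classify helper are assumptions, since the actual detector operates on mirrored currents inside the driver IC.

# Sign-based short classification at the first power supply's output terminal.
# Thresholds are illustrative assumptions.

POS_LIMIT = 0.050   # A, assumed threshold for sourced (positive) current
NEG_LIMIT = 0.050   # A, assumed threshold for sunk (negative) current

def classify(i_out):
    """i_out > 0: current sourced from the supply into PL1;
       i_out < 0: current sunk from PL1 back into the supply."""
    if i_out > POS_LIMIT:
        return "short to a lower-voltage line (e.g., scan logic-low)"
    if i_out < -NEG_LIMIT:
        return "short to a higher-voltage line (e.g., scan logic-high)"
    return "no short detected"

print(classify(0.001))   # normal sensing-period leakage -> no short
print(classify(-0.200))  # sinking current -> shorted to a higher voltage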
For example, the voltage output circuit 524 may also be implemented as a three-state (tri-state) buffer which further includes an enable terminal for switching on or off a connection between the input and output terminals thereof. The first switch SW1 may be coupled between the voltage determiner 525 (e.g., the voltage output circuit 524) and the first power line PL1. The first switch SW1 may be turned on in response to a first enable signal EN1. When the first switch SW1 is turned on, the first voltage V1 may be supplied to the first power line PL1 through a predetermined node PN1. In an embodiment, the first switch SW1 may be implemented using various structures, such as a Bipolar Junction Transistor (BJT) or a Field Effect Transistor (FET), for example, a Metal Oxide Semiconductor Field Effect Transistor (MOSFET). The second switch SW2 may be coupled between the first power line PL1 and a voltage source to which the second voltage V2 is supplied. In an embodiment, the voltage source may be ground GND, and the second voltage V2 may be a ground voltage. However, this structure is merely exemplary, and the magnitude of the second voltage V2 is not limited thereto. For example, any voltage that is capable of guaranteeing stable driving of a pixel circuit PC and a light-emitting element LD of the pixel PXij during a display period is sufficient as the second voltage V2. The second voltage V2 may be a predetermined negative voltage. Here, the pixel circuit PC may denote a configuration corresponding to the transistors T1, T2, and T3 and the storage capacitor Cst, other than the light-emitting element LD, in the configuration of the pixel PXij of FIG. 2A or FIG. 2B. The second switch SW2 may be turned on in response to a second enable signal EN2. When the second switch SW2 is turned on, the first power line PL1 may be electrically coupled to the ground GND, and the voltage of the first driving power terminal may be set to the ground voltage. In an embodiment, the second switch SW2 may include a first sub-transistor SST1 and a second sub-transistor SST2. The first and second sub-transistors SST1 and SST2 may be coupled in parallel between the node PN1 and the ground. Gate electrodes of the first and second sub-transistors SST1 and SST2 may receive the second enable signal EN2 in common. In an embodiment, the controller 560 may include a first delay component 562 and a second delay component 564. The first delay component 562 may generate the first enable signal EN1 by delaying a sensing control signal SCTL. The sensing control signal SCTL may have an activation level (or a gate-on level) during a sensing period, and may have a deactivation level (or a gate-off level) during a display period. The first delay component 562 may control a time point at which the first enable signal EN1 makes a transition during a transition period. The second delay component 564 may generate the second enable signal EN2 by inverting and delaying the sensing control signal SCTL. In an embodiment, the second delay component 564 may include an inverter which inverts the sensing control signal SCTL. The second delay component 564 may control a time point at which the second enable signal EN2 makes a transition during a transition period. When the first and second switches SW1 and SW2 are simultaneously turned on, the output terminal OT is electrically coupled to the ground GND. Accordingly, the output of the voltage determiner 525 is provided to the ground GND through the node PN1, and thus an overcurrent may occur.
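For illustration, the break-before-make guarantee provided by the delay components 562 and 564 can be modeled in discrete time as follows; the windowed scheme and the gap value are assumptions of this sketch, not the disclosed hardware implementation.

# Dead-time ("break-before-make") generation of EN1/EN2 from SCTL. Each
# switch turns off immediately but turns on only after SCTL has held its
# new level for `gap` extra samples, so EN1 and EN2 never overlap.

def dead_time_enables(sctl, gap):
    en1, en2 = [], []
    for i in range(len(sctl)):
        window = sctl[max(0, i - gap):i + 1]
        en1.append(1 if all(window) else 0)      # SW1: V1 path (sensing)
        en2.append(1 if not any(window) else 0)  # SW2: ground path (display)
    return en1, en2

sctl = [0, 0, 1, 1, 1, 1, 0, 0, 0]    # display -> sensing -> display
en1, en2 = dead_time_enables(sctl, gap=2)
print(en1)  # [0, 0, 0, 0, 1, 1, 0, 0, 0]
print(en2)  # [1, 1, 0, 0, 0, 0, 0, 0, 1]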
By the driving of the first and second delay components 562 and 564, the turn-on times and turn-on timings of the first and second switches SW1 and SW2 may be controlled. Accordingly, a driving error in which the first and second switches SW1 and SW2 are simultaneously turned on, and the occurrence of an overcurrent (and heat generation) attributable to the driving error, may be prevented. Each of the first and second delay components 562 and 564 may include components such as various types of known signal-shaping circuits (or delay circuits) and shift registers. FIG. 7 illustrates an example of an operation of the power management driver of FIG. 6. Referring to FIGS. 1, 6, and 7, the first and second delay components 562 and 564 may respectively output the first and second enable signals EN1 and EN2 by controlling the sensing control signal SCTL. The driving of the display device 1000 may include a display period DP, a sensing period SP, and a first transition period TP1 inserted between the display period DP and the sensing period SP. In an embodiment, the driving scheme of FIG. 7 may be applied to a display-off or power-off operation of the display device 1000. For example, after the display operation has been terminated, sensing of the pixels PX may be performed. The first transition period TP1 may be activated after the display period DP. For example, the first transition period TP1 may be a preparation period for sensing the pixels PX during the sensing period SP. The length of the first transition period TP1 may be set to about 60 μs. However, this is merely exemplary, and the length of the first transition period TP1 may be set depending on resolution, the size of the display device 1000, driving frequency, or the like. The sensing control signal SCTL may have a gate-on level during the sensing period SP, and a gate-off level during the display period DP. Hereinafter, a description will be made under the premise that a logic high level is a gate-on level. During the display period DP, the first enable signal EN1 may have a gate-off level, and the second enable signal EN2 may have a gate-on level. During the display period DP, the first switch SW1 may be turned off, and the second switch SW2 may be turned on. Therefore, the first driving power terminal may be coupled to the ground GND, and may have the second voltage V2. During the first transition period TP1, a scan signal and a control signal are not supplied. In an embodiment, during the first transition period TP1, the first switch SW1 may be turned on after the second switch SW2 has been turned off. The second enable signal EN2 may make a transition from a gate-on level to a gate-off level at a first time point t1 of the first transition period TP1. Therefore, the second switch SW2 may remain turned on during a first period P1 of the first transition period TP1, and may be turned off at the first time point t1. For example, the length of the first period P1 may be set to 1 μs. The second delay component 564 may delay an inverted signal of the sensing control signal SCTL to the first time point t1, and may output the delayed signal as the second enable signal EN2. Because the voltage of the first driving power terminal is maintained at the second voltage V2 during the first period P1, the display of an image may be stably performed during the display period DP. The second enable signal EN2 may have a gate-off level in response to the sensing control signal SCTL during the sensing period SP.
Next, at a second time point t2, the first enable signal EN1 may make a transition from a gate-off level to a gate-on level. The first switch SW1 may be turned on at the second time point t2. When the first switch SW1 is turned on, the voltage of the first driving power terminal may be output as the first voltage V1. The first delay component 562 may delay the sensing control signal SCTL to the second time point t2, and may output the delayed signal as the first enable signal EN1. The first enable signal EN1 may have a gate-on level in response to the sensing control signal SCTL during the sensing period SP. During the second period P2 between the first time point t1 and the second time point t2, both the first and second switches SW1 and SW2 may be turned off. The length of the second period P2 may be set to about 10 μs to 60 μs. Here, since the time during which the first and second switches SW1 and SW2 are turned off is very short, the first power line PL1 may be maintained at the second voltage V2. That is, since the time point at which the second switch SW2 is turned off is clearly separated from the time point at which the first switch SW1 is turned on, heat generation and unnecessary power consumption that may occur when an output value from the output terminal OT is supplied to the ground GND may be prevented or minimized. During the sensing period SP, a scan signal and a control signal may be supplied again to the pixels PX. In an embodiment, after the second time point t2, the sensing period SP may start. Accordingly, during a third period P3, the voltage of the first driving power terminal may rise up to the first voltage V1. Therefore, during the sensing period SP, the first voltage V1 of the first driving power terminal may be stably supplied. However, this is merely exemplary, and the second time point t2 may be the same as the start point of the sensing period SP. FIG. 8 illustrates an example of an operation of the power management driver of FIG. 6. Referring to FIGS. 1, 6, 7, and 8, the first and second delay components 562 and 564 may respectively output the first and second enable signals EN1 and EN2 by controlling the sensing control signal SCTL. In an embodiment, the driving scheme of FIG. 8 may also be applied to a display-on or power-on operation of the display device 1000. For example, before a display operation starts, sensing of the pixels PX may be performed. A second transition period TP2 may be activated after the sensing period SP. For example, the second transition period TP2 may be a preparation period for displaying an image during the display period DP. The length of the second transition period TP2 may be set to about 60 μs. However, this is merely exemplary, and the length of the second transition period TP2 may be set depending on resolution, the size of the display device 1000, driving frequency, or the like. During the second transition period TP2, a scan signal and a control signal are not supplied. In an embodiment, during the second transition period TP2, the second switch SW2 may be turned on after the first switch SW1 has been turned off. The first enable signal EN1 may make a transition from a gate-on level to a gate-off level at a third time point t3 of the second transition period TP2. Therefore, the first switch SW1 may remain turned on during a fourth period P4 of the second transition period TP2, and may be turned off at the third time point t3. In an embodiment, the length of the fourth period P4 may correspond to the sum of the lengths of the first period P1 and the second period P2.
The length of the fourth period P4 may correspond to the time by which the first enable signal EN1 is delayed from the sensing control signal SCTL. The length of the fourth period P4 may be set to about 10 μs to 60 μs. However, this structure is merely exemplary, and the length of the fourth period P4 is not limited thereto. The first delay component 562 may delay the sensing control signal SCTL to the third time point t3, and may output the delayed signal as the first enable signal EN1. Because the voltage of the first driving power terminal is maintained at the first voltage V1 during the fourth period P4, a sensing operation may be stably performed during the sensing period SP. Thereafter, at a fourth time point t4, the second enable signal EN2 may make a transition from a gate-off level to a gate-on level. The second switch SW2 may be turned on at the fourth time point t4. When the second switch SW2 is turned on, the voltage of the first driving power terminal may be output as the second voltage V2. The second delay component 564 may delay an inverted signal of the sensing control signal SCTL to the fourth time point t4, and may output the delayed signal as the second enable signal EN2. During a fifth period P5 between the third time point t3 and the fourth time point t4, both the first and second switches SW1 and SW2 may be turned off. The length of the fifth period P5 may be set to about 10 μs to 60 μs. Here, since the time during which the first and second switches SW1 and SW2 are turned off is very short, the first power line PL1 may be maintained at the first voltage V1. In an embodiment, after the fourth time point t4, the display period DP may start. Accordingly, during a sixth period P6, the voltage of the first driving power terminal may drop to the second voltage V2. That is, since the time point at which the first switch SW1 is turned off is clearly separated from the time point at which the second switch SW2 is turned on through the first transition period TP1 and the second transition period TP2, heat generation and the consumption of unnecessary power that may occur when the output of the output terminal OT is supplied to the ground GND may be prevented or minimized. FIG. 9 illustrates an example of the short detector and the controller included in the power management driver of FIG. 6, and FIG. 10 illustrates an example of an operation of the short detector and the controller of FIG. 9. Referring to FIGS. 6, 9, and 10, the short detector 580 may include a detected value extractor 582 and a protector 584. The detected value extractor 582 may extract a first detected value POSV based on a positive current flowing through the output terminal OT during a sensing period SP, and may extract a second detected value NEGV based on a negative current flowing therethrough. The first detected value POSV and the second detected value NEGV may be extracted as voltage values or current values. In an embodiment, the detected value extractor 582 may extract the first detected value POSV and the second detected value NEGV based on a positive current and/or a negative current that flow into an amplifier-type voltage output circuit 524. During a display period DP, the first switch SW1 is turned off, and thus the current detection and extraction by the detected value extractor 582 are not performed. The protector 584 may be supplied with the first detected value POSV and the second detected value NEGV. The protector 584 may generate a protection signal PTS based on the first detected value POSV and the second detected value NEGV.
In an embodiment, the protector 584 may compare the first detected value POSV with a first reference value REF1, and may compare the second detected value NEGV with a second reference value REF2. When the first detected value POSV or the second detected value NEGV is greater than a predetermined reference, the protector 584 may determine that a short has occurred in the first power line PL1, and may output the protection signal PTS. The protection signal PTS may be used to determine whether to drive the power management driver 500 and/or the display device (e.g., 1000 of FIG. 1). In an initial stage of the sensing period SP, the first and second detected values POSV and NEGV may be in unstable states due to a change in the voltage level of the first driving power terminal and variation in the driving of pixels. For example, in the initial stage of the sensing period SP, the first and second detected values POSV and NEGV may contain unnecessary noise. Therefore, there is a possibility that a short is falsely determined to have occurred due to the noise. In an embodiment, in order to prevent such a driving error, the controller 560 may limit the output of the protection signal PTS during a masking period MSP. For example, the controller 560 may supply a masking signal MSS to the short detector 580 during a transition period TP and the masking period MSP. In FIG. 10, the masking signal MSS may be set to a gate-off level, such as a logic low level, for deactivating the operation of a predetermined component. The masking period MSP may be a preset initial period of the sensing period. For example, the masking period MSP may be a period during which the detection of current/voltage by the detected value extractor 582 and/or the output of the protection signal PTS by the protector 584 are suppressed or masked. The length of the masking period MSP may be set to about 1 ms to 5 ms. The detected value extractor 582 does not extract the first detected value POSV and the second detected value NEGV in response to the masking signal MSS. Alternatively, the protector 584 may block the output of the protection signal PTS in response to the masking signal MSS. As described above, the masking period MSP may be inserted into the initial stage of the sensing period SP, and thus the reliability of short detection and protective driving by the short detector 580 may be improved. FIG. 11 illustrates an example of the short detector included in the power management driver of FIG. 6. Referring to FIGS. 6, 9, 10, and 11, the short detector 580 may include a detected value extractor 582 and a protector 584. In an embodiment, the detected value extractor 582 may be configured in or coupled to an amplifier-type voltage output circuit 524. The voltage output circuit 524 may include a comparator 5241, a first transistor M1, and a second transistor M2. The comparator 5241 may compare an internal driving voltage V0 with the first voltage V1 fed back from the output terminal OT, and may output a voltage corresponding to the result of the comparison. The first transistor M1 may be coupled between the source of first DC power VCC1 and an output terminal OT. The second transistor M2 may be coupled between the output terminal OT and the ground. Gate electrodes of the first and second transistors M1 and M2 may be coupled to an output terminal of the comparator 5241. The first transistor M1 may be a PMOS transistor, and the second transistor M2 may be an NMOS transistor.
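Returning to the masking period described above, its effect can be reduced, for illustration, to a simple gate on the protection signal; the sample timing and the helper name are assumptions of this sketch.

# Masking window at the start of the sensing period: protection output is
# suppressed while detected values are still settling.

MASKING_MS = 3.0   # masking period length, within the 1-5 ms range given above

def gate_protection(t_ms_into_sp, raw_pts):
    """Block PTS while t is inside the masking period MSP."""
    if t_ms_into_sp < MASKING_MS:
        return 0            # MSS asserted: PTS blocked regardless of detection
    return raw_pts

print(gate_protection(0.5, raw_pts=1))  # 0 -> settling noise at start of SP ignored
print(gate_protection(8.0, raw_pts=1))  # 1 -> genuine detection passes through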
Depending on the result of the comparison between the internal driving voltage V0 and the first voltage V1 by the comparator 5241, one of the first and second transistors M1 and M2 is turned on, and thus the first voltage V1 having a constant voltage level may be output through the output terminal OT. The detected value extractor 582 may include third to sixth transistors M3 to M6 and first and second resistors R1 and R2. The detected value extractor 582 may extract a first detected value POSV using the third transistor M3 coupled between the source of the first DC power VCC1 and the ground. A gate electrode of the third transistor M3 may receive the output voltage of the comparator 5241. In an embodiment, the third transistor M3 may be a PMOS transistor. The first resistor R1 may be coupled between the third transistor M3 and the ground. When the third transistor M3 is turned on, a positive current flows into the ground through the third transistor M3 and the first resistor R1, and the voltage of a first sensing node SN1 may be extracted as the first detected value POSV. In an embodiment, the size of the third transistor M3 may be smaller than that of the first transistor M1. For example, a channel length of the third transistor M3 may be shorter than that of the first transistor M1. Accordingly, the positive current may be converted into a value corresponding to the ratio of the channel lengths, and the value may be extracted. The detected value extractor 582 may extract a second detected value NEGV using the fourth to sixth transistors M4 to M6. The fourth transistor M4 may be coupled between the source of second DC power VCC2 and the sixth transistor M6, and the fifth transistor M5 may be coupled between the source of the second DC power VCC2 and the second resistor R2. Gate electrodes of the fourth and fifth transistors M4 and M5 may be coupled to each other, and the gate electrode and the drain electrode of the fourth transistor M4 may be coupled to each other. That is, the fourth and fifth transistors M4 and M5 may be coupled in the structure of a current mirror. In an embodiment, the fourth and fifth transistors M4 and M5 may be PMOS transistors. The sixth transistor M6 may be coupled between the fourth transistor M4 and the ground, and may include a gate electrode coupled to the output terminal of the comparator 5241. The sixth transistor M6 may be an NMOS transistor. When the second transistor M2 and the sixth transistor M6 are turned on, a negative current, or a current obtained by reducing the negative current at a predetermined rate, may flow through a second sensing node SN2 by means of the driving of the current mirror composed of the fourth and fifth transistors M4 and M5. Therefore, the voltage of the second sensing node SN2 may be extracted as the second detected value NEGV. In an embodiment, the sizes of the fourth and fifth transistors M4 and M5 may be smaller than that of the second transistor M2. For example, the channel lengths of the fourth and fifth transistors M4 and M5 may be shorter than that of the second transistor M2. Also, the channel lengths of the fourth transistor M4 and the fifth transistor M5 may be identical to or different from each other. Accordingly, the magnitude of the negative current may be controlled depending on the ratio of the channel lengths. Meanwhile, the voltage levels of the first DC power VCC1 and the second DC power VCC2 may be identical to or different from each other.
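For illustration only, the conversion of a scaled replica current into a sensing-node voltage can be sketched as follows; the scale factor and resistor values are assumed, since the disclosure specifies only that the replica devices are smaller than the output devices.

# Generic sketch of the sense paths around M3 (positive) and M4/M5 (negative):
# a scaled replica of the output current develops a measurable voltage across
# a sense resistor. All values are assumptions.

SCALE = 1000.0   # replica current = output current / SCALE (assumed ratio)
R1 = 1000.0      # ohms, sense resistor for POSV (assumed)
R2 = 1000.0      # ohms, sense resistor for NEGV (assumed)

def posv(i_source):
    """Voltage at sensing node SN1 for a positive (sourced) current."""
    return (i_source / SCALE) * R1

def negv(i_sink):
    """Voltage at sensing node SN2 for a negative (sunk) current magnitude."""
    return (i_sink / SCALE) * R2

print(posv(0.2))  # 0.2 A sourced -> 0.2 V at SN1
print(negv(0.1))  # 0.1 A sunk   -> 0.1 V at SN2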
When a short occurs in the first power line PL1, the absolute value of the first detected value POSV and/or the second detected value NEGV may increase due to the occurrence of an overcurrent. The protector 584 may include a first comparator 5841, a second comparator 5842, a logical OR operating component 5843, and a switch 5844. The first comparator 5841 may compare the first detected value POSV with a first reference value REF1, and may output a first result CR1. When the first detected value POSV is greater than the first reference value REF1, it may be determined that a short has occurred between the first power line PL1 and a line for transferring a voltage lower than the first voltage V1. Here, the first result CR1 may have a first level (e.g., a logic high level). When the first detected value POSV is less than or equal to the first reference value REF1, the first result CR1 may have a second level (e.g., a logic low level). The second comparator 5842 may compare the second detected value NEGV with a second reference value REF2, and may output a second result CR2. When the magnitude (or absolute value) of the second detected value NEGV is greater than the second reference value REF2, it may be determined that a short has occurred between the first power line PL1 and a line for transferring a voltage higher than the first voltage V1. Here, the second result CR2 may have a first level (e.g., a logic high level). When the absolute value of the second detected value NEGV is less than or equal to the second reference value REF2, the second result CR2 may have a second level (e.g., a logic low level). The logical OR operating component 5843 may generate a protection signal PTS based on the result of a logical OR operation on the first result CR1 and the second result CR2. In an embodiment, when at least one of the first result CR1 and the second result CR2 has a first level, the logical OR operating component 5843 may output the protection signal PTS (or a protection signal having a logic high level). In contrast, when both the first result CR1 and the second result CR2 have a second level, the logical OR operating component 5843 does not output a protection signal PTS (alternatively, it outputs a protection signal PTS having a logic low level). In an embodiment, the switch 5844 may control the output of the protection signal PTS during the sensing period SP in response to a masking signal MSS. That is, by the output of the masking signal MSS (or the output of the masking signal MSS having a gate-off level), the switch 5844 may be turned off. Therefore, during a masking period MSP, the output of the protection signal PTS may be blocked. FIG. 12 illustrates an example of the controller included in the power management driver of FIG. 6, and FIG. 13 illustrates an example of an operation of the controller of FIG. 12. Referring to FIGS. 1, 6, 12, and 13, the controller 560 may include a counting controller 566 and a shutdown controller 568. The shutdown controller 568 may count up the time during which a protection signal PTS is output. For example, the shutdown controller 568 may include a counter which counts up a period during which a gate-on level of the protection signal PTS is output. When the counted value is greater than a preset shutdown reference time REFT, the shutdown controller 568 may output the protection signal PTS as a shutdown signal SDS. For example, counting-up may be performed at intervals of 1 ms, and the shutdown reference time REFT may be set to about 5 ms.
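Reduced to boolean form, the protector's decision is two comparisons OR-ed together and gated by the masking switch; the following sketch is a direct, illustrative transcription of that behaviour, with signal levels modeled as Python booleans.

def protection_signal(posv, negv, ref1, ref2, mss_masking):
    cr1 = posv > ref1                # short toward a lower-voltage line
    cr2 = abs(negv) > ref2           # short toward a higher-voltage line
    pts = cr1 or cr2                 # logical OR operating component 5843
    return pts and not mss_masking   # switch 5844 blocks PTS while masked

print(protection_signal(0.3, 0.0, ref1=0.2, ref2=0.2, mss_masking=False))  # True
print(protection_signal(0.3, 0.0, ref1=0.2, ref2=0.2, mss_masking=True))   # False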
Therefore, when the protection signal PTS having a gate-on level is output for a time of 5 ms, the shutdown signal SDS may be output, and driving for protecting the power management driver 500 or the display device 1000 from an overcurrent may be performed. In accordance with an embodiment, the shutdown signal SDS may shut down the operation of the power management driver 500 or the display device 1000. The counting controller 566 may generate a reset signal RST for resetting the counted value based on a first glitch time, that is, the time during which a glitch in the first detected value POSV is detected. The reset signal RST may be provided to the shutdown controller 568. Also, the counting controller 566 may generate the reset signal RST based on a second glitch time, that is, the time during which a glitch in the second detected value NEGV is detected. The first glitch time may correspond to the time during which noise in the first detected value POSV is output. For example, as illustrated in FIG. 13, the glitch time may be defined as a period GT1 or GT2 during which the first detected value POSV decreases below the first reference value REF1. Similarly, the second glitch time during which noise in the second detected value NEGV is output may be defined as a period during which the second detected value NEGV decreases below the second reference value REF2. During short detection, due to noise containing a glitch or the like, the sensitivity and accuracy of short detection may be deteriorated. The counting controller 566 may control the output of the reset signal RST based on the length of the time during which noise occurs. That is, the counting controller 566 may determine whether the respective states of the first and second detected values POSV and NEGV indicate an overcurrent state or a temporary state attributable to noise or the like. In an embodiment, when the first glitch time is greater than a preset noise ignorance time NIT (e.g., this relationship is indicated by GT1>NIT in FIG. 13), the counting controller 566 may generate the reset signal RST. For example, the noise ignorance time NIT may be set to about 0.5 ms. The shutdown controller 568 may reset the counted value in response to the reset signal RST. In an embodiment, when the first glitch time is less than or equal to the noise ignorance time NIT (e.g., this relationship is indicated by GT1≤NIT in FIG. 13), the counting controller 566 does not generate a reset signal RST. That is, when the first glitch time is less than or equal to the noise ignorance time NIT, the corresponding glitch or noise may be ignored. Therefore, the shutdown controller 568 may maintain a count-up operation. When the counted value corresponds to the shutdown reference time REFT (e.g., this is indicated by t6 in FIG. 13), the shutdown signal SDS may be output. Similarly, the counting controller 566 may determine whether to output the reset signal RST depending on the result of a comparison between the second glitch time and the noise ignorance time NIT. In this way, the controller 560 may identify a glitch and noise in the first and second detected values POSV and NEGV detected during the sensing period, and may perform a protection function related to short detection (overcurrent detection), thus improving the reliability of short detection.
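The interplay of the shutdown reference time REFT and the noise ignorance time NIT can be illustrated with the following sketch; the sample granularity and the helper name shutdown_after are assumptions, while the 1 ms counting interval, 5 ms REFT, and 0.5 ms NIT follow the examples given above. Whether counting pauses or continues during a short glitch is also an assumption of this model.

# Shutdown counter with glitch filtering: count up while PTS is asserted,
# ignore dropouts no longer than the noise-ignorance time, and reset when
# a dropout exceeds it.

REFT_MS = 5.0       # shutdown reference time
NIT_MS = 0.5        # noise ignorance time

def shutdown_after(pts_samples, sample_ms=0.5):
    """pts_samples: sequence of 0/1 PTS levels, one per sample_ms."""
    count_ms, gap_ms = 0.0, 0.0
    for level in pts_samples:
        if level:
            gap_ms = 0.0
            count_ms += sample_ms
            if count_ms >= REFT_MS:
                return True          # emit shutdown signal SDS
        else:
            gap_ms += sample_ms
            if gap_ms > NIT_MS:      # glitch too long: treat as no fault
                count_ms = 0.0       # reset signal RST
    return False

print(shutdown_after([1] * 10))              # 5 ms of solid PTS -> True
print(shutdown_after([1, 1, 0, 1, 1] * 4))   # short dropouts ignored -> True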
As described above, the power management driver 500 and the display device 1000 having the power management driver according to embodiments of the present disclosure may clearly separate a turn-off time point of the first switch SW1 and a turn-on time point of the second switch SW2 through a transition period between a display period and a sensing period. Therefore, heat generation and unnecessary power consumption that may occur when the first driving power having a first voltage V1 is supplied to the ground GND during the sensing period may be prevented or minimized. Further, a protection function related to short detection and/or overcurrent detection may be performed in such a way that a masking period is inserted into the initial stage of the sensing period, and a glitch and noise in the detected values POSV and NEGV detected during the sensing period are identified or removed, thus improving the sensitivity and reliability of a function of detecting a short in a first power line PL1 and protecting the first power line PL1. Embodiments of the present disclosure are not limited to the foregoing, and may be expanded in various forms without departing from the spirit and scope of the present disclosure. For example, the detectable faults are not limited to short-circuit faults, but may include other faults based on detectable changes in impedance such as, for example, short-circuit faults, reduced but non-zero impedance faults, increased but non-infinite impedance faults, and open-circuit faults. Moreover, the transition period may be after the display period and before the next sensing period, and/or after the sensing period and before the next display period, without limitation. Although exemplary embodiments of the present disclosure have been described, those of ordinary skill in the pertinent art will appreciate that the present disclosure may be modified and changed in various forms without departing from the scope or spirit of the present disclosure as set forth in the accompanying claims and their equivalents. | 60,769
11862095 | DETAILED DESCRIPTION The following merely illustrates the principles of the disclosure. It will thus be appreciated that those skilled in the art will be able to devise various arrangements which, although not explicitly described or shown herein, embody the principles of the disclosure and are included within its spirit and scope. Furthermore, all examples and conditional language recited herein are principally intended expressly to be only for pedagogical purposes to aid the reader in understanding the principles of the disclosure and the concepts contributed by the inventors to furthering the art, and are to be construed as being without limitation to such specifically recited examples and conditions. Moreover, all statements herein reciting principles, aspects, and embodiments of the disclosure, as well as specific examples thereof, are intended to encompass both structural and functional equivalents thereof. Additionally, it is intended that such equivalents include both currently known equivalents as well as equivalents developed in the future, i.e., any elements developed that perform the same function, regardless of structure. Thus, for example, it will be appreciated by those skilled in the art that any block diagrams herein represent conceptual views of illustrative circuitry embodying the principles of the disclosure. Similarly, it will be appreciated that any flow charts, flow diagrams, state transition diagrams, pseudo code, and the like represent various processes which may be substantially represented in computer readable medium and so executed by a computer or processor, whether or not such computer or processor is explicitly shown. The functions of the various elements shown in the Drawing, including any functional blocks that may be labeled as “processors”, may be provided through the use of dedicated hardware as well as hardware capable of executing software in association with appropriate software. When provided by a processor, the functions may be provided by a single dedicated processor, by a single shared processor, or by a plurality of individual processors, some of which may be shared. Moreover, explicit use of the term “processor” or “controller” should not be construed to refer exclusively to hardware capable of executing software, and may implicitly include, without limitation, digital signal processor (DSP) hardware, network processor, application specific integrated circuit (ASIC), field programmable gate array (FPGA), read-only memory (ROM) for storing software, random access memory (RAM), and non-volatile storage. Other hardware, conventional and/or custom, may also be included. Software modules, or simply modules which are implied to be software, may be represented herein as any combination of flowchart elements or other elements indicating performance of process steps and/or textual description. Such modules may be executed by hardware that is expressly or implicitly shown. Unless otherwise explicitly specified herein, the figures comprising the drawing are not drawn to scale. Referring now to the drawings, wherein like reference numbers refer to like elements throughout the several views, there is shown in FIG. 1 a schematic drawing of the salient features of an image-rendering system in accordance with the present disclosure. Display 100 includes display system 102, sensing system 104, and processor 106.
Display system 102 includes a plurality of display pixels, each of which contains a plurality of OLED-based sub-pixels, pixel-drive circuitry, and associated system electronics. Sensing system 104 includes a plurality of sensors and analog-to-digital conversion (ADC) circuitry that is operatively coupled with the sensors. Processor 106 is preferably an external processor configured to do at least some of: provide image data to display system 102; receive sensor signals from the ADC circuitry; run programs and store data; perform software routines for estimating the health (i.e., state of degradation) of one or more OLEDs in display area 202 (see FIG. 2); determine suitable drive-signal compensation for the OLEDs; and compensate the image data accordingly to provide the compensated drive signals to their corresponding display pixels. In the depicted example, processor 106 is incorporated into an image processing system, which is typically used to drive a conventional display. In some embodiments, however, processor 106 includes hardware and/or firmware that is local to the display system and/or sensing system. In some embodiments, it is preferable that methods for determining the required compensation are integrated into the firmware of a display. FIG. 2 depicts a schematic drawing of a more detailed perspective view of a portion of display 100. Display system 102 includes display region 202, which is the region of the display in which images are generated by emission of light from the plurality of OLED-based pixels. Display region 202 (also referred to as the “active OLED pixel area”) comprises a plurality of display pixels, each of which includes at least one OLED and its associated pixel-drive circuitry, as well as any other associated electronic circuitry. The plurality of OLEDs and their associated drive circuitry are located on substrate 208, which defines the backplane of display region 202. The display area is covered by cover glass 210, and substrate 208 is disposed on the front surface of carrier board 212. Sensing system 104 includes sensors 204 and analog-to-digital conversion (ADC) circuitry 206. Sensors 204 are conventional optical sensors that are arranged around the perimeter of display area 202. In the depicted example, each of sensors 204 is a conventional photodetector; however, any suitable sensor can be used in sensing system 104 without departing from the scope of the present invention. Sensors 204 are arranged such that their respective substrates are oriented orthogonally to the plane of substrate 208 and, as a result, they receive light from the OLEDs at the edges of cover glass 210. In some embodiments, cover glass 210 includes optical elements (e.g., diffractive optical elements, holograms, prisms, angled mirrors, etc.) for improving the ability of sensors 204 to sense the luminescence of one or more of the OLEDs of the display pixels. ADC circuitry 206 comprises one or more conventional analog-to-digital converter circuits and associated additional components suitable for converting the output of sensors 204 into digital signals usable by processor 106. As would be apparent to one skilled in the art, after reading this Specification, optical sensors (e.g., photodetectors) can have limited sensitivity in the range of low brightness of an OLED microdisplay. As a result, the luminance intensity of a single pixel (or sub-pixel) in a display can be too small to be measured by some sensors.
Furthermore, pixel-to-pixel differences in brightness can be extremely small relative to the sensitivity of such sensors, making it difficult, if not impossible, for a post-processing circuit to differentiate the difference in an external compensation system. It is an aspect of the present disclosure, however, that a test image can be generated by the display and used to determine which, if any, OLEDs in the display require compensation and how to compensate them. Specifically, by embedding a periodic function in the output of each OLED under test and employing lock-in amplifier detection techniques in the detection of their output signals, sensor sensitivity can be enhanced. It should be noted that, in some cases, such an image can be limited to the output of only one pixel if the sensitivity of the sensor or sensors is sufficient. Furthermore, methods disclosed herein enable a learning process in which the number of pixels required in a test image can be experimentally determined over time. In addition, the light-collection efficiency of the sensors can be improved by including grating slits on the edges of cover glass 210, thereby further enhancing the overall light-detection sensitivity in an external sensor-based compensation system for an OLED microdisplay. FIG. 3 depicts a block diagram of the salient components of processing circuitry in accordance with the present disclosure. Processing circuitry 206 includes optional preamplification stage 302, lock-in amplifier (LIA) 304, low-pass filter (LPF) 306, optional amplification stage 308, and analog-to-digital converter 310. FIG. 4 depicts operations of a method for compensating one or more pixels in a display in accordance with the present disclosure. Method 400 begins with operation 401, wherein processor 106 applies drive signal 110 to a group of OLEDs within display area 202, where a periodic signal is embedded in the drive signal. In some embodiments, processor 106 applies a drive signal containing a periodic signal to only one OLED in display 100. It should be noted that, typically, a periodic signal having a high modulation frequency is preferred, since a high modulation frequency has less noise influence than a low one. In the depicted example, the applied drive signal is modulated using pulse-width modulation (PWM) having a primary frequency; however, any suitable modulation scheme can be used to modulate the output of the pixels under test without departing from the scope of the present disclosure. FIG. 5 depicts two exemplary modulation signals suitable for embedding in the drive signals provided to OLEDs under test in accordance with the present disclosure. Modulation signal 500 has a 50% duty cycle and is implemented using a single continuous pulse period that occupies the first half of each display frame. Modulation signal 502 also has a 50% duty cycle; however, it is implemented using five short pulse periods within each display frame. For each modulation signal, the primary modulation frequency is given by the display refresh rate (i.e., the number of display frames per second) multiplied by the number of PWM pulses per display frame. For each of exemplary modulation signals 500 and 502, the frame refresh rate is equal to 120 frames per second. As a result, the primary modulation frequencies of modulation signals 500 and 502 are 120 Hz and 600 Hz, respectively. As noted above, a periodic signal having a higher modulation frequency typically has less noise influence than a periodic signal having a lower frequency.
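The primary-frequency relationship just described is simple arithmetic; the following Python sketch (an editor's illustration with assumed names, not part of the disclosed embodiments) reproduces the two figures given above:

```python
def primary_modulation_frequency(refresh_rate_fps: int, pwm_pulses_per_frame: int) -> int:
    """Primary PWM frequency = display refresh rate x PWM pulses per display frame."""
    return refresh_rate_fps * pwm_pulses_per_frame

# The two exemplary signals at a 120-frames-per-second refresh rate:
print(primary_modulation_frequency(120, 1))  # modulation signal 500 -> 120 Hz
print(primary_modulation_frequency(120, 5))  # modulation signal 502 -> 600 Hz
```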
As a result, modulation signal 502 would normally be preferred over modulation signal 500. At operation 402, sensors 204 detect light from display area 202. As will be apparent to one skilled in the art, after reading this Specification, the light detected by sensors 204 is a “mixed-luminance signal” that includes the optical signals generated by each driven OLED (i.e., “pixel luminance”), as well as optical noise comprising stray light from the environment surrounding display area 202. In some cases, the optical noise luminance can be stronger than the pixel luminance; therefore, the optical noise luminance will dominate the sensor output. As a result, sensor output 108 will provide incorrect optical information to processor 106, leading to incorrect compensation for the aging of OLEDs in the display. It is necessary, therefore, to selectively pick out the pixel luminance from the noisy mixed-luminance signal so that the pixel aging can be accurately determined and proper compensation can be applied to the OLEDs. Optional preamplification stage 302 is a conventional preamplifier suitable for amplifying sensor output 214 without adding significant noise to the signal. It should be noted that, after the pre-amplification stage, the PWM modulation frequency will remain dominant in sensor output 214. At operation 403, synchronous demodulation is used to detect the primary frequency component in sensor output 214. In the depicted example, synchronous demodulation is performed via LIA 304, which selectively detects the primary frequency component in sensor output 214 based on the known modulation applied to drive signal 110 provided by processor 106. The known modulation frequency is typically provided to LIA 304 by processor 106 so that it can be used as a demodulation reference frequency. LIA 304 is a compact lock-in amplifier circuit fabricated on an application-specific integrated circuit (ASIC) that is external to the backplane of display 100. LIA 304 detects the primary frequency of the PWM component in sensor output 214, thereby enabling the pixel luminescence to be isolated from noise signals arising from the environment around display area 202. In some embodiments, LIA 304 selectively chooses the primary modulated signal, demodulates it, and obtains the DC component, which can differ from the brightness intensity. At operation 404, residual frequencies from the LIA are filtered out via conventional low-pass filter (LPF) 306. At optional operation 405, the output of low-pass filter 306 is amplified by optional amplification stage 308. In the depicted example, amplification stage 308 comprises an operational amplifier, as well as other associated circuitry. At operation 406, the output of low-pass filter 306 is converted to a digital signal via conventional analog-to-digital converter (ADC) 310 and provided to processor 106 as sensing signal 108. The ability to selectively detect the primary frequency of the modulated output of one or more OLEDs from a display affords embodiments in accordance with the present disclosure significant advantages over the prior art, including:
i. external sensor sensitivity can be enhanced and a difference in negligibly low brightness ranges can be detected in pixel compensation methods in an OLED-based microdisplay; or
ii. valid signal components can be selectively chosen from the output mixed with considerable noise components; or
iii. the combination of i and ii.
It should be noted that, in some embodiments, one or both of preamplification stage 302 and amplification stage 308 are not included in processing circuitry 206.
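For readers unfamiliar with lock-in detection, the following Python sketch illustrates the principle behind operation 403: mixing the noisy sensor output with in-phase and quadrature references at the known modulation frequency and averaging (the role played by LPF 306) recovers the weak modulated pixel luminance. All signal parameters here are assumptions for illustration; this is not a model of LIA 304 itself.

```python
import numpy as np

fs = 48_000                     # assumed sample rate of the digitized sensor output
f_mod = 600.0                   # known PWM primary frequency (120 fps x 5 pulses)
t = np.arange(0, 1.0, 1 / fs)

pixel = 0.05 * (np.sin(2 * np.pi * f_mod * t) > 0)  # weak 50%-duty PWM luminance
noise = 0.5 * np.random.randn(t.size)               # stray light / electrical noise
sensor_out = pixel + noise                          # the "mixed-luminance signal"

# Mix with in-phase and quadrature references, then average (low-pass filter).
i_ref = np.sin(2 * np.pi * f_mod * t)
q_ref = np.cos(2 * np.pi * f_mod * t)
amplitude = 2 * np.hypot((sensor_out * i_ref).mean(), (sensor_out * q_ref).mean())

# Expected fundamental of a 0-to-0.05 square wave: 2 * 0.05 / pi ~= 0.032,
# recovered despite noise roughly ten times stronger than the pixel signal.
print(f"recovered amplitude: {amplitude:.4f}")
```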
At operation 407, processor 106 determines a suitable compensation for drive signal 110 during image generation based on sensing signal 108. In some embodiments, additional compensation methods are used to augment the methods and apparatus described herein, such as compensation methods described in U.S. Provisional Patent Application Ser. No. 63/209,215, filed Jun. 10, 2021, entitled “OLED-Based Display Having Pixel Compensation and Method”, which is incorporated herein by reference. In some embodiments, it is desirable to improve the amount of light collected by sensors 204 by including one or more optical elements on or in cover glass 210. FIGS. 6A and 6B depict schematic drawings of exemplary optical elements suitable for inclusion in the cover glass of a display to improve the light-collection efficiency of externally located sensors in accordance with the present disclosure. Arrangement 600 includes cover glass 210, sensor 204, and grating 602. Grating 602 comprises a pattern of slits 604 formed at the edge of cover glass 210. Slits 604 are narrow (e.g., of order 10 microns, or tens of microns, wide, etc.) features formed into the sidewalls of the cover glass. In the depicted example, slits 604 are formed into the sidewalls of the cover glass using laser lithography; however, any suitable method for forming slits 604 can be used. Methods suitable for forming slits 604 include, without limitation, laser-assisted etching, single-point diamond machining, laser ablation, particle blasting, and the like. In some embodiments, grating 602 includes patterns of material deposited onto the sidewalls of cover glass 210 via methods such as shadow-mask deposition, and the like. Arrangement 606 includes cover glass 210, sensor 204, facet 608, and masking material 610. Facet 608 is a beveled edge of the cover glass that is configured to increase the area available for mounting sensor 204. In some cases, the beveled edge acts to refract more light into the sensing region of a sensor. Masking material 610 is a material suitable for blocking and/or absorbing light received at facet 608. In the depicted example, masking material 610 is black photoresist material (i.e., black matrix); however, myriad materials suitable for use in masking material 610 will be apparent to the skilled artisan after reading this Specification. It is to be understood that the present specification teaches some examples of an exemplary embodiment of the present invention and that many variations of the invention can easily be devised by those skilled in the art after reading this disclosure and that the scope of the present invention is to be determined by the following claims. | 16,564 |
11862096 | DETAILED DESCRIPTION In the specification, the expression that a first component (or region, layer, part, etc.) is “on”, “connected with”, or “coupled with” a second component means that the first component is directly on, connected with, or coupled with the second component or means that a third component is interposed therebetween. Like reference numerals refer to like components. Also, in drawings, the thickness, ratio, and dimension of components are exaggerated for effectiveness of description of technical contents. The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting. As used herein, “a”, “an,” “the,” and “at least one” do not denote a limitation of quantity, and are intended to include both the singular and plural, unless the context clearly indicates otherwise. For example, “an element” has the same meaning as “at least one element,” unless the context clearly indicates otherwise. “At least one” is not to be construed as limiting “a” or “an.” “Or” means “and/or.” The term “and/or” includes one or more combinations of the associated listed items. The terms “first”, “second”, etc. are used to describe various components, but the components are not limited by the terms. The terms are used only to differentiate one component from another component. For example, without departing from the scope and spirit of the present disclosure, a first component may be referred to as a second component, and similarly, the second component may be referred to as the first component. The articles “a,” “an,” and “the” are singular in that they have a single referent, but the use of the singular form in the specification should not preclude the presence of more than one referent. Also, the terms “under”, “beneath”, “on”, “above”, etc. are used to describe a relationship between components illustrated in a drawing. The terms are relative and are described with reference to a direction indicated in the drawing. It will be understood that the terms “include”, “comprise”, “have”, etc. specify the presence of features, numbers, steps, operations, elements, or components, described in the specification, or a combination thereof, not precluding the presence or additional possibility of one or more other features, numbers, steps, operations, elements, or components or a combination thereof. Unless otherwise defined, all terms (including technical terms and scientific terms) used in this specification have the same meaning as commonly understood by those skilled in the art to which the present disclosure belongs. Furthermore, terms such as those defined in commonly used dictionaries should be interpreted as having a meaning consistent with the meaning in the context of the related technology, and should not be interpreted in ideal or overly formal meanings unless explicitly defined herein. Hereinafter, embodiments of the present disclosure will be described with reference to the accompanying drawings. FIG. 1 illustrates a display device, according to an embodiment of the present disclosure. Referring to FIG. 1, a portable terminal is illustrated as an example of a display device DD according to an embodiment of the present disclosure. The portable terminal may include a tablet PC, a smartphone, a personal digital assistant (“PDA”), a portable multimedia player (“PMP”), a game console, a wristwatch-type electronic device, and the like. However, the present disclosure is not limited thereto.
The present disclosure may be used for small and medium electronic devices such as a personal computer, a notebook computer, a kiosk, a car navigation unit, and a camera, in addition to large-sized electronic equipment such as a television or an outside billboard. The above examples are provided only as an embodiment, and it is obvious that the display device DD may be applied to any other electronic device(s) without departing from the concept of the present disclosure. As shown in FIG. 1, a display surface, on which a first image IM1 and a second image IM2 are displayed, is parallel to a plane defined by a first direction DR1 and a second direction DR2. The display device DD includes a plurality of areas separated on a display surface. The display surface includes a display area DA, in which the first image IM1 and the second image IM2 are displayed, and a non-display area NDA adjacent to the display area DA. The non-display area NDA may be referred to as a bezel area. For example, the display area DA may have a rectangular shape. The non-display area NDA surrounds the display area DA. Also, although not illustrated, for example, the display device DD may include a partially-curved shape. As a result, one area of the display area DA may have a curved shape. The display area DA of the display device DD includes a first display area DA1 and a second display area DA2. In a specific application program, the first image IM1 may be displayed on the first display area DA1, and the second image IM2 may be displayed on the second display area DA2. For example, the first image IM1 may be a video, and the second image IM2 may be a still image or text information having a long change period. According to an embodiment, the display device DD may drive the first display area DA1, in which the video is displayed, at a normal frequency or a frequency higher than the normal frequency, and may drive the second display area DA2, in which the still image is displayed, at a frequency lower than the normal frequency. The display device DD may reduce power consumption by lowering the operating frequency of the second display area DA2. The size of each of the first display area DA1 and the second display area DA2 may be a preset size, and may be changed by an application program. In an embodiment, when the still image is displayed in the first display area DA1 and the video is displayed in the second display area DA2, the first display area DA1 may be driven at a frequency lower than the normal frequency, and the second display area DA2 may be driven at the normal frequency or a frequency higher than the normal frequency. Besides, the display area DA may be divided into three or more display areas. An operating frequency of each of the display areas may be determined depending on the type (a still image or video) of an image displayed in each of the display areas. FIGS. 2A and 2B are perspective views of a display device DD2, according to an embodiment of the present disclosure. FIG. 2A illustrates the display device DD2 in an unfolded state. FIG. 2B illustrates the display device DD2 in a folded state. As shown in FIGS. 2A and 2B, the display device DD2 includes the display area DA and the non-display area NDA. The display device DD2 may display an image through the display area DA. The display area DA may include a plane defined by the first direction DR1 and the second direction DR2, in a state where the display device DD2 is unfolded.
The thickness direction of the display device DD2 may be parallel to a third direction DR3 crossing the first direction DR1 and the second direction DR2. Accordingly, the front surfaces (or upper surfaces) and the bottom surfaces (or lower surfaces) of the members constituting the display device DD2 may be defined based on the third direction DR3. The non-display area NDA may be referred to as a bezel area. For example, the display area DA may have a rectangular shape. The non-display area NDA surrounds the display area DA. The display area DA may include a first non-folding area NFA1, a folding area FA, and a second non-folding area NFA2. The folding area FA may be bent about a folding axis FX extending in the first direction DR1. When the display device DD2 is folded, the first non-folding area NFA1 and the second non-folding area NFA2 may face each other. Accordingly, in a state where the display device DD2 is fully folded, the display area DA may not be exposed to the outside, which may be referred to as “in-folding”. However, embodiments are not limited thereto, and the operation of the display device DD2 is not limited thereto. In an embodiment of the present disclosure, when the display device DD2 is folded, the first non-folding area NFA1 and the second non-folding area NFA2 may be opposite to each other. Accordingly, in a state where the display device DD2 is folded, the first non-folding area NFA1 may be exposed to the outside, which may be referred to as “out-folding”. The display device DD2 may perform only one operation of an in-folding operation or an out-folding operation. Alternatively, the display device DD2 may perform both the in-folding operation and the out-folding operation. In this case, the same area of the display device DD2, for example, the folding area FA, may be folded inwardly and outwardly. Alternatively, some areas of the display device DD2 may be folded inwardly, and other areas may be folded outwardly. One folding area and two non-folding areas are illustrated in FIGS. 2A and 2B, but the number of folding areas and the number of non-folding areas are not limited thereto. For example, the display device DD2 may include a plurality of non-folding areas, of which the number is greater than two, and a plurality of folding areas interposed between non-folding areas adjacent to one another. FIGS. 2A and 2B illustrate that the folding axis FX is parallel to the minor axis of the display device DD2. However, the present disclosure is not limited thereto. For example, the folding axis FX may extend in a direction parallel to the major axis of the display device DD2, for example, the second direction DR2. FIGS. 2A and 2B illustrate that the first non-folding area NFA1, the folding area FA, and the second non-folding area NFA2 may be sequentially arranged in the second direction DR2. However, the present disclosure is not limited thereto. For example, the first non-folding area NFA1, the folding area FA, and the second non-folding area NFA2 may be sequentially arranged in the first direction DR1. The plurality of display areas DA1 and DA2 may be defined in the display area DA of the display device DD2. FIG. 2A illustrates the two display areas DA1 and DA2 as an example. However, the number of display areas DA1 and DA2 is not limited thereto. The plurality of display areas DA1 and DA2 may include the first display area DA1 and the second display area DA2.
For example, the first display area DA1 may be an area where the first image IM1 is displayed, and the second display area DA2 may be an area in which the second image IM2 is displayed. For example, the first image IM1 may be a video, and the second image IM2 may be a still image or an image (text information or the like) having a long change period. The display device DD2 according to an embodiment may operate differently depending on an operating mode. The operating mode may include a single frequency mode and a multi-frequency mode. In the single frequency mode, the display device DD2 may drive both the first display area DA1 and the second display area DA2 at a normal frequency. In the multi-frequency mode, the display device DD2 according to an embodiment may drive the first display area DA1, where the first image IM1 is displayed, at a first operating frequency, and may drive the second display area DA2, where the second image IM2 is displayed, at a second operating frequency lower than the normal frequency. In one embodiment, the first operating frequency may be equal to or higher than the normal frequency. The size of each of the first display area DA1 and the second display area DA2 may be a preset size, and may be changed by an application program. In an embodiment, the first display area DA1 may correspond to the first non-folding area NFA1, and the second display area DA2 may correspond to the second non-folding area NFA2. In addition, a first portion of the folding area FA may correspond to the first display area DA1, and a second portion of the folding area FA may correspond to the second display area DA2. In an embodiment, the entire folding area FA may correspond to only one of the first display area DA1 and the second display area DA2. In an embodiment, the first display area DA1 may correspond to the first portion of the first non-folding area NFA1, and the second display area DA2 may correspond to the second portion of the first non-folding area NFA1, the folding area FA, and the second non-folding area NFA2. That is, the size of the second display area DA2 may be greater than the size of the first display area DA1. In an embodiment, the first display area DA1 may correspond to the first non-folding area NFA1, the folding area FA, and the first portion of the second non-folding area NFA2, and the second display area DA2 may be the second portion of the second non-folding area NFA2. That is, the size of the first display area DA1 may be greater than the size of the second display area DA2. As illustrated in FIG. 2B, in a state where the folding area FA is folded, the first display area DA1 may correspond to the first non-folding area NFA1, and the second display area DA2 may correspond to the folding area FA and the second non-folding area NFA2. FIGS. 2A and 2B illustrate the display device DD2 having one folding area, as an example of a display device. However, the present disclosure is not limited thereto. For example, the present disclosure may also be applied to a display device having two or more folding areas, a rollable display device, or a slidable display device. Hereinafter, the display device DD shown in FIG. 1 will be described as an example. However, the description of the display device DD shown in FIG. 1 may be identically applied to the display device DD2 shown in FIGS. 2A and 2B. FIG. 3A is a diagram for describing an operation of a display device in a single frequency mode. FIG. 3B is a diagram for describing an operation of a display device in a multi-frequency mode.
Referring to FIG. 3A, the first image IM1 displayed in the first display area DA1 may be a video. The second image IM2 displayed in the second display area DA2 may be a still image or an image (e.g., a keypad for manipulating a game) having a long change period. That is, the still image is not changed for a relatively long time, compared to the video. The first image IM1 displayed in the first display area DA1 shown in FIG. 1 and the second image IM2 displayed in the second display area DA2 are examples, and various images may be displayed on the display device DD. In a single frequency mode NFM, the operating frequencies of the first display area DA1 and the second display area DA2 of the display device DD are the same, i.e., a normal frequency. For example, the normal frequency may be 120 Hertz (Hz). In the single frequency mode NFM, 120 frames (i.e., images of the first to 120th frames F1 to F120) may be sequentially displayed for 1 second in the first display area DA1 and the second display area DA2 of the display device DD. Referring to FIG. 3B, in the multi-frequency mode MFM, the display device DD may set an operating frequency of the first display area DA1, in which the first image IM1 is displayed, as the first operating frequency, and may set an operating frequency of the second display area DA2, in which the second image IM2 is displayed, as a second operating frequency lower than the first operating frequency. In an embodiment, the first image IM1 may be a video, and the second image IM2 may be a still image. In an embodiment, the first operating frequency may be 120 Hz, and the second operating frequency may be 1 Hz. The first operating frequency and the second operating frequency may be variously changed. In the multi-frequency mode MFM, when the first operating frequency is 120 Hz and the second operating frequency is 1 Hz, a data signal corresponding to the first image IM1 may be provided to the display panel DP (see FIG. 4) in the first display area DA1 of the display device DD in each of the first to 120th frames F1 to F120. The second image IM2 may be displayed only in the first frame F1 in the second display area DA2, and an image may not be displayed in the remaining frames F2 to F120. The operation of the display device DD in the multi-frequency mode MFM will be described in detail later. FIG. 4 is a block diagram of a display device according to an embodiment of the present disclosure. Referring to FIG. 4, a display device DD includes a display panel DP, a driving controller 100, a data driving circuit 200, and a voltage generator 300. The driving controller 100 receives an image signal RGB and a control signal CTRL. The driving controller 100 generates an output image signal DATA by converting a data format of the image signal RGB so as to be suitable for the interface specification of the data driving circuit 200. The driving controller 100 outputs a scan control signal SCS, a data control signal DCS, and a light emitting driving signal ECS. The data driving circuit 200 receives the data control signal DCS and the output image signal DATA provided from the driving controller 100. The data driving circuit 200 converts the output image signal DATA into data signals and then outputs the data signals to a plurality of data lines DL1 to DLm to be described later. The data signals refer to analog voltages corresponding to a grayscale value of the output image signal DATA.
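Returning to the refresh behavior of FIGS. 3A and 3B: the frames in which each display area receives a data signal can be summarized with a small helper. The following Python sketch is an editor's illustration only (the naming is assumed, and it presumes the area frequency evenly divides the normal frequency):

```python
NORMAL_HZ = 120  # normal frequency: 120 frames per second

def receives_data_in_frame(area_hz: int, frame: int) -> bool:
    """True if an area driven at area_hz is given a data signal in frame F1, F2, ...

    An area at the normal frequency refreshes in every frame; an area at
    1 Hz refreshes only in frame F1 of each one-second group of 120 frames.
    """
    return (frame - 1) % (NORMAL_HZ // area_hz) == 0

assert all(receives_data_in_frame(120, f) for f in range(1, 121))         # DA1: F1..F120
assert [f for f in range(1, 121) if receives_data_in_frame(1, f)] == [1]  # DA2: only F1
```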
The display panel DP includes scan lines GIL1 to GILn, GCL1 to GCLn, and GWL1 to GWLn+1, emission control lines EML1 to EMLn, the data lines DL1 to DLm, and the pixels PX. The display panel DP may further include a scan driving circuit SD and an emission driving circuit EDC. In an embodiment, the scan driving circuit SD may be arranged on a first side of the display panel DP. The scan lines GIL1 to GILn, GCL1 to GCLn, and GWL1 to GWLn+1 extend from the scan driving circuit SD in the first direction DR1. The emission driving circuit EDC is arranged on a second side of the display panel DP. The emission control lines EML1 to EMLn extend from the emission driving circuit EDC in a direction opposite to the first direction DR1. The scan lines GIL1 to GILn, GCL1 to GCLn, and GWL1 to GWLn+1 and the emission control lines EML1 to EMLn are arranged to be spaced from one another in the second direction DR2. The data lines DL1 to DLm extend from the data driving circuit 200 in a direction opposite to the second direction DR2, and are arranged spaced from one another in the first direction DR1. In the example shown in FIG. 4, the scan driving circuit SD and the emission driving circuit EDC are arranged to face each other with the pixels PX interposed therebetween, but the present disclosure is not limited thereto. For example, the scan driving circuit SD and the emission driving circuit EDC may be positioned adjacent to each other on one of the first side and the second side of the display panel DP in another embodiment. In still another embodiment, the scan driving circuit SD and the emission driving circuit EDC may be implemented with one circuit. The plurality of pixels PX are electrically connected to the scan lines GIL1 to GILn, GCL1 to GCLn, and GWL1 to GWLn+1, the emission control lines EML1 to EMLn, and the data lines DL1 to DLm. Each of the plurality of pixels PX may be electrically connected to four scan lines and one emission control line. For example, as shown in FIG. 4, pixels PX in a first row each may be connected to the scan lines GIL1, GCL1, GWL1, and GWL2 and the emission control line EML1. Furthermore, pixels PX in a j-th row each may be connected to the scan lines GILj, GCLj, GWLj, and GWLj+1 and the emission control line EMLj. Each of the plurality of pixels PX includes a light emitting element ED (see FIG. 5) and a pixel circuit PXC (see FIG. 5) for controlling the light emission of the light emitting element ED. The pixel circuit PXC may include one or more transistors and one or more capacitors. The scan driving circuit SD and the emission driving circuit EDC may include transistors formed through the same process as the pixel circuit PXC. Each of the plurality of pixels PX receives a first driving voltage ELVDD, a second driving voltage ELVSS, an initialization voltage VINT, and an anode initialization voltage VAINT provided from the voltage generator 300. The scan driving circuit SD receives the scan control signal SCS provided from the driving controller 100. The scan driving circuit SD may output scan signals to the scan lines GIL1 to GILn, GCL1 to GCLn, and GWL1 to GWLn+1 in response to the scan control signal SCS. The circuit configuration and operation of the scan driving circuit SD will be described in detail later.
According to one embodiment, the driving controller 100 may divide the display panel DP into the first display area DA1 (see FIG. 1) and the second display area DA2 (see FIG. 1) based on the image signal RGB and the control signal CTRL, and may set an operating frequency of each of the first display area DA1 and the second display area DA2. For example, in a normal mode, the driving controller 100 drives the first display area DA1 and the second display area DA2 at a normal frequency (e.g., 120 Hz). In a multi-frequency mode, the driving controller 100 may drive the first display area DA1 at a first operating frequency (e.g., 120 Hz) and the second display area DA2 at a second operating frequency (e.g., 1 Hz). In an embodiment, in the multi-frequency mode, a first operating frequency of the first display area DA1 may be lower than or equal to a normal frequency, and a second operating frequency of the second display area DA2 may be lower than the normal frequency. However, the present disclosure is not limited thereto. For example, in the multi-frequency mode, the first operating frequency of the first display area DA1 and the second operating frequency of the second display area DA2 may be variously changed. The voltage generator 300 generates voltages to operate the display panel DP. In an embodiment, the voltage generator 300 generates the first driving voltage ELVDD, the second driving voltage ELVSS, the initialization voltage VINT, and the anode initialization voltage VAINT. The driving controller 100 according to an embodiment of the present disclosure may output a voltage control signal VCTRL for controlling an operation of the voltage generator 300. In an embodiment, the voltage generator 300 may change a voltage level of the anode initialization voltage VAINT in response to the voltage control signal VCTRL. In an embodiment, when the second display area DA2 (see FIG. 1) is driven at a second operating frequency lower than the normal frequency, the driving controller 100 may output the voltage control signal VCTRL such that the voltage level of the anode initialization voltage VAINT provided to the pixels PX of the second display area DA2 is changed. In this specification, it is described that the voltage generator 300 operates in response to the voltage control signal VCTRL provided from the driving controller 100, but the present disclosure is not limited thereto. In an embodiment, the voltage generator 300 may operate in response to a voltage control signal provided from various host devices such as an application processor, a graphic processor, a central processing unit (“CPU”), and the like. FIG. 5 is an equivalent circuit diagram of a pixel, according to an embodiment of the present disclosure. FIG. 5 illustrates an equivalent circuit diagram of a pixel PXij connected to the i-th data line DLi among the data lines DL1 to DLm, the j-th scan lines GILj, GCLj, and GWLj and the (j+1)-th scan line GWLj+1 among the scan lines GIL1 to GILn, GCL1 to GCLn, and GWL1 to GWLn+1, and the j-th emission control line EMLj among the emission control lines EML1 to EMLn, which are illustrated in FIG. 4. Each of the plurality of pixels PX shown in FIG. 4 may have the same circuit configuration as the equivalent circuit diagram of the pixel PXij shown in FIG. 5. Referring to FIG. 5, a pixel PXij according to an embodiment includes a pixel circuit PXC and at least one light emitting element ED. The pixel circuit PXC includes first to seventh transistors T1, T2, T3, T4, T5, T6, and T7 and a capacitor Cst.
In an embodiment, the light emitting element ED may be a light emitting diode. In an embodiment, it is described that one pixel PXij includes one light emitting element ED. The third and fourth transistors T3 and T4 among the first to seventh transistors T1 to T7 are N-type transistors using an oxide semiconductor as a semiconductor layer. Each of the first, second, fifth, sixth, and seventh transistors T1, T2, T5, T6, and T7 is a P-type transistor having a low-temperature polycrystalline silicon (“LTPS”) semiconductor layer. However, the present disclosure is not limited thereto, and all of the first to seventh transistors T1 to T7 may be P-type transistors or N-type transistors. In an embodiment, at least one of the first to seventh transistors T1 to T7 may be an N-type transistor, and the remaining transistors may be P-type transistors. Moreover, the circuit configuration of a pixel according to an embodiment of the present disclosure is not limited to FIG. 5. The pixel circuit PXC illustrated in FIG. 5 is only an example. For example, the configuration of the pixel circuit PXC may be variously modified and implemented. The scan lines GILj, GCLj, GWLj, and GWLj+1 may deliver scan signals GIj, GCj, GWj, and GWj+1, respectively. The emission control line EMLj may deliver an emission control signal EMj. The data line DLi delivers a data signal Di. The data signal Di may have a voltage level corresponding to the image signal RGB input to the display device DD (see FIG. 4). The first to third driving voltage lines VL1, VL2, and VL3 may deliver the first driving voltage ELVDD, the second driving voltage ELVSS, and the initialization voltage VINT to the pixel PXij, respectively. A voltage line AVL may deliver the anode initialization voltage VAINT. The first transistor T1 includes a first electrode SE connected to the first driving voltage line VL1 via the fifth transistor T5, a second electrode electrically connected to an anode of the light emitting element ED via the sixth transistor T6, and a gate electrode connected to one end of the capacitor Cst. The first transistor T1 may receive the data signal Di delivered by the data line DLi depending on the switching operation of the second transistor T2 and then may supply a driving current Id to the light emitting element ED. The second transistor T2 includes a first electrode connected to the data line DLi, a second electrode connected to the first electrode SE of the first transistor T1, and a gate electrode connected to the scan line GWLj. The second transistor T2 may be turned on depending on the scan signal GWj received through the scan line GWLj and then may deliver the data signal Di delivered from the data line DLi to the first electrode SE of the first transistor T1. The third transistor T3 includes a first electrode connected to the gate electrode of the first transistor T1, a second electrode connected to the second electrode of the first transistor T1, and a gate electrode connected to the scan line GCLj. The third transistor T3 may be turned on depending on the scan signal GCj received through the scan line GCLj, and thus, the gate electrode and the second electrode of the first transistor T1 may be connected, that is, the first transistor T1 may be diode-connected. The fourth transistor T4 includes a first electrode connected to the gate electrode of the first transistor T1, a second electrode connected to the third driving voltage line VL3 through which the initialization voltage VINT is supplied, and a gate electrode connected to the scan line GILj.
The fourth transistor T4 may be turned on depending on the scan signal GIj received through the scan line GILj and then may perform an initialization operation of initializing a voltage of the gate electrode of the first transistor T1 by supplying the initialization voltage VINT to the gate electrode of the first transistor T1. The fifth transistor T5 includes a first electrode connected to the first driving voltage line VL1, a second electrode connected to the first electrode SE of the first transistor T1, and a gate electrode connected to the emission control line EMLj. The sixth transistor T6 includes a first electrode connected to the second electrode of the first transistor T1, a second electrode connected to the anode of the light emitting element ED, and a gate electrode connected to the emission control line EMLj. The fifth transistor T5 and the sixth transistor T6 may be simultaneously turned on depending on the emission control signal EMj received through the emission control line EMLj. In this way, the first driving voltage ELVDD may be compensated through the diode-connected first transistor T1 and may be supplied to the light emitting element ED. The seventh transistor T7 includes a first electrode connected to the anode of the light emitting element ED, a second electrode connected to the voltage line AVL, and a gate electrode connected to the scan line GWLj+1. The seventh transistor T7 is turned on depending on the scan signal GWj+1 received through the scan line GWLj+1, and bypasses a current of the anode of the light emitting element ED to the voltage line AVL. As described above, one end of the capacitor Cst is connected to the gate electrode of the first transistor T1, and the other end of the capacitor Cst is connected to the first driving voltage line VL1. The cathode of the light emitting element ED may be connected to the second driving voltage line VL2 that delivers the second driving voltage ELVSS. A structure of the pixel PXij according to an embodiment is not limited to the structure shown in FIG. 5. The number of transistors included in one pixel PXij, the number of capacitors included in one pixel PXij, and the connection relationship thereof may be variously modified. FIG. 6 is a timing diagram for describing an operation of the pixel illustrated in FIG. 5. Hereinafter, an operation of a display device according to an embodiment will be described with reference to FIGS. 5 and 6. Referring to FIGS. 5 and 6, the scan signal GIj having a high level is provided through the scan line GILj during an initialization interval within one frame Fs. When the fourth transistor T4 is turned on in response to the scan signal GIj having a high level, the initialization voltage VINT is supplied to the gate electrode of the first transistor T1 through the fourth transistor T4 so as to initialize the first transistor T1. Next, when the scan signal GCj having a high level is supplied through the scan line GCLj during a data programming and compensation interval, the third transistor T3 is turned on. The first transistor T1 is diode-connected by the turned-on third transistor T3 and is forward-biased. At this time, when the scan signal GWj having a low level is supplied through the scan line GWLj, the second transistor T2 is turned on. In this case, a compensation voltage, which is obtained by reducing the voltage of the data signal Di supplied from the data line DLi by a threshold voltage of the first transistor T1, is applied to the gate electrode of the first transistor T1.
That is, a gate voltage applied to the gate electrode of the first transistor T1 may be a compensation voltage. As the first driving voltage ELVDD and the compensation voltage are applied to opposite ends of the capacitor Cst, respectively, a charge corresponding to a difference between the first driving voltage ELVDD and the compensation voltage may be stored in the capacitor Cst. In the meantime, the seventh transistor T7 is turned on in response to the scan signal GWj+1 having a low level that is delivered through the scan line GWLj+1. A part of the driving current Id may be drained through the seventh transistor T7 as a bypass current Ibp. When the light emitting element ED emits light under the condition that a minimum current of the first transistor T1 flows as a driving current Id for the purpose of displaying a black image, the black image may not be normally displayed. Accordingly, the seventh transistor T7 in the pixel PXij according to an embodiment of the present disclosure may drain (or disperse) a part of the minimum current of the first transistor T1 to a current path, which is different from a current path to the light emitting element ED, as the bypass current Ibp. Herein, the minimum current of the first transistor T1 means a current flowing under the condition that a gate-source voltage of the first transistor T1 is smaller than the threshold voltage, that is, the first transistor T1 is turned off. As a minimum driving current Id (e.g., a current of 10 picoamperes (pA) or less) is delivered to the light emitting element ED, with the first transistor T1 turned off, an image of black luminance is expressed. When the minimum driving current Id for displaying a black image flows, the influence of a bypass transfer of the bypass current Ibp may be great; on the other hand, when a large driving current Id for displaying an image such as a normal image or a white image flows, there may be almost no influence of the bypass current Ibp. Accordingly, when a driving current Id for displaying a black image flows, a light emitting current Ied of the light emitting element ED, which corresponds to a result of subtracting the bypass current Ibp drained through the seventh transistor T7 from the driving current Id, may have a minimum current amount to such an extent as to accurately express a black image. Accordingly, a contrast ratio may be improved by implementing an accurate black luminance image by using the seventh transistor T7. In an embodiment, the bypass signal is the scan signal GWj+1 having a low level, but is not necessarily limited thereto. The bypass current Ibp flowing from the anode of the light emitting element ED to the voltage line AVL may be adjusted depending on the voltage level of the anode initialization voltage VAINT provided through the voltage line AVL. Next, during a light emitting interval, the emission control signal EMj supplied from the emission control line EMLj is changed from a high level to a low level. During the light emitting interval, the fifth transistor T5 and the sixth transistor T6 are turned on by the emission control signal EMj having a low level. In this case, the driving current Id is generated depending on a voltage difference between the gate voltage of the gate electrode of the first transistor T1 and the first driving voltage ELVDD and is supplied to the light emitting element ED through the sixth transistor T6, and the light emitting current Ied flows through the light emitting element ED.
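As a worked example of the relation stated above for the black-image case (Ied = Id − Ibp), with hypothetical current values, since the description notes only that the minimum driving current is on the order of 10 pA or less:

```python
# Hypothetical values for illustration; only the relation Ied = Id - Ibp
# comes from the description above.
Id_pA = 10.0     # minimum driving current through T1 while displaying black
Ibp_pA = 9.0     # assumed bypass current drained to AVL through T7
Ied_pA = Id_pA - Ibp_pA
print(f"light emitting current Ied = {Ied_pA:.1f} pA")  # -> 1.0 pA
```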
FIG. 7 illustrates scan signals GI1 to GI3840 in a multi-frequency mode. Referring to FIGS. 1 and 7, in an embodiment, the scan signals GI1 to GI1920 correspond to the first display area DA1 of the display device DD. The scan signals GI1921 to GI3840 correspond to the second display area DA2 of the display device DD. In a multi-frequency mode, the frequency of each of the scan signals GI1 to GI1920 is 120 Hz, and the frequency of each of the scan signals GI1921 to GI3840 may be 1 Hz. The scan signals GI1 to GI1920 may be activated at a high level in each of the first to 120th frames F1 to F120. The scan signals GI1921 to GI3840 may be activated at a high level only in the first frame F1. Accordingly, the first display area DA1 in which a video is displayed may be driven in response to the scan signals GI1 to GI1920 having a normal frequency (e.g., 120 Hz). The second display area DA2 where a still image is displayed may be driven in response to the scan signals GI1921 to GI3840 having a low frequency (e.g., 1 Hz). Only the second display area DA2, where the still image is displayed, is driven at a low frequency, thereby reducing power consumption while deterioration of the display quality of the display device DD (see FIG. 1) is minimized. FIG. 7 illustrates only the scan signals GI1 to GI3840. However, similarly to the scan signals GI1 to GI3840, the scan driving circuit SD (see FIG. 4) and the emission driving circuit EDC (see FIG. 4) may generate scan signals GC1 to GC3840 and GW1 to GW3841 and emission control signals EM1 to EM3840. FIG. 8 illustrates scan signals and an emission control signal, which are provided to a j-th row, when a pixel in the j-th row is driven at a first operating frequency identical to a normal frequency. Referring to FIG. 8, when a pixel in the j-th row is driven at the first operating frequency identical to the normal frequency in the single frequency mode NFM, the scan signals GIj, GCj, GWj, and GWj+1 and the emission control signal EMj transition to an active level in each of the first to 120th frames F1 to F120. In an embodiment, in the case of the scan signals GIj and GCj, a high level is an active level. In the case of the scan signals GWj and GWj+1 and the emission control signal EMj, a low level is an active level. FIG. 9 illustrates scan signals and an emission control signal, which are provided to a j-th row, when a pixel in the j-th row is driven at a second operating frequency lower than a normal frequency. Referring to FIG. 9, when the j-th row pixel is driven at a second operating frequency (e.g., 1 Hz) lower than a normal frequency in the multi-frequency mode MFM, the scan signals GIj, GCj, GWj, and GWj+1 and the emission control signal EMj transition to an active level in the first frame F1. In an embodiment, in the case of the scan signals GIj and GCj, a high level is an active level. In the case of the scan signals GWj and GWj+1 and the emission control signal EMj, a low level is an active level. In each of the second to 120th frames F2 to F120, the scan signals GIj and GCj are maintained at a low level, which is an inactive level, and the scan signals GWj and GWj+1 and the emission control signal EMj transition to an active level. Returning to FIG. 5, a parasitic capacitance Cp may be present between the anode of the light emitting element ED and the scan line GILj.
As illustrated in FIG. 8, as the scan signal GIj transitions from a low level to a high level, and then again transitions from a high level to a low level in each of the first to 120th frames F1 to F120, a voltage level of the anode of the light emitting element ED may be changed due to the parasitic capacitance Cp. A change in a voltage level of the anode of the light emitting element ED leads to a change in the luminance of the light emitting element ED. As illustrated in FIG. 9, when the scan signal GIj is maintained at a low level in each of the second to 120th frames F2 to F120, there is little change in the voltage level of the anode of the light emitting element ED due to the parasitic capacitance Cp. When all the pixels PX of the display panel DP illustrated in FIG. 4 are driven at the same operating frequency, the change in luminance of the light emitting element ED due to the parasitic capacitance Cp may not be visually perceived by a user. However, when the pixels PX in the first display area DA1 are driven at the first operating frequency and the pixels PX in the second display area DA2 are driven at the second operating frequency, a luminance difference between the first display area DA1 and the second display area DA2 due to the parasitic capacitance Cp may be visually perceived by the user. FIG. 10 illustrates a luminance change of a first display area in a first frame and a second frame when the first display area is driven at a first operating frequency identical to a normal frequency. FIG. 11 illustrates a luminance change of a second display area in a first frame and a second frame when the second display area is driven at a second operating frequency lower than a normal frequency. As described in FIGS. 10 and 11, when the first display area DA1 is driven at a first operating frequency identical to a normal frequency, there is little change in luminance of the first display area DA1 between the first frame F1 and the second frame F2. However, when the second display area DA2 is driven at a second operating frequency lower than the normal frequency, the luminance of the second display area DA2 may be different in the first frame F1 and the second frame F2. A luminance difference LD may be visually perceived by a user. In particular, as illustrated in FIGS. 8 and 9, when the first display area DA1 is driven at 120 Hz and the second display area DA2 is driven at 1 Hz, the scan signal GIj is maintained at a low level in the second to 120th frames F2 to F120, and thus a difference in luminance between the first display area DA1 and the second display area DA2 may be visually perceived by the user. FIG. 12 is an embodiment of a diagram illustrating scan signals and an anode initialization voltage in a multi-frequency mode. FIG. 13 is a diagram conceptually illustrating a change in an anode initialization voltage according to a first display area and a second display area of a display device of FIG. 12. Referring to FIGS. 12 and 13, during the first frame F1 of the multi-frequency mode MFM, the scan signals GI1 to GI3840 may sequentially transition to a high level. During the second frame F2 of the multi-frequency mode MFM, the scan signals GI1 to GI1920 corresponding to the first display area DA1 may sequentially transition to a high level, and the scan signals GI1921 to GI3840 corresponding to the second display area DA2 may be maintained at a low level. In an embodiment, during the first frame F1, the anode initialization voltage VAINT provided to the voltage line AVL illustrated in FIG. 5 is maintained at a first voltage level V1.
FIG. 12 is an embodiment of a diagram illustrating scan signals and an anode initialization voltage in a multi-frequency mode. FIG. 13 is a diagram conceptually illustrating a change in an anode initialization voltage according to a first display area and a second display area of a display device of FIG. 12. Referring to FIGS. 12 and 13, during the first frame F1 of the multi-frequency mode MFM, the scan signals GI1 to GI3840 may sequentially transition to a high level. During the second frame F2 of the multi-frequency mode MFM, the scan signals GI1 to GI1920 corresponding to the first display area DA1 may sequentially transition to a high level, and the scan signals GI1921 to GI3840 corresponding to the second display area DA2 may be maintained at a low level. In an embodiment, during the first frame F1, the anode initialization voltage VAINT provided to the voltage line AVL illustrated in FIG. 5 is maintained at a first voltage level V1.

While the scan signals GI1 to GI1920 sequentially transition to a high level during the second frame F2, the anode initialization voltage VAINT is maintained at the first voltage level V1. While the scan signals GI1921 to GI3840 are maintained at a low level, the anode initialization voltage VAINT is maintained at a second voltage level V2. In an embodiment, the second voltage level V2 may be a higher voltage level than the first voltage level V1. For example, the first voltage level V1 may be −3.5 volts (V), and the second voltage level V2 may be −3 V.

As illustrated in FIGS. 4 and 11, in the second frame F2 of the multi-frequency mode MFM, the luminance difference LD between the first display area DA1 and the second display area DA2 of the display panel DP is generated because the voltage level of the anode of the light emitting element ED is changed based on whether the parasitic capacitance Cp is present. Accordingly, in the same manner as when the scan signals GI1 to GI1920 transition to a high level, the voltage level of the anode of the light emitting element ED may be changed by increasing the voltage level of the anode initialization voltage VAINT while the scan signals GI1921 to GI3840 are maintained at a low level. Accordingly, the luminance difference LD between the first display area DA1 and the second display area DA2 of the display panel DP may be effectively minimized.

FIG. 14 is another embodiment of a diagram illustrating scan signals and an anode initialization voltage in a multi-frequency mode. FIG. 15 is a diagram conceptually illustrating a change in an anode initialization voltage according to a first display area and a second display area of a display device of FIG. 14. Referring to FIGS. 14 and 15, during the first frame F1 of the multi-frequency mode MFM, the scan signals GI1 to GI3840 may sequentially transition to a high level. During the second frame F2 of the multi-frequency mode MFM, the scan signals GI1 to GI1920 corresponding to the first display area DA1 may sequentially transition to a high level, and the scan signals GI1921 to GI3840 corresponding to the second display area DA2 may be maintained at a low level.

In an embodiment, while the scan signals GI1 to GI1918 corresponding to the first display area DA1 sequentially transition to a high level, the anode initialization voltage VAINTa provided to the voltage line AVL shown in FIG. 5 is maintained at the first voltage level V1. While some scan signals GI1919 and GI1920 corresponding to the first display area DA1 and some scan signals GI1921 and GI1922 corresponding to the second display area DA2 are driven, the anode initialization voltage VAINTa increases step by step from the first voltage level V1 to the second voltage level V2. That is, while the scan signals GI1919 and GI1920 corresponding to a part of the first display area DA1 adjacent to the second display area DA2 and the scan signals GI1921 and GI1922 corresponding to a part of the second display area DA2 adjacent to the first display area DA1 are driven, the anode initialization voltage VAINTa is changed step by step from the first voltage level V1 to the second voltage level V2. While the scan signals GI1923 to GI3840 corresponding to the second display area DA2 are maintained at a low level, the anode initialization voltage VAINTa is maintained at the second voltage level V2. In an embodiment, the second voltage level V2 may be a higher voltage level than the first voltage level V1.
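A minimal sketch of the stepwise profile of FIGS. 14 and 15 follows, assuming a linear step over the four boundary rows GI1919 to GI1922 and the example levels given above; the function name and equal step sizes are assumptions, not the disclosed implementation.

```python
# Minimal sketch (assumed parameterization): VAINTa is held at V1 for most of
# DA1, ramped in equal steps across the rows straddling the DA1/DA2 boundary,
# and held at V2 for the remaining DA2 rows, as in FIGS. 14 and 15.
V1, V2 = -3.5, -3.0            # volts, example levels from the description
RAMP_ROWS = range(1919, 1923)  # boundary rows GI1919..GI1922

def vaint_for_row(row: int) -> float:
    if row < RAMP_ROWS.start:
        return V1                               # deep in DA1
    if row >= RAMP_ROWS.stop:
        return V2                               # deep in DA2
    step = (V2 - V1) / len(RAMP_ROWS)           # equal step per boundary row
    return V1 + step * (row - RAMP_ROWS.start + 1)

for r in (1918, 1919, 1920, 1921, 1922, 1923):
    print(r, round(vaint_for_row(r), 3))        # -3.5 -> -3.375 ... -> -3.0
```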
A sharp luminance difference in the boundary area between the first display area DA1 and the second display area DA2 may be reduced because the voltage level of the anode initialization voltage VAINTa is changed step by step from the first voltage level V1 to the second voltage level V2 in the boundary area where the first display area DA1 and the second display area DA2 meet.

In the example shown in FIGS. 12 to 15, the second voltage level V2 is described as being higher than the first voltage level V1 as an example, but the present disclosure is not limited thereto. In another embodiment, when the second display area DA2 is driven at a second operating frequency lower than the normal frequency, the second voltage level V2 of the anode initialization voltage VAINTa may be lower than the first voltage level V1.

FIG. 16A illustrates an image displayed in a first display area and a second display area when an anode initialization voltage having the same voltage level is provided to the first display area and the second display area of a display device. When the anode initialization voltage VAINT (see FIG. 5) having the same voltage level is provided to the first display area DA1 and the second display area DA2 of the display device DD, even though the same image signal is provided to the first display area DA1 and the second display area DA2, images displayed in the first display area DA1 and the second display area DA2 may be displayed with different luminance or color.

FIG. 16B illustrates an image displayed in a first display area and a second display area when anode initialization voltages having different voltage levels are provided to the first display area and the second display area of a display device, respectively. In the case where the anode initialization voltage VAINT having a first voltage level is provided to the first display area DA1 of the display device DD, and the anode initialization voltage VAINT having a second voltage level different from the first voltage level is provided to the second display area DA2, when the same image signal is provided to the first display area DA1 and the second display area DA2, an image displayed in the first display area DA1 and the second display area DA2 may have the same luminance and color.

FIG. 17 is a block diagram of a display device, according to another embodiment of the present disclosure. Referring to FIG. 17, a display device DD-1 includes the display panel DP, the driving controller 100, the data driving circuit 200, and the voltage generator 300. The display device DD-1 shown in FIG. 17 has a configuration similar to the display device DD shown in FIG. 4. The same reference numerals are used for the same components, and additional descriptions are omitted to avoid redundancy.

The display panel DP may be divided into the first display area DA1 and the second display area DA2. First pixels PX1 arranged from a first row to a j-th row may correspond to the first display area DA1. Second pixels PX2 arranged from a k-th row to an n-th row may correspond to the second display area DA2. Herein, each of 'j', 'k', and 'n' may be a natural number, and 'k' may satisfy k = j + 1. The first pixels PX1 are electrically connected to the scan lines GIL1 to GILj, GCL1 to GCLj, and GWL1 to GWLj+1, the emission control lines EML1 to EMLj, and the data lines DL1 to DLm. Each of the first pixels PX1 may be electrically connected to four scan lines and one emission control line.
For example, as shown in FIG. 17, pixels in a first row may be connected to the scan lines GIL1, GCL1, GWL1, and GWL2 and the emission control line EML1. Furthermore, pixels in the j-th row may be connected to the scan lines GILj, GCLj, GWLj, and GWLj+1 and the emission control line EMLj. The second pixels PX2 are electrically connected to the scan lines GILk to GILn, GCLk to GCLn, and GWLk to GWLn+1, the emission control lines EMLk to EMLn, and the data lines DL1 to DLm. Each of the plurality of second pixels PX2 may be electrically connected to four scan lines and one emission control line. For example, as illustrated in FIG. 17, pixels in the k-th row may be connected to the scan lines GILk, GCLk, GWLk, and GWLk+1 and the emission control line EMLk. Also, pixels in the n-th row may be connected to the scan lines GILn, GCLn, GWLn, and GWLn+1 and the emission control line EMLn.

In an embodiment, the first pixels PX1 may be electrically connected to a first voltage line AVL1. The second pixels PX2 may be electrically connected to a second voltage line AVL2. The voltage generator 300 generates the first driving voltage ELVDD, the second driving voltage ELVSS, the initialization voltage VINT, a first anode initialization voltage VAINT1, and a second anode initialization voltage VAINT2. The first anode initialization voltage VAINT1 may be provided to the first pixels PX1 through the first voltage line AVL1. The second anode initialization voltage VAINT2 may be provided to the second pixels PX2 through the second voltage line AVL2. The driving controller 100 outputs the voltage control signal VCTRL for setting a voltage level of each of the first anode initialization voltage VAINT1 and the second anode initialization voltage VAINT2. The voltage generator 300 may change the voltage level of each of the first anode initialization voltage VAINT1 and the second anode initialization voltage VAINT2 in response to the voltage control signal VCTRL.

FIG. 18 is an equivalent circuit diagram of a pixel according to another embodiment of the present disclosure. FIG. 18 illustrates an equivalent circuit diagram of a first pixel PX1ij connected to the i-th data line DLi among the data lines DL1 to DLm, the j-th scan lines GILj, GCLj, and GWLj and the (j+1)-th scan line GWLj+1 among the scan lines GIL1 to GILj, GCL1 to GCLj, and GWL1 to GWLj+1, and the j-th emission control line EMLj among the emission control lines EML1 to EMLj, which are illustrated in FIG. 17. The first pixel PX1ij includes a circuit configuration similar to the pixel PXij shown in FIG. 5. The same reference numerals are used for the same components, and additional descriptions are omitted to avoid redundancy. The seventh transistor T7 includes a first electrode connected to the anode of the light emitting element ED, a second electrode connected to the first voltage line AVL1, and a gate electrode connected to the scan line GWLj+1. The seventh transistor T7 is turned on depending on the scan signal GWj+1 received through the scan line GWLj+1, and bypasses a current Ibp of the anode of the light emitting element ED to the first voltage line AVL1.
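The wiring rule of FIG. 17 described above can be restated as a small helper. The function and its return format are hypothetical, introduced only to make the indexing concrete: four scan lines and one emission line per row, and the AVL1/AVL2 split at row k = j + 1.

```python
# Hypothetical helper (names and dict layout are illustrative): row r uses
# scan lines GILr, GCLr, GWLr, GWLr+1 and emission line EMLr; DA1 rows
# (r <= j) initialize through AVL1 and DA2 rows (r > j) through AVL2.
def row_connections(r: int, j: int, n: int) -> dict:
    if not 1 <= r <= n:
        raise ValueError("row out of range")
    area = "DA1" if r <= j else "DA2"
    return {
        "area": area,
        "scan_lines": [f"GIL{r}", f"GCL{r}", f"GWL{r}", f"GWL{r + 1}"],
        "emission_line": f"EML{r}",
        "anode_init_line": "AVL1" if area == "DA1" else "AVL2",
    }

print(row_connections(1920, j=1920, n=3840)["anode_init_line"])  # AVL1
print(row_connections(1921, j=1920, n=3840)["anode_init_line"])  # AVL2
```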
FIG. 19 is an equivalent circuit diagram of a pixel, according to still another embodiment of the present disclosure. FIG. 19 illustrates an equivalent circuit diagram of a second pixel PX2ik connected to the i-th data line DLi among the data lines DL1 to DLm, the k-th scan lines GILk, GCLk, and GWLk and the (k+1)-th scan line GWLk+1 among the scan lines GILk to GILn, GCLk to GCLn, and GWLk to GWLn+1, and the k-th emission control line EMLk among the emission control lines EMLk to EMLn, which are illustrated in FIG. 17. The second pixel PX2ik includes a circuit configuration similar to the pixel PXij shown in FIG. 5. The same reference numerals are used for the same components, and additional descriptions are omitted to avoid redundancy. The seventh transistor T7 includes a first electrode connected to the anode of the light emitting element ED, a second electrode connected to the second voltage line AVL2, and a gate electrode connected to the scan line GWLk+1. The seventh transistor T7 is turned on depending on the scan signal GWk+1 received through the scan line GWLk+1, and bypasses a current Ibp of the anode of the light emitting element ED to the second voltage line AVL2.

FIGS. 20 to 22 are diagrams illustrating changes in the first anode initialization voltage VAINT1 and the second anode initialization voltage VAINT2 according to an operating mode. Referring to FIGS. 17, 20, 21, and 22, the driving controller 100 may output an output image signal DATA in synchronization with a vertical synchronization signal VSYNC included in the control signal CTRL. Furthermore, the driving controller 100 may output the voltage control signal VCTRL for changing a voltage level of each of the first anode initialization voltage VAINT1 and the second anode initialization voltage VAINT2 in synchronization with the vertical synchronization signal VSYNC.

In the following description, during the single frequency mode NFM, the driving controller 100 drives the first pixels PX1 in the first display area DA1 and the second pixels PX2 in the second display area DA2 at the first operating frequency. In an embodiment, the first operating frequency may be a reference frequency. During a low frequency mode (LFM1, LFM2), the driving controller 100 may drive the first pixels PX1 in the first display area DA1 and the second pixels PX2 in the second display area DA2 at an operating frequency lower than the first operating frequency. During a multi-frequency mode (MFM1, MFM2), the driving controller 100 may drive the first pixels PX1 in the first display area DA1 at the first operating frequency, and may drive the second pixels PX2 in the second display area DA2 at an operating frequency lower than the first operating frequency. The modes and example frequencies are summarized in the sketch that follows.
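For reference, the mode naming used in the description of FIGS. 20 to 22 can be tabulated. The frequency pairs are the example values given in the text; the table itself is only an illustrative aid.

```python
# Mode summary for FIGS. 20-22 (frequencies are the example values from the
# text; this dict is an illustrative aid, not a structure from the
# disclosure). Tuples are (DA1 frequency, DA2 frequency) in Hz.
MODES = {
    "NFM":  (120, 120),  # single frequency mode: both areas at the reference
    "LFM1": (60, 60),    # first low frequency mode
    "LFM2": (30, 30),    # second low frequency mode
    "MFM1": (120, 60),   # first multi-frequency mode: only DA2 lowered
    "MFM2": (120, 30),   # second multi-frequency mode
    "MFM3": (60, 30),    # third multi-frequency mode (FIG. 22)
    "MFM4": (30, 15),    # fourth multi-frequency mode (FIG. 22)
}
for mode, (f1, f2) in MODES.items():
    print(f"{mode}: DA1 at {f1} Hz, DA2 at {f2} Hz")
```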
FIG. 20 illustrates changes in the first anode initialization voltage VAINT1 and the second anode initialization voltage VAINT2 in a single frequency mode and a low frequency mode. In FIG. 20, the first to fourth frames F1 to F4 correspond to the single frequency mode NFM; the fifth to eighth frames F5 to F8 correspond to the first low frequency mode LFM1; and the ninth to nineteenth frames F9 to F19 correspond to the second low frequency mode LFM2.

Referring to FIGS. 17 and 20, during the single frequency mode NFM, both the first pixels PX1 in the first display area DA1 of the display panel DP and the second pixels PX2 in the second display area DA2 of the display panel DP may be driven at a first operating frequency. Driving the first pixels PX1 and the second pixels PX2 at the first operating frequency means that each of the frequencies of the scan signals GI1 to GIn, GC1 to GCn, and GW1 to GWn+1 and the emission control signals EM1 to EMn is the first operating frequency. In the single frequency mode NFM, the driving controller 100 may output the output image signal DATA in synchronization with the vertical synchronization signal VSYNC. "D" of the output image signal DATA denotes a valid data signal having a predetermined grayscale level corresponding to the image signal RGB. In the single frequency mode NFM, each of the first anode initialization voltage VAINT1 and the second anode initialization voltage VAINT2 may be maintained at a first voltage level Va.

In the first low frequency mode LFM1, the first pixels PX1 in the first display area DA1 and the second pixels PX2 in the second display area DA2 may be driven at a second operating frequency lower than the first operating frequency of the single frequency mode NFM. In an embodiment, when the first operating frequency is 120 Hz, the second operating frequency may be 60 Hz. The driving controller 100 may output the valid data signal "D" as the output image signal DATA during some frames (i.e., the fifth and seventh frames F5 and F7) in the first low frequency mode LFM1, and may output a bias signal "B" as the output image signal DATA during some other frames (i.e., the sixth and eighth frames F6 and F8) in the first low frequency mode LFM1. The bias signal "B" may correspond to a predetermined voltage level for initializing the first electrode SE of the first transistor T1 illustrated in FIG. 18. The bias signal "B" may be referred to as an "invalid data signal" so as to be distinguished from the valid data signal "D". In another embodiment, the driving controller 100 may not output the bias signal "B" as the output image signal DATA in the sixth and eighth frames F6 and F8. In this case, in the sixth and eighth frames F6 and F8, the output image signal DATA may be an invalid data signal (e.g., a data signal corresponding to a black grayscale).

During some frames (i.e., the fifth and seventh frames F5 and F7) in the first low frequency mode LFM1, each of the first anode initialization voltage VAINT1 and the second anode initialization voltage VAINT2 may be maintained at the first voltage level Va. During some other frames (i.e., the sixth and eighth frames F6 and F8) in the first low frequency mode LFM1, each of the first anode initialization voltage VAINT1 and the second anode initialization voltage VAINT2 may be changed to a second voltage level Vb. In an embodiment, the second voltage level Vb of each of the first anode initialization voltage VAINT1 and the second anode initialization voltage VAINT2 is a voltage level lower than the first voltage level Va.
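The alternation of valid data "D" and bias "B" frames in FIG. 20 follows a regular pattern; a sketch of that pattern appears below. The modulo rule is an assumption consistent with the frames listed above (it also covers the 30 Hz LFM2 pattern described next), not a quoted algorithm from the disclosure.

```python
# Illustrative scheduler (assumption consistent with FIG. 20): at refresh
# ratio N = reference frequency / operating frequency, a valid data signal
# "D" is written every N-th frame and the bias signal "B" in between.
def frame_signal(frame: int, first_valid_frame: int, ratio: int) -> str:
    """frame is 1-based; e.g. LFM1 at 60 Hz vs a 120 Hz reference -> ratio 2."""
    return "D" if (frame - first_valid_frame) % ratio == 0 else "B"

# LFM1 (ratio 2) starting at frame F5: F5=D, F6=B, F7=D, F8=B
print([frame_signal(f, 5, 2) for f in range(5, 9)])   # ['D', 'B', 'D', 'B']
# LFM2 (ratio 4) starting at frame F9: F9=D, F10..F12=B, F13=D
print([frame_signal(f, 9, 4) for f in range(9, 14)])  # ['D','B','B','B','D']
```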
A parasitic capacitance Cpa may be present between the anode of the light emitting element ED shown in FIG. 18 and the scan line GWLj+1. In the example shown in FIG. 9, when a voltage level of an anode terminal of the light emitting element ED is changed by a voltage level change of the scan signal GWj+1 delivered through the scan line GWLj+1 in a section (e.g., the second to 120th frames F2 to F120) where the scan signals GIj and GCj are maintained at a low level, the light emitting element ED may emit light. Such undesired light emission may degrade display quality.

Therefore, in an embodiment, the voltage level change of the anode terminal of the light emitting element ED may be minimized by changing the voltage level of each of the first anode initialization voltage VAINT1 and the second anode initialization voltage VAINT2 to the second voltage level Vb lower than the first voltage level Va during frames (i.e., the sixth and eighth frames F6 and F8) where the valid data signal "D" is not provided. In an embodiment, the first voltage level Va may be −4.1 V, and the second voltage level Vb may be −4.2 V. In another embodiment, the second voltage level Vb of each of the first anode initialization voltage VAINT1 and the second anode initialization voltage VAINT2 may be a voltage level higher than the first voltage level Va. The first voltage level Va and the second voltage level Vb of each of the first anode initialization voltage VAINT1 and the second anode initialization voltage VAINT2 may be changed to be suitable for the characteristics of the display panel DP.

In the second low frequency mode LFM2, the first pixels PX1 in the first display area DA1 and the second pixels PX2 in the second display area DA2 may be driven at a third operating frequency lower than the first operating frequency of the single frequency mode NFM. In an embodiment, when the first operating frequency is 120 Hz, the third operating frequency may be 30 Hz. The driving controller 100 may output the valid data signal "D" as the output image signal DATA during some frames (i.e., the ninth, thirteenth, and seventeenth frames F9, F13, and F17) in the second low frequency mode LFM2, and may output the bias signal "B" as the output image signal DATA during some other frames (i.e., the tenth, eleventh, twelfth, fourteenth, fifteenth, sixteenth, eighteenth, and nineteenth frames F10, F11, F12, F14, F15, F16, F18, and F19) in the second low frequency mode LFM2. The bias signal "B" may correspond to a predetermined voltage level for initializing the first electrode SE of the first transistor T1 shown in FIG. 18 and the first electrode SE of the first transistor T1 shown in FIG. 19.

During some frames (i.e., the ninth, thirteenth, and seventeenth frames F9, F13, and F17) in the second low frequency mode LFM2, each of the first anode initialization voltage VAINT1 and the second anode initialization voltage VAINT2 may be maintained at the first voltage level Va. During some other frames (i.e., the tenth, eleventh, twelfth, fourteenth, fifteenth, sixteenth, eighteenth, and nineteenth frames F10, F11, F12, F14, F15, F16, F18, and F19) in the second low frequency mode LFM2, each of the first anode initialization voltage VAINT1 and the second anode initialization voltage VAINT2 may be changed to the second voltage level Vb. In an embodiment, the second voltage level Vb of each of the first anode initialization voltage VAINT1 and the second anode initialization voltage VAINT2 is a voltage level lower than the first voltage level Va.

FIG. 21 illustrates changes in the first anode initialization voltage VAINT1 and the second anode initialization voltage VAINT2 in a single frequency mode and a multi-frequency mode. In FIG. 21, the first to fourth frames F1 to F4 correspond to the single frequency mode NFM; the fifth to eighth frames F5 to F8 correspond to a first multi-frequency mode MFM1; and the ninth to nineteenth frames F9 to F19 correspond to a second multi-frequency mode MFM2.
Referring to FIGS. 17 and 21, during the single frequency mode NFM, both the first pixels PX1 in the first display area DA1 of the display panel DP and the second pixels PX2 in the second display area DA2 of the display panel DP may be driven at a first operating frequency. In the single frequency mode NFM, the driving controller 100 may output the output image signal DATA in synchronization with the vertical synchronization signal VSYNC. "D" of the output image signal DATA denotes a valid data signal having a predetermined grayscale level corresponding to the image signal RGB. In the single frequency mode NFM, each of the first anode initialization voltage VAINT1 and the second anode initialization voltage VAINT2 may be maintained at a first voltage level Va.

In the first multi-frequency mode MFM1, the first pixels PX1 in the first display area DA1 may be driven at the first operating frequency, and the second pixels PX2 in the second display area DA2 may be driven at a second operating frequency lower than the first operating frequency. In an embodiment, when the first operating frequency is 120 Hz, the second operating frequency may be 60 Hz. The driving controller 100 may output the valid data signal "D" as the output image signal DATA during some frames (i.e., the fifth and seventh frames F5 and F7) in the first multi-frequency mode MFM1. The driving controller 100 may sequentially output the valid data signal "D" and the bias signal "B" as the output image signal DATA during each of some other frames (i.e., the sixth and eighth frames F6 and F8) in the first multi-frequency mode MFM1. During each of the sixth and eighth frames F6 and F8, the valid data signal "D" may be provided to the first pixels PX1 corresponding to the first display area DA1, and the bias signal "B" may be provided to the second pixels PX2 corresponding to the second display area DA2. That is, the first pixels PX1 corresponding to the first display area DA1 may receive the valid data signal "D" during all frames (i.e., the fifth to eighth frames F5 to F8) in the first multi-frequency mode MFM1. The second pixels PX2 corresponding to the second display area DA2 may receive the valid data signal "D" during the fifth and seventh frames F5 and F7 in the first multi-frequency mode MFM1, and may receive the bias signal "B" during the sixth and eighth frames F6 and F8 in the first multi-frequency mode MFM1.

In the first multi-frequency mode MFM1, the first pixels PX1 corresponding to the first display area DA1 are driven at the first operating frequency, and thus the first anode initialization voltage VAINT1 is maintained at the first voltage level Va. During some frames (i.e., the fifth and seventh frames F5 and F7) in the first multi-frequency mode MFM1, the second anode initialization voltage VAINT2 is maintained at the first voltage level Va. During some other frames (i.e., the sixth and eighth frames F6 and F8) in the first multi-frequency mode MFM1, the second anode initialization voltage VAINT2 may be changed to the second voltage level Vb. In an embodiment, the second voltage level Vb of the second anode initialization voltage VAINT2 is a voltage level lower than the first voltage level Va. In an embodiment, during an inactive level of the vertical synchronization signal VSYNC (i.e., a vertical blank section), the second anode initialization voltage VAINT2 may be changed from the first voltage level Va to the second voltage level Vb.
In the second multi-frequency mode MFM2, the first pixels PX1 in the first display area DA1 may be driven at the first operating frequency, and the second pixels PX2 in the second display area DA2 may be driven at a third operating frequency lower than the first operating frequency. In an embodiment, when the first operating frequency is 120 Hz, the third operating frequency may be 30 Hz. The driving controller 100 may output the valid data signal "D" as the output image signal DATA during some frames (i.e., the ninth, thirteenth, and seventeenth frames F9, F13, and F17) in the second multi-frequency mode MFM2, and may alternately output the valid data signal "D" and the bias signal "B" as the output image signal DATA during each of some other frames (i.e., the tenth, eleventh, twelfth, fourteenth, fifteenth, sixteenth, eighteenth, and nineteenth frames F10, F11, F12, F14, F15, F16, F18, and F19) in the second multi-frequency mode MFM2. That is, the first pixels PX1 corresponding to the first display area DA1 receive the valid data signal "D" during all frames (i.e., the ninth to nineteenth frames F9 to F19) in the second multi-frequency mode MFM2. The second pixels PX2 corresponding to the second display area DA2 may receive the valid data signal "D" during the ninth, thirteenth, and seventeenth frames F9, F13, and F17 in the second multi-frequency mode MFM2, and may receive the bias signal "B" during the tenth, eleventh, twelfth, fourteenth, fifteenth, sixteenth, eighteenth, and nineteenth frames F10, F11, F12, F14, F15, F16, F18, and F19.

In the second multi-frequency mode MFM2, the first anode initialization voltage VAINT1 is maintained at the first voltage level Va. During some frames (i.e., the ninth, thirteenth, and seventeenth frames F9, F13, and F17) in the second multi-frequency mode MFM2, the second anode initialization voltage VAINT2 may be maintained at the first voltage level Va. During some other frames (i.e., the tenth, eleventh, twelfth, fourteenth, fifteenth, sixteenth, eighteenth, and nineteenth frames F10, F11, F12, F14, F15, F16, F18, and F19) in the second multi-frequency mode MFM2, the second anode initialization voltage VAINT2 may be changed to the second voltage level Vb. In an embodiment, the second voltage level Vb of the second anode initialization voltage VAINT2 is a voltage level lower than the first voltage level Va.

In the example shown in FIG. 21, in the single frequency mode NFM, the first multi-frequency mode MFM1, and the second multi-frequency mode MFM2, in each of which the first pixels PX1 corresponding to the first display area DA1 are driven at the first operating frequency, the first anode initialization voltage VAINT1 may be maintained at the first voltage level Va. During each of the frames F1 to F5, F7, F9, F13, and F17, during which the valid data signal "D" is provided as the output image signal DATA to the second pixels PX2 corresponding to the second display area DA2, the second anode initialization voltage VAINT2 may be maintained at the first voltage level Va. During the frames F6, F8, F10, F11, F12, F14, F15, F16, F18, and F19, during which the bias signal "B" is provided as the output image signal DATA to the second pixels PX2 corresponding to the second display area DA2, the second anode initialization voltage VAINT2 may be changed to the second voltage level Vb.
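The rule of FIG. 21 reduces to: hold an area's anode initialization voltage at Va whenever that area receives the valid data signal "D", and switch it to Vb whenever that area receives the bias signal "B". A minimal sketch follows; the levels and the rule come from the description above, while the scheduling expression is an assumption.

```python
# Minimal sketch of the FIG. 21 rule (levels Va/Vb and the "Vb when the area
# receives the bias signal" rule follow the description; the D/B schedule
# expression is an assumption consistent with the frames listed above).
Va, Vb = -4.1, -4.2  # volts, example levels from the description

def vaint_level(receives_valid_data: bool) -> float:
    # Va while the area is written with "D", Vb while it is written with "B".
    return Va if receives_valid_data else Vb

# MFM2: DA1 at 120 Hz always gets "D"; DA2 at 30 Hz gets "D" every 4th frame.
for frame in range(9, 14):
    da2_gets_d = (frame - 9) % 4 == 0
    print(frame, "VAINT1:", vaint_level(True), "VAINT2:", vaint_level(da2_gets_d))
```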
FIG. 22 illustrates changes in the first anode initialization voltage VAINT1 and the second anode initialization voltage VAINT2 in a single frequency mode and a multi-frequency mode. In FIG. 22, the first to fourth frames F1 to F4 correspond to the single frequency mode NFM; the fifth to eighth frames F5 to F8 correspond to a third multi-frequency mode MFM3; and the ninth to nineteenth frames F9 to F19 correspond to a fourth multi-frequency mode MFM4.

Referring to FIGS. 17 and 22, during the single frequency mode NFM, both the first pixels PX1 in the first display area DA1 of the display panel DP and the second pixels PX2 in the second display area DA2 of the display panel DP may be driven at a first operating frequency. In the single frequency mode NFM, the driving controller 100 may output the output image signal DATA in synchronization with the vertical synchronization signal VSYNC. "D" of the output image signal DATA denotes a valid data signal having a predetermined grayscale level corresponding to the image signal RGB. In the single frequency mode NFM, each of the first anode initialization voltage VAINT1 and the second anode initialization voltage VAINT2 may be maintained at a first voltage level Va.

In the third multi-frequency mode MFM3, the first pixels PX1 in the first display area DA1 may be driven at a second operating frequency lower than a first operating frequency, and the second pixels PX2 in the second display area DA2 may be driven at a third operating frequency lower than the second operating frequency. In an embodiment, when the first operating frequency is 120 Hz, the second operating frequency may be 60 Hz, and the third operating frequency may be 30 Hz. The driving controller 100 outputs the valid data signal "D" as the output image signal DATA during the fifth frame F5 in the third multi-frequency mode MFM3. The driving controller 100 may sequentially output the valid data signal "D" and the bias signal "B" as the output image signal DATA during the seventh frame F7 in the third multi-frequency mode MFM3. The driving controller 100 may output the bias signal "B" as the output image signal DATA during the sixth and eighth frames F6 and F8 in the third multi-frequency mode MFM3. That is, the first pixels PX1 corresponding to the first display area DA1 receive the valid data signal "D" during the fifth and seventh frames F5 and F7 in the third multi-frequency mode MFM3. The second pixels PX2 corresponding to the second display area DA2 may receive the valid data signal "D" during the fifth frame F5 in the third multi-frequency mode MFM3 and may receive the bias signal "B" during the sixth to eighth frames F6 to F8 in the third multi-frequency mode MFM3.

During the fifth and seventh frames F5 and F7 in the third multi-frequency mode MFM3, the first anode initialization voltage VAINT1 is maintained at the first voltage level Va. During the sixth and eighth frames F6 and F8, the first anode initialization voltage VAINT1 is changed to the second voltage level Vb. During the fifth frame F5 in the third multi-frequency mode MFM3, the second anode initialization voltage VAINT2 is maintained at the first voltage level Va. During the sixth to eighth frames F6 to F8 in the third multi-frequency mode MFM3, the second anode initialization voltage VAINT2 may be changed to the second voltage level Vb. In an embodiment, the second voltage level Vb of the second anode initialization voltage VAINT2 is a voltage level lower than the first voltage level Va.
In an embodiment, during an inactive level of the vertical synchronization signal VSYNC (i.e., a vertical blank section), the second anode initialization voltage VAINT2 may be changed from the first voltage level Va to the second voltage level Vb.

In the fourth multi-frequency mode MFM4, the first pixels PX1 in the first display area DA1 may be driven at a third operating frequency lower than a first operating frequency, and the second pixels PX2 in the second display area DA2 may be driven at a fourth operating frequency lower than the third operating frequency. In an embodiment, when the first operating frequency is 120 Hz, the third operating frequency may be 30 Hz, and the fourth operating frequency may be 15 Hz. The driving controller 100 may output the valid data signal "D" as the output image signal DATA during the ninth and seventeenth frames F9 and F17 in the fourth multi-frequency mode MFM4. The driving controller 100 may alternately output the valid data signal "D" and the bias signal "B" as the output image signal DATA during the thirteenth frame F13 in the fourth multi-frequency mode MFM4. The driving controller 100 may output the bias signal "B" as the output image signal DATA during each of the tenth, eleventh, twelfth, fourteenth, fifteenth, sixteenth, eighteenth, and nineteenth frames F10, F11, F12, F14, F15, F16, F18, and F19.

In the fourth multi-frequency mode MFM4, the first anode initialization voltage VAINT1 is set to the first voltage level Va during each of the ninth, thirteenth, and seventeenth frames F9, F13, and F17, during which the valid data signal "D" is provided as the output image signal DATA to the first pixels PX1 in the first display area DA1, and the first anode initialization voltage VAINT1 is set to the second voltage level Vb during each of the tenth, eleventh, twelfth, fourteenth, fifteenth, sixteenth, eighteenth, and nineteenth frames F10, F11, F12, F14, F15, F16, F18, and F19. In the fourth multi-frequency mode MFM4, the second anode initialization voltage VAINT2 is set to the first voltage level Va during each of the ninth and seventeenth frames F9 and F17, during which the valid data signal "D" is provided as the output image signal DATA to the second pixels PX2 in the second display area DA2, and the second anode initialization voltage VAINT2 is set to the second voltage level Vb during each of the tenth, eleventh, twelfth, thirteenth, fourteenth, fifteenth, sixteenth, eighteenth, and nineteenth frames F10, F11, F12, F13, F14, F15, F16, F18, and F19. The voltage level change of the anode terminal of the light emitting element ED may be minimized by changing the voltage level of each of the first anode initialization voltage VAINT1 and the second anode initialization voltage VAINT2 to the second voltage level Vb lower than the first voltage level Va during frames where the valid data signal "D" is not provided.

Although an embodiment of the present disclosure has been described for illustrative purposes, those skilled in the art will appreciate that various modifications and substitutions are possible, without departing from the scope and spirit of the present disclosure as disclosed in the accompanying claims. Accordingly, the technical scope of the present disclosure is not limited to the detailed description of this specification, but should be defined by the claims.
A display device having such a configuration may operate in a multi-frequency mode in which a first display area is driven at a first operating frequency and a second display area is driven at a second operating frequency. Accordingly, power consumption of the display device may be reduced. A luminance difference between the first display area and the second display area may be prevented from being visually perceived by compensating for characteristic changes of pixels in the second display area in the multi-frequency mode. Accordingly, the power consumption of the display device may be reduced and display quality may be prevented from being deteriorated.

While the present disclosure has been described with reference to embodiments thereof, it will be apparent to those of ordinary skill in the art that various changes and modifications may be made thereto without departing from the spirit and scope of the present disclosure as set forth in the following claims. | 76,378 |
11862097 | DETAILED DESCRIPTION OF THE EMBODIMENTS Hereinafter, embodiments of the present disclosure will be explained in detail with reference to the accompanying drawings.

FIG. 1 is a block diagram illustrating a display device according to embodiments, FIG. 2 is a timing diagram illustrating an example in which the display device of FIG. 1 operates in a power-on sequence period, and FIG. 3 is a timing diagram illustrating another example in which the display device of FIG. 1 operates in a power-on sequence period.

Referring to FIGS. 1 to 3, a display device 100 may include a display panel 110, a display panel driving circuit 120, a voltage generating circuit 130, and an over-current protecting circuit 140. Here, the display device 100 may be an organic light emitting display device, but the display device 100 is not limited thereto. The display panel 110 may include a plurality of pixels, each of which includes a pixel circuit 111 and a light emitting element connected to the pixel circuit 111. Here, the plurality of pixels may be arranged in various forms (e.g., a matrix form, etc.) within the display panel 110. The pixel circuit 111 may be connected to a data driving circuit through a data line, connected to a scan driving circuit through a scan line, and connected to an initialization voltage generating circuit included in the voltage generating circuit 130 through an initialization voltage line. In an embodiment, the pixel circuit 111 may include at least three transistors (e.g., a switching transistor, a driving transistor, and an initialization transistor) and at least one capacitor (e.g., a storage capacitor). A light emitting element (e.g., an organic light emitting diode) may be connected to the pixel circuit 111.

The display panel driving circuit 120 may drive the display panel 110. To this end, the display panel driving circuit 120 may include a data driving circuit (or referred to as a data driver) configured to provide a data signal DS to the display panel 110 through a data line, a scan driving circuit (or referred to as a scan driver) configured to provide a scan signal SS to the display panel 110 through a scan line, a timing control circuit (or referred to as a timing controller) configured to control the data driving circuit and the scan driving circuit, and the like. Meanwhile, the display panel driving circuit 120 may receive display panel voltages P-VOL from the voltage generating circuit 130 to drive the display panel 110. For example, the display panel voltages P-VOL may include a high power supply voltage ELVDD, a low power supply voltage ELVSS, an initialization voltage VINIT, and the like.

The data driving circuit may generate the data signal DS to be provided to the display panel 110 based on a data control signal and image data DATA received from the timing control circuit. Here, the data control signal may include a horizontal start signal and a load signal, but the data control signal is not limited thereto. In an embodiment, the data driving circuit may be implemented as at least one integrated circuit (IC). For example, the data driving circuit may be configured as at least one driving chip mounted on a flexible printed circuit board and connected to the display panel 110 in a tape carrier package (TCP) scheme, or mounted on the display panel 110 in a chip-on-glass (COG) scheme. However, since the above configuration has been provided for illustrative purposes, an implementation scheme of the data driving circuit is not limited thereto.
The scan driving circuit may generate the scan signal SS to be provided to the display panel 110 based on a scan control signal received from the timing control circuit. Here, the scan control signal may include a vertical start signal and a scan clock signal SLK, but the scan control signal is not limited thereto. To this end, the scan driving circuit may include shift registers configured to generate the scan signal SS based on the vertical start signal (or a scan start signal generated by level-shifting the vertical start signal) and the scan clock signal SLK. In an embodiment, the scan driving circuit may be implemented as at least one integrated circuit. For example, the scan driving circuit may be configured as at least one driving chip mounted on a flexible printed circuit board and connected to the display panel 110 in a tape carrier package scheme, or mounted on the display panel 110 in a chip-on-glass scheme. As another example, the scan driving circuit may be formed simultaneously with the transistors of the pixel circuit in a non-display area (i.e., a peripheral area) of the display panel 110 in a form of an amorphous silicon TFT gate driver circuit (ASG) or an oxide silicon TFT gate driver circuit (OSG). In this case, transistors of the scan driving circuit may include an amorphous silicon thin film transistor or an oxide thin film transistor. However, since the above configuration has been provided for illustrative purposes, an implementation scheme of the scan driving circuit is not limited thereto.

The timing control circuit (e.g., a microcontroller unit (MCU), etc.) may control the data driving circuit and the scan driving circuit. To this end, the timing control circuit may generate various signals (e.g., the data control signal, the scan control signal, etc.) for controlling the data driving circuit and the scan driving circuit by using driving circuit voltages D-VOL supplied from the voltage generating circuit 130. For example, the driving circuit voltages D-VOL may include a gate-on voltage, a gate-off voltage, an analog power supply voltage, a gamma voltage, and the like. In some embodiments, the timing control circuit may receive the image data DATA from an outside, perform predetermined processing (e.g., data compensation processing, etc.), and provide the image data DATA on which the predetermined processing has been performed to the data driving circuit.

The voltage generating circuit 130 may receive an input power supply voltage VIN when the display device 100 is powered on, and generate display panel voltages P-VOL for driving the display panel 110 and driving circuit voltages D-VOL for driving the display panel driving circuit 120 based on the input power supply voltage VIN. In other words, the voltage generating circuit 130 may generate and output the display panel voltages P-VOL and the driving circuit voltages D-VOL in a power-on sequence period of the display device 100 (i.e., a period during which voltages and signals required to display an image on the display panel 110 are sequentially generated and output after the display device 100 is powered on). Here, the voltage generating circuit 130 may include an initialization voltage generating circuit 131 (e.g., a DC-DC converter, an amplifier, or the like having a current sinking structure) which is configured to generate and output the initialization voltage VINIT, as shown in FIG. 6, which is to be applied to an initialization target node in the pixel circuit 111 in an initialization operation period of the pixel circuit 111.
Here, the initialization target node in the pixel circuit 111 may correspond to an anode of the light emitting element connected to the pixel circuit 111. The voltage generating circuit 130 may output the initialization voltage VINIT for initializing the initialization target node in the pixel circuit 111 at a first time point TA or TA′ corresponding to a time point (i.e., TA shown in FIG. 2) at which the input power supply voltage VIN is received or a time point (i.e., TA′ shown in FIG. 3) that is later than the time point at which the input power supply voltage VIN is received by a predetermined time. This will be described in detail below with reference to FIGS. 2 and 3.

The over-current protecting circuit 140 may monitor an over-current generated inside the display device 100, and generate a shut-down request signal STS for shutting down at least one of the display panel 110, the display panel driving circuit 120, and the voltage generating circuit 130 when the over-current is detected. Here, the over-current protecting circuit 140 may perform a first over-current protecting operation of detecting whether an initialization voltage current C-VINIT caused by the initialization voltage VINIT is an over-current in a power-on monitoring period PMP that is a period between the first time point TA or TA′ at which the initialization voltage VINIT starts to be output and a second time point TB at which the scan clock signal SLK starts to be output, and perform a second over-current protecting operation of detecting whether the initialization voltage current C-VINIT caused by the initialization voltage VINIT is the over-current in the initialization operation period of the pixel circuit 111 (e.g., an initialization operation may be sequentially performed on the pixel circuit 111 for each scan line in a display operation period DP).

Meanwhile, the display panel driving circuit 120 may shut down at least one of the display panel 110, the display panel driving circuit 120, and the voltage generating circuit 130 when the shut-down request signal STS is received from the over-current protecting circuit 140. Accordingly, the display device 100 may shut down at least one of the display panel 110, the display panel driving circuit 120, and the voltage generating circuit 130 when the over-current is detected due to a short-circuit defect or the like which occurs between voltage lines through which the display panel voltages P-VOL and the driving circuit voltages D-VOL are transmitted, or a burnt defect or the like which occurs due to a foreign substance and the like within the display device 100, so that the over-current may be prevented from flowing inside the display device 100, and thus the display device 100 may be prevented from exploding, or a fire may be prevented from occurring in the display device 100.

In an embodiment, as shown in FIG. 2, the voltage generating circuit 130 may receive the input power supply voltage VIN from outside of the display device, for example, from a power supply (or referred to as a set power), when the display device 100 is powered on. Here, the voltage generating circuit 130 may output (i.e., start outputting) the initialization voltage VINIT for initializing the initialization target node in the pixel circuit 111 at the first time point TA corresponding to the time point at which the input power supply voltage VIN is received.
The display panel driving circuit 120 may output (i.e., start outputting) the scan clock signal SLK for generating the scan signal SS that is to be applied to the pixel circuit 111 (i.e., denoted by TOGGLE) at the second time point TB that is later than the first time point TA at which the initialization voltage VINIT is output. In other words, the voltage generating circuit 130 may output the initialization voltage VINIT before the scan clock signal SLK is output. Thereafter, the voltage generating circuit 130 may supply the high power supply voltage ELVDD to the display panel 110 after the second time point TB at which the scan clock signal SLK is output and before a third time point TC at which the scan signal SS and the data signal DS are generated and applied to the display panel 110. In other words, when the high power supply voltage ELVDD is applied to the display panel 110, and the scan signal SS and the data signal DS are generated and applied to the display panel 110, the display operation period DP may start.

The display panel driving circuit 120 may apply the scan signal SS and the data signal DS to the display panel 110 at the third time point TC to start the display operation period DP. Here, the display operation period DP may refer to a period during which an image is displayed on the display panel 110, and may include, for example, an initialization operation period during which the initialization voltage VINIT is applied to the initialization target node (e.g., the anode of the light emitting element) in the pixel circuit 111, a data write operation period during which a data voltage corresponding to the data signal DS is stored in the storage capacitor in the pixel circuit 111, and a light emitting operation period during which the light emitting element in the pixel circuit 111 emits light based on the data signal DS stored in the storage capacitor.

The over-current protecting circuit 140 may perform the first over-current protecting operation of detecting whether the initialization voltage current C-VINIT caused by the initialization voltage VINIT is the over-current in the power-on monitoring period PMP that is set between the first time point TA at which the initialization voltage VINIT is output and the second time point TB at which the scan clock signal SLK is output. In an embodiment, the power-on monitoring period PMP may be set as an entire period between the first time point TA at which the initialization voltage VINIT is output and the second time point TB at which the scan clock signal SLK is output. In another embodiment, the power-on monitoring period PMP may be set as a partial period between the first time point TA at which the initialization voltage VINIT is output and the second time point TB at which the scan clock signal SLK is output.

In addition, the over-current protecting circuit 140 may perform the second over-current protecting operation of detecting whether the initialization voltage current C-VINIT caused by the initialization voltage VINIT is the over-current in the initialization operation period of the pixel circuit 111 during which the initialization voltage VINIT is applied to the initialization target node in the pixel circuit 111 while an image is displayed on the display panel 110 (i.e., in the display operation period DP).
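The ordering TA (or TA′) → TB → TC described above can be captured as a simple event list. The event tags and print format are illustrative aids only, not structures from the disclosure.

```python
# Illustrative event list (tags and wording are assumptions) summarizing the
# power-on sequence described above: VINIT at TA/TA', the first over-current
# monitoring window PMP, the scan clock at TB, ELVDD between TB and TC, and
# scan/data signals at TC, which opens the display operation period DP.
POWER_ON_SEQUENCE = [
    ("TA",    "output initialization voltage VINIT"),
    ("PMP",   "monitor C-VINIT for over-current (first protecting operation)"),
    ("TB",    "start toggling scan clock signal SLK"),
    ("ELVDD", "apply high power supply voltage to the display panel"),
    ("TC",    "apply scan signal SS and data signal DS; display period DP starts"),
]

for tag, action in POWER_ON_SEQUENCE:
    print(f"{tag:>6}: {action}")
```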
In general, since the initialization voltage VINIT applied to the initialization target node (e.g., the anode of the light emitting element) in the pixel circuit 111 is lower than the data voltage corresponding to the data signal DS, when the initialization voltage VINIT is applied to the initialization target node in the pixel circuit 111, the initialization voltage current C-VINIT may flow from the initialization target node in the pixel circuit 111 to the initialization voltage generating circuit 131 (e.g., the DC-DC converter, the amplifier, or the like having the current sinking structure) in the voltage generating circuit 130 through the initialization transistor T3. Accordingly, the over-current protecting circuit 140 may detect whether the initialization voltage current C-VINIT caused by the initialization voltage VINIT is the over-current in the initialization operation period of the pixel circuit 111 to perform the second over-current protecting operation.

Meanwhile, in the power-on monitoring period PMP that is set between the first time point TA at which the initialization voltage VINIT is output and the second time point TB at which the scan clock signal SLK is output, the over-current protecting circuit 140 may generate the shut-down request signal STS for shutting down at least one of the display panel 110, the display panel driving circuit 120, and the voltage generating circuit 130 when a state in which the initialization voltage current C-VINIT is greater than a first reference current continues for a first reference time. In other words, in the power-on monitoring period PMP, the over-current protecting circuit 140 may determine the initialization voltage current C-VINIT as the over-current when the state in which the initialization voltage current C-VINIT is greater than the first reference current continues for the first reference time.

In addition, in the initialization operation period of the pixel circuit 111 during which the initialization voltage VINIT is applied to the initialization target node in the pixel circuit 111, the over-current protecting circuit 140 may generate the shut-down request signal STS for shutting down at least one of the display panel 110, the display panel driving circuit 120, and the voltage generating circuit 130 when a state in which the initialization voltage current C-VINIT is greater than a second reference current continues for a second reference time. In other words, in the initialization operation period of the pixel circuit 111 (e.g., the initialization operation may be sequentially performed on the pixel circuit 111 for each scan line in the display operation period DP), the over-current protecting circuit 140 may determine the initialization voltage current C-VINIT as the over-current when the state in which the initialization voltage current C-VINIT is greater than the second reference current continues for the second reference time.

In another embodiment, as shown in FIG. 3, the voltage generating circuit 130 may receive the input power supply voltage VIN from outside of the display device, for example, from a power supply, when the display device 100 is powered on. Here, the voltage generating circuit 130 may output (i.e., start outputting) the initialization voltage VINIT for initializing the initialization target node in the pixel circuit 111 at the first time point TA′ that is later than the time point at which the input power supply voltage VIN is received.
The display panel driving circuit 120 may output (i.e., start outputting) the scan clock signal SLK for generating the scan signal SS that is to be applied to the pixel circuit 111 (i.e., denoted by TOGGLE) at the second time point TB that is later than the first time point TA′ at which the initialization voltage VINIT is output. In other words, the voltage generating circuit 130 may output the initialization voltage VINIT before the scan clock signal SLK is output. Thereafter, the voltage generating circuit 130 may supply the high power supply voltage ELVDD to the display panel 110 between the second time point TB at which the scan clock signal SLK is output and a third time point TC at which the scan signal SS and the data signal DS are generated and applied to the display panel 110, to prepare for the display operation period DP. In other words, when the high power supply voltage ELVDD is applied to the display panel 110, and the scan signal SS and the data signal DS are generated and applied to the display panel 110, the display operation period DP may start.

The display panel driving circuit 120 may apply the scan signal SS and the data signal DS to the display panel 110 at the third time point TC to start the display operation period DP. Here, the display operation period DP may refer to a period during which an image is displayed on the display panel 110, and may include, for example, an initialization operation period during which the initialization voltage VINIT is applied to the initialization target node (e.g., the anode of the light emitting element) in the pixel circuit 111, a data write operation period during which a data voltage corresponding to the data signal DS is stored in the storage capacitor in the pixel circuit 111, and a light emitting operation period during which the light emitting element in the pixel circuit 111 emits light based on the data signal DS stored in the storage capacitor.

The over-current protecting circuit 140 may perform the first over-current protecting operation of detecting whether the initialization voltage current C-VINIT caused by the initialization voltage VINIT is the over-current in the power-on monitoring period PMP that is set between the first time point TA′ at which the initialization voltage VINIT is output and the second time point TB at which the scan clock signal SLK is output. In an embodiment, the power-on monitoring period PMP may be set as an entire period between the first time point TA′ at which the initialization voltage VINIT is output and the second time point TB at which the scan clock signal SLK is output. In another embodiment, the power-on monitoring period PMP may be set as a partial period between the first time point TA′ at which the initialization voltage VINIT is output and the second time point TB at which the scan clock signal SLK is output.

In addition, the over-current protecting circuit 140 may perform the second over-current protecting operation of detecting whether the initialization voltage current C-VINIT caused by the initialization voltage VINIT is the over-current in the initialization operation period of the pixel circuit 111 during which the initialization voltage VINIT is applied to the initialization target node in the pixel circuit 111 when a display operation of displaying an image on the display panel 110 is performed (i.e., in the display operation period DP).
In general, since the initialization voltage VINIT applied to the initialization target node (e.g., the anode of the light emitting element) in the pixel circuit 111 is lower than the data voltage corresponding to the data signal DS, when the initialization voltage VINIT is applied to the initialization target node in the pixel circuit 111, the initialization voltage current C-VINIT may flow from the initialization target node in the pixel circuit 111 to the initialization voltage generating circuit 131 (e.g., the DC-DC converter, the amplifier, or the like having the current sinking structure) in the voltage generating circuit 130 through the initialization transistor T3. Accordingly, the over-current protecting circuit 140 may detect whether the initialization voltage current C-VINIT caused by the initialization voltage VINIT is the over-current in the initialization operation period of the pixel circuit 111 to perform the second over-current protecting operation.

Meanwhile, in the power-on monitoring period PMP that is set between the first time point TA′ at which the initialization voltage VINIT is output and the second time point TB at which the scan clock signal SLK is output, the over-current protecting circuit 140 may generate the shut-down request signal STS for shutting down at least one of the display panel 110, the display panel driving circuit 120, and the voltage generating circuit 130 when a state in which the initialization voltage current C-VINIT is greater than a first reference current continues for a first reference time. In other words, in the power-on monitoring period PMP, the over-current protecting circuit 140 may determine the initialization voltage current C-VINIT as the over-current when the state in which the initialization voltage current C-VINIT is greater than the first reference current continues for the first reference time.

In addition, in the initialization operation period of the pixel circuit 111 during which the initialization voltage VINIT is applied to the initialization target node in the pixel circuit 111, the over-current protecting circuit 140 may generate the shut-down request signal STS for shutting down at least one of the display panel 110, the display panel driving circuit 120, and the voltage generating circuit 130 when a state in which the initialization voltage current C-VINIT is greater than a second reference current continues for a second reference time. In other words, in the initialization operation period of the pixel circuit 111 (e.g., the initialization operation may be sequentially performed on the pixel circuit 111 for each scan line in the display operation period DP), the over-current protecting circuit 140 may determine the initialization voltage current C-VINIT as the over-current when the state in which the initialization voltage current C-VINIT is greater than the second reference current continues for the second reference time.

In an embodiment, the first reference current (e.g., at a level of 50 mA) that is set in the power-on monitoring period PMP may be set to be smaller than the second reference current (e.g., at a level of 500 mA) that is set in the initialization operation period of the pixel circuit 111.
In other words, since no initialization voltage current C-VINIT should flow in a normal state in the power-on monitoring period PMP, and the initialization voltage current C-VINIT that is generated by the burnt defect and the like caused by the foreign substance and the like in the display device100is relatively small, the first reference current that is set in the power-on monitoring period PMP may be set to be relatively small. Accordingly, the over-current protecting circuit140may detect a minute over-current (e.g., due to the burnt defect, etc.) caused by the initialization voltage in the power-on monitoring period PMP by setting the first reference current to be smaller than the second reference current. Meanwhile, since the initialization voltage current C-VINIT having a predetermined level flows from the initialization target node in the pixel circuit111to the initialization voltage generating circuit131having a current sinking structure even in the normal state in the initialization operation period of the pixel circuit111, when the second reference current is set to be relatively small, the over-current protecting circuit140may detect the initialization voltage current C-VINIT having the predetermined level as the over-current even when the initialization voltage current C-VINIT is not the over-current. Therefore, the second reference current that is set in the initialization operation period of the pixel circuit111may be set to be relatively high. In other words, the over-current protecting circuit140may detect the over-current (e.g., due to a short-circuit defect, etc.) caused by the initialization voltage without an error under a relatively high reference current condition in the initialization operation period of the pixel circuit111. In an embodiment, the first reference current that is set in the power-on monitoring period PMP, the second reference current that is set in the initialization operation period of the pixel circuit111, the first reference time that is set in the power-on monitoring period PMP, and the second reference time that is set in the initialization operation period of the pixel circuit111may be adjustable in consideration of conditions such as an expected magnitude of the over-current and durability of internal circuits against the over-current. In some embodiments, by applying a filter, the over-current protecting circuit140may not determine a transient of the initialization voltage current C-VINIT generated within a very short time (e.g., at a level of 100 μs) as the over-current, to prevent a situation in which at least one of the display panel110, the display panel driving circuit120, and the voltage generating circuit130is shut down due to such a short transient.
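The two detection rules and the short-transient filter described above amount to a debounced threshold comparison. The following C sketch illustrates one way such logic could look; the 50 mA and 500 mA levels and the 100 μs filter follow the examples in the text, while the reference times, the sampling scheme, and all names are assumed for illustration only.

```c
#include <stdbool.h>
#include <stdint.h>

/* Sketch of the two threshold checks described above. The 50 mA and
 * 500 mA reference currents and the 100 us filter follow the examples
 * in the text; the reference times and all names are assumptions. */
#define FIRST_REFERENCE_CURRENT_MA   50u   /* FRC, power-on monitoring period */
#define SECOND_REFERENCE_CURRENT_MA  500u  /* SRC, initialization op. period  */
#define FIRST_REFERENCE_TIME_US      1000u /* FRT, assumed value              */
#define SECOND_REFERENCE_TIME_US     1000u /* SRT, assumed value              */
#define GLITCH_FILTER_US             100u  /* ignore shorter transients       */

struct ocp_state {
    uint32_t above_threshold_us; /* how long C-VINIT has stayed above the ref */
};

/* Called once per sample of the initialization voltage current C-VINIT.
 * in_pmp selects the small first reference; otherwise the larger second
 * reference is used. Returns true when the shut-down request signal STS
 * should be generated. */
static bool ocp_sample(struct ocp_state *s, uint32_t c_vinit_ma,
                       uint32_t sample_period_us, bool in_pmp)
{
    uint32_t ref_ma = in_pmp ? FIRST_REFERENCE_CURRENT_MA
                             : SECOND_REFERENCE_CURRENT_MA;
    uint32_t ref_us = in_pmp ? FIRST_REFERENCE_TIME_US
                             : SECOND_REFERENCE_TIME_US;

    if (c_vinit_ma > ref_ma)
        s->above_threshold_us += sample_period_us;
    else
        s->above_threshold_us = 0; /* the state must persist continuously */

    if (s->above_threshold_us <= GLITCH_FILTER_US)
        return false; /* very short spikes are filtered out */
    return s->above_threshold_us >= ref_us;
}

int main(void) {
    struct ocp_state st = { 0 };
    bool trip = false;
    /* A 60 mA leak sampled every 100 us in the PMP trips the first check. */
    for (int i = 0; i < 20 && !trip; i++)
        trip = ocp_sample(&st, 60, 100, true);
    return trip ? 0 : 1;
}
```

In such a scheme, resetting the accumulator whenever the current falls back below the reference is what makes the "continues for the reference time" condition strict, and the filter term keeps sub-100 μs spikes from ever reaching the shut-down path.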
As described above, the display device100may include a display panel110including a pixel circuit111, a display panel driving circuit120configured to drive the display panel110, a voltage generating circuit130configured to receive an input power supply voltage VIN when the display device100is powered on and generate display panel voltages P-VOL for driving the display panel110and driving circuit voltages D-VOL for driving the display panel driving circuit120based on the input power supply voltage VIN, and an over-current protecting circuit140configured to monitor an over-current generated inside the display device100, and generate a shut-down request signal STS for shutting down at least one of the display panel110, the display panel driving circuit120, and the voltage generating circuit130when the over-current is detected, wherein the voltage generating circuit130is configured to output an initialization voltage VINIT for initializing an initialization target node in the pixel circuit111at a first time point TA or TA′ corresponding to a time point at which the input power supply voltage VIN is received or a time point that is later than the time point at which the input power supply voltage VIN is received by a predetermined time, the display panel driving circuit120is configured to output a scan clock signal SLK for generating a scan signal SS that is to be applied to the pixel circuit111at a second time point TB that is later than the first time point TA or TA′, and the over-current protecting circuit140is configured to perform a first over-current protecting operation of detecting whether the initialization voltage current C-VINIT caused by the initialization voltage VINIT is the over-current in a power-on monitoring period PMP that is set between the first time point TA or TA′ and the second time point TB, so that a minute over-current (e.g., due to a burnt defect, etc.) caused by the initialization voltage VINIT may be detected under a relatively low reference current condition in the power-on monitoring period PMP. Accordingly, the display device100according to one embodiment of the present inventive concept may prevent an explosion, a fire, and the like caused by a burnt defect and the like by detecting, in a power-on sequence period, a minute over-current (e.g., due to a burnt defect, etc.) caused by an initialization voltage VINIT which a conventional over-current protecting circuit may not detect. In addition, according to the display device100, the over-current protecting circuit140may perform the second over-current protecting operation of detecting whether the initialization voltage current C-VINIT caused by the initialization voltage VINIT is the over-current in the initialization operation period of the pixel circuit111during which the initialization voltage VINIT is applied to the initialization target node in the pixel circuit111when the display operation of displaying an image on the display panel110is performed (i.e., in the display operation period DP), so that the over-current (e.g., due to a short-circuit defect, etc.) caused by the initialization voltage VINIT may be detected without an error under a relatively high reference current condition in the initialization operation period of the pixel circuit111. FIGS.4A and4Bare diagrams for describing an example in which the display device ofFIG.1performs a first over-current protecting operation in a power-on monitoring period.
Referring toFIGS.4A and4B, the display device100(specifically, the voltage generating circuit130) may output the initialization voltage VINIT for initializing the initialization target node in the pixel circuit111at the first time point TA which corresponds to the time point at which the input power supply voltage VIN is received. Therefore, the display device100(specifically, the over-current protecting circuit140) may perform the first over-current protecting operation in the power-on monitoring period PMP that is set between the first time point TA at which the initialization voltage VINIT for initializing the initialization target node (e.g., the anode of the light emitting element) in the pixel circuit111is output and the second time point TB at which the scan clock signal SLK for generating the scan signal SS that is to be applied to the pixel circuit111is output (i.e., denoted by TOGGLE). However, although the power-on monitoring period PMP has been shown inFIGS.4A and4Bas being set as an entire period between the first time point TA at which the initialization voltage VINIT is output and the second time point TB at which the scan clock signal SLK is output, in some embodiments, the power-on monitoring period PMP may be set as a partial period between the first time point TA at which the initialization voltage VINIT is output and the second time point TB at which the scan clock signal SLK is output. As shown inFIG.4A, since a current path through which the initialization voltage current C-VINIT flows is not formed until the initialization voltage VINIT is applied to the initialization target node of the pixel circuit111(i.e., before the third time point TC at which the display operation period DP starts) even when the initialization voltage VINIT is output from the first time point TA, no initialization voltage current C-VINIT should flow from the first time point TA at which the initialization voltage VINIT is output to the third time point TC at which the display operation period DP starts in a normal state where a burnt defect and the like caused by a foreign substance and the like within the display device100do not occur (i.e., denoted by NORMAL). However, a minute initialization voltage current C-VINIT may flow between the first time point TA at which the initialization voltage VINIT is output and the third time point TC at which the display operation period DP starts in a defect state where the burnt defect and the like caused by the foreign substance and the like in the display device100are present (i.e., denoted by DEFECT). Here, the minute initialization voltage current C-VINIT may be detected at least during the power-on monitoring period PMP in the defect state where the burnt defect and the like caused by the foreign substance and the like in the display device100are present. In particular, since the burnt defect and the like gradually become larger as the minute initialization voltage current C-VINIT continuously flows, the display device100has to perform the first over-current protecting operation of detecting whether the initialization voltage current C-VINIT caused by the initialization voltage VINIT is the over-current in the power-on monitoring period PMP.
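The NORMAL and DEFECT cases above can be made concrete with a toy trace: in the normal state the monitored current stays at zero throughout the power-on monitoring period, while a burnt defect leaks a small but persistent current above the first reference current. The self-contained C sketch below runs both traces through the same persistence check; all numbers are illustrative only.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Toy traces of C-VINIT in the power-on monitoring period, one sample
 * per 100 us. In the normal state no current path exists yet, so the
 * trace is all zero; a burnt defect leaks a minute but persistent
 * current. All values are illustrative. */
int main(void) {
    const uint32_t normal_ma[10] = { 0 };
    const uint32_t defect_ma[10] = { 60, 62, 61, 63, 60, 61, 62, 60, 61, 62 };
    const uint32_t frc_ma = 50;      /* first reference current (example) */
    const uint32_t frt_samples = 5;  /* first reference time, assumed     */

    const uint32_t *traces[2] = { normal_ma, defect_ma };
    for (int t = 0; t < 2; t++) {
        uint32_t run = 0;
        bool trip = false;
        for (int i = 0; i < 10; i++) {
            run = traces[t][i] > frc_ma ? run + 1 : 0;
            if (run >= frt_samples) { trip = true; break; }
        }
        printf("%s: %s\n", t ? "DEFECT" : "NORMAL",
               trip ? "STS generated (shut down)" : "no over-current");
    }
    return 0;
}
```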
As shown inFIG.4B, in the power-on monitoring period PMP that is set between the first time point TA at which the initialization voltage VINIT is output and the second time point TB at which the scan clock signal SLK is output, the over-current protecting circuit140may determine whether a state in which the initialization voltage current C-VINIT caused by the initialization voltage VINIT is greater than the first reference current FRC (i.e., a criterion for determining whether the initialization voltage current C-VINIT is the over-current in the power-on monitoring period PMP) continues for the first reference time FRT. Here, when the state in which the initialization voltage current C-VINIT is greater than the first reference current FRC continues for the first reference time FRT, the over-current protecting circuit140may determine the initialization voltage current C-VINIT as the over-current (i.e., determine a state as the defect state where the burnt defect and the like caused by the foreign substance and the like within the display device100are present), and may generate the shut-down request signal STS for shutting down at least one of the display panel110, the display panel driving circuit120, and the voltage generating circuit130. As a result, at least one of the display panel110, the display panel driving circuit120, and the voltage generating circuit130may be shut down in response to the shut-down request signal STS (i.e., denoted by SHUTDOWN). For example, as shown inFIG.4B, as the voltage generating circuit130is shut down, the voltage generating circuit130may immediately stop outputting the initialization voltage VINIT, and may not output the high power supply voltage ELVDD between the second time point TB and the third time point TC. In addition, as the display panel driving circuit120is shut down, the display panel driving circuit120may not output the scan clock signal SLK at the second time point TB. Meanwhile, the first reference current FRC and the first reference time FRT may be adjustable. For example, a user may adjust the first reference current FRC and the first reference time FRT by using an inter-integrated circuit (I2C) interface. FIGS.5A and5Bare diagrams for describing another example in which the display device ofFIG.1performs a first over-current protecting operation in a power-on monitoring period. Referring toFIGS.5A and5B, the display device100(specifically, the voltage generating circuit130) may output the initialization voltage VINIT for initializing the initialization target node in the pixel circuit111at the first time point TA′ that is later than the time point at which the input power supply voltage VIN is received. In other words, while the initialization voltage VINIT for initializing the initialization target node in the pixel circuit111is output at the first time point TA corresponding to the time point at which the input power supply voltage VIN is received inFIGS.4A and4B, the initialization voltage VINIT for initializing the initialization target node in the pixel circuit111may be output at the first time point TA′ that is later than the time point at which the input power supply voltage VIN is received inFIGS.5A and5B. 
Therefore, the display device100(specifically, the over-current protecting circuit140) may perform the first over-current protecting operation in the power-on monitoring period PMP that is set between the first time point TA′ at which the initialization voltage VINIT for initializing the initialization target node (e.g., the anode of the light emitting element) in the pixel circuit111is output and the second time point TB at which the scan clock signal SLK for generating the scan signal SS that is to be applied to the pixel circuit111is output (i.e., denoted by TOGGLE). However, although the power-on monitoring period PMP has been shown inFIGS.5A and5Bas being set as an entire period between the first time point TA′ at which the initialization voltage VINIT is output and the second time point TB at which the scan clock signal SLK is output, in some embodiments, the power-on monitoring period PMP may be set as a partial period between the first time point TA′ at which the initialization voltage VINIT is output and the second time point TB at which the scan clock signal SLK is output. As shown inFIG.5A, since a current path through which the initialization voltage current C-VINIT flows is not formed until the initialization voltage VINIT is applied to the initialization target node of the pixel circuit111(i.e., before the third time point TC at which the display operation period DP starts) even when the initialization voltage VINIT is output from the first time point TA′, no initialization voltage current C-VINIT should flow between the first time point TA′ at which the initialization voltage VINIT is output and the third time point TC at which the display operation period DP starts in a normal state where a burnt defect and the like caused by a foreign substance and the like in the display device100do not occur (i.e., denoted by NORMAL). However, a minute initialization voltage current C-VINIT may flow between the first time point TA′ at which the initialization voltage VINIT is output and the third time point TC at which the display operation period DP starts in a defect state where the burnt defect and the like caused by the foreign substance and the like in the display device100are present (i.e., denoted by DEFECT). Therefore, the minute initialization voltage current C-VINIT may be detected at least during the power-on monitoring period PMP in the defect state where the burnt defect and the like caused by the foreign substance and the like in the display device100are present. In particular, since the burnt defect and the like gradually become larger as the minute initialization voltage current C-VINIT continuously flows, the display device100has to perform the first over-current protecting operation of detecting whether the initialization voltage current C-VINIT caused by the initialization voltage VINIT is the over-current in the power-on monitoring period PMP. As shown inFIG.5B, in the power-on monitoring period PMP that is set between the first time point TA′ at which the initialization voltage VINIT is output and the second time point TB at which the scan clock signal SLK is output, the over-current protecting circuit140may determine whether a state in which the initialization voltage current C-VINIT caused by the initialization voltage VINIT is greater than the first reference current FRC (i.e., a criterion for determining whether the initialization voltage current C-VINIT is the over-current in the power-on monitoring period PMP) continues for the first reference time FRT.
Here, when the state in which the initialization voltage current C-VINIT is greater than the first reference current FRC continues for the first reference time FRT, the over-current protecting circuit140may determine the initialization voltage current C-VINIT as the over-current (i.e., determine a state as the defect state where the burnt defect and the like caused by the foreign substance and the like within the display device100are present), and may generate the shut-down request signal STS for shutting down at least one of the display panel110, the display panel driving circuit120, and the voltage generating circuit130. As a result, at least one of the display panel110, the display panel driving circuit120, and the voltage generating circuit130may be shut down in response to the shut-down request signal STS (i.e., denoted by SHUTDOWN). For example, as shown inFIG.5B, as the voltage generating circuit130is shut down, the voltage generating circuit130may immediately stop outputting the initialization voltage VINIT, and may not output the high power supply voltage ELVDD between the second time point TB and the third time point TC. In addition, as the display panel driving circuit120is shut down, the display panel driving circuit120may not output the scan clock signal SLK at the second time point TB. Meanwhile, the first reference current FRC and the first reference time FRT may be adjustable. FIG.6is a diagram for describing that an initialization voltage is applied to an initialization target node in a pixel circuit in an initialization operation period of the pixel circuit included in the display device ofFIG.1, andFIG.7is a diagram for describing that the display device ofFIG.1performs a second over-current protecting operation in an initialization operation period of a pixel circuit included in the display device ofFIG.1. Referring toFIGS.6and7, the pixel circuit111may include a driving transistor T1, a switching transistor T2, an initialization transistor T3, and a storage capacitor CST. A light emitting element OLED may be connected to the pixel circuit111. Here, the pixel circuit111may be connected to the initialization voltage generating circuit131in the voltage generating circuit130configured to apply the initialization voltage VINIT to the anode of the light emitting element (i.e., a second node N2). However, since the pixel circuit111shown inFIG.6has been provided for illustrative purposes, a structure of the pixel circuit111is not limited thereto. The driving transistor T1may include a first terminal connected to the high power supply voltage ELVDD, a gate terminal connected to a first node N1, and a second terminal connected to the second node N2. In other words, the driving transistor T1may be connected in series with the light emitting element OLED between the high power supply voltage ELVDD and the low power supply voltage ELVSS. In the light emitting operation period of the pixel circuit111, the driving transistor T1may allow a driving current to flow through the light emitting element OLED based on the data voltage stored in the storage capacitor CST. The switching transistor T2may include a first terminal connected to the data line DL, a gate terminal connected to the scan line SL, and a second terminal connected to the first node N1. 
In the data write operation period of the pixel circuit111, the switching transistor T2may transmit the data voltage (i.e., corresponding to the data signal DS) applied through the data line DL to the first node N1in response to the scan signal SS applied through the scan line SL. The storage capacitor CST may include a first terminal connected to the first node N1, and a second terminal connected to the second node N2. In the data write operation period of the pixel circuit111, the storage capacitor CST may store the data voltage transmitted to the first node N1. The light emitting element OLED may include an anode connected to the second node N2, and a cathode connected to the low power supply voltage ELVSS. In the light emitting operation period of the pixel circuit111, the light emitting element OLED may emit light based on the driving current provided from the driving transistor T1. In an embodiment, the light emitting element OLED may be an organic light emitting diode. The initialization transistor T3may include a first terminal connected to the second node N2, a gate terminal connected to an initialization control line CL, and a second terminal connected to the initialization voltage line SEL. In the initialization operation period of the pixel circuit111, the initialization transistor T3may transmit the initialization voltage VINIT applied through the initialization voltage line SEL to the anode of the light emitting element OLED (i.e., the second node N2) in response to an initialization control signal applied through the initialization control line CL. As a result, the anode of the light emitting element OLED (i.e., the second node N2) may be initialized to the initialization voltage VINIT. In some embodiments, the initialization transistor T3may also serve as the sensing transistor configured to perform a sensing operation for detecting characteristics of the light emitting element OLED. In this case, in a sensing operation period of the pixel circuit111, the initialization transistor T3(i.e., the sensing transistor) may output a sensing current to the initialization voltage line (i.e., a sensing voltage line) in response to the initialization control signal (i.e., a sensing control signal) applied through the initialization control line CL (i.e., a sensing control line). Meanwhile, as shown inFIG.6, in the initialization operation period of the pixel circuit111, when the initialization voltage VINIT generated by the initialization voltage generating circuit131in the voltage generating circuit130is applied to the anode of the light emitting element OLED (i.e., the second node N2) through the initialization voltage line SEL and the initialization transistor T3which is turned on, a current path may be formed between the anode of the light emitting element OLED (i.e., the second node N2) and the initialization voltage generating circuit131in the voltage generating circuit130. The initialization voltage current C-VINIT caused by the initialization voltage VINIT may flow along the current path. 
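The current path just described exists only while the initialization transistor T3 conducts and VINIT is below the anode voltage at the second node N2. The short C sketch below models that condition to make the direction of C-VINIT explicit; the voltages and names are hypothetical placeholders, not values from the embodiments.

```c
#include <stdbool.h>
#include <stdio.h>

/* Minimal model of the initialization current path of the pixel above:
 * while T3 is on and VINIT < V(N2), C-VINIT flows from the anode node
 * N2 into the current-sinking initialization voltage generating
 * circuit. Voltages are illustrative placeholders. */
struct pixel {
    double v_anode_n2;  /* node N2, the anode of the OLED  */
    bool   t3_on;       /* initialization transistor state */
};

/* True only while a path from N2 to the VINIT generator exists. */
static bool init_current_path(const struct pixel *px, double vinit) {
    return px->t3_on && vinit < px->v_anode_n2;
}

int main(void) {
    struct pixel px = { .v_anode_n2 = 3.0, .t3_on = false };
    const double vinit = -2.0;
    printf("before init: path=%d\n", init_current_path(&px, vinit));
    px.t3_on = true;       /* initialization operation period begins */
    printf("during init: path=%d\n", init_current_path(&px, vinit));
    px.v_anode_n2 = vinit; /* N2 settles to VINIT */
    printf("after settling: path=%d\n", init_current_path(&px, vinit));
    return 0;
}
```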
In general, since the initialization voltage VINIT applied to the anode of the light emitting element OLED (i.e., the second node N2) is lower than the data voltage corresponding to the data signal DS, when the initialization voltage VINIT is applied to the anode of the light emitting element OLED (i.e., the second node N2), the initialization voltage current C-VINIT may flow from the anode of the light emitting element OLED (i.e., the second node N2) to the initialization voltage generating circuit131in the voltage generating circuit130. The initialization voltage generating circuit131in the voltage generating circuit130may be implemented as a DC-DC converter, an amplifier, or the like having a current sinking structure. As described above, while the initialization voltage current C-VINIT having a predetermined level flows from the anode of the light emitting element OLED (i.e., the second node N2) to the initialization voltage generating circuit131having the current sinking structure even in the normal state in the initialization operation period of the pixel circuit111, when a short-circuit defect and the like occur in the initialization voltage line SEL, the initialization voltage current C-VINIT exceeding the predetermined level may flow from the anode of the light emitting element OLED (i.e., the second node N2) to the initialization voltage generating circuit131having the current sinking structure. Therefore, the over-current protecting circuit140may perform the second over-current protecting operation of detecting whether the initialization voltage current C-VINIT caused by the initialization voltage VINIT is the over-current in the initialization operation period of the pixel circuit111when the display operation of displaying an image on the display panel110is performed. Meanwhile, since the initialization transistor T3of the pixel circuit111is turned off in the power-on monitoring period PMP described above, no initialization voltage current C-VINIT flows through the initialization voltage line SEL in the normal state. However, since the initialization voltage current C-VINIT may flow to the initialization voltage generating circuit131through the initialization voltage line SEL even in the power-on monitoring period PMP described above when the burnt defect and the like caused by the foreign substance and the like within the display device100are present, the over-current protecting circuit140may perform the first over-current protecting operation of detecting whether the initialization voltage current C-VINIT actually flows through the initialization voltage line SEL in the power-on monitoring period PMP described above. For this reason, the first reference current (e.g., at a level of 50 mA) that is set in the power-on monitoring period PMP described above may be set to be smaller than the second reference current (e.g., at a level of 500 mA) that is set in the initialization operation period of the pixel circuit111.
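The text states that the reference currents and times are user-adjustable over an I2C interface but does not disclose a register map, so the following C sketch assumes a purely hypothetical map and write helper; none of the addresses, scalings, or names below are part of the disclosure.

```c
#include <stdint.h>

/* Hypothetical register map for the adjustable references. */
#define OCP_REG_FRC  0x10  /* first reference current, PMP          */
#define OCP_REG_FRT  0x11  /* first reference time, PMP             */
#define OCP_REG_SRC  0x12  /* second reference current, init period */
#define OCP_REG_SRT  0x13  /* second reference time, init period    */

/* Platform-specific I2C write, stubbed out for this sketch. */
static int i2c_write_u8(uint8_t dev, uint8_t reg, uint8_t val) {
    (void)dev; (void)reg; (void)val;
    return 0;
}

/* Keep the PMP reference well below the init-period reference so a
 * minute burnt-defect current is caught without false trips later. */
static int ocp_configure(uint8_t dev) {
    int rc = 0;
    rc |= i2c_write_u8(dev, OCP_REG_FRC, 5);   /* e.g. 5 x 10 mA = 50 mA   */
    rc |= i2c_write_u8(dev, OCP_REG_FRT, 10);  /* time units assumed       */
    rc |= i2c_write_u8(dev, OCP_REG_SRC, 50);  /* e.g. 50 x 10 mA = 500 mA */
    rc |= i2c_write_u8(dev, OCP_REG_SRT, 10);
    return rc;
}

int main(void) { return ocp_configure(0x3C); /* device address assumed */ }
```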
In detail, as shown inFIG.7, when the display operation of displaying an image on the display panel110is performed (i.e., in the display operation period DP), in the initialization operation period of the pixel circuit111during which the initialization voltage VINIT is applied to the anode of the light emitting element OLED (i.e., the second node N2), the over-current protecting circuit140may determine whether a state in which the initialization voltage current C-VINIT caused by the initialization voltage VINIT is greater than the second reference current SRC (i.e., a criterion for determining whether the initialization voltage current C-VINIT is the over-current in the initialization operation period of the pixel circuit111) continues for the second reference time SRT. Here, when the state in which the initialization voltage current C-VINIT is greater than the second reference current SRC continues for the second reference time SRT, the over-current protecting circuit140may determine the initialization voltage current C-VINIT as the over-current (i.e., determine a state as a defect state where the short-circuit defect and the like are present in the initialization voltage line SEL), and may generate the shut-down request signal STS for shutting down at least one of the display panel110, the display panel driving circuit120, and the voltage generating circuit130. As a result, at least one of the display panel110, the display panel driving circuit120, and the voltage generating circuit130may be shut down in response to the shut-down request signal STS. For example, inFIG.7, when the state in which the initialization voltage current C-VINIT is greater than the second reference current SRC continues for the second reference time SRT which starts from a time point (i.e., denoted by VRT) at which the initialization voltage current C-VINIT is equal to the second reference current SRC, at least one of the display panel110, the display panel driving circuit120, and the voltage generating circuit130may be shut down (i.e., denoted by SHUTDOWN). Meanwhile, the second reference current SRC and the second reference time SRT may be adjustable. For example, the user may adjust the second reference current SRC and the second reference time SRT by using the I2C interface. FIG.8is a flowchart illustrating a method of performing an over-current protecting operation of a display device according to embodiments,FIG.9is a flowchart illustrating an example in which the method ofFIG.8performs a first over-current protecting operation in a power-on monitoring period, andFIG.10is a flowchart illustrating an example in which the method ofFIG.8performs a second over-current protecting operation in an initialization operation period of a pixel circuit. 
Referring toFIGS.8to10, a method of performing an over-current protecting operation ofFIG.8may include receiving an input power supply voltage when the display device is powered on (S110), generating and outputting an initialization voltage for initializing an initialization target node (e.g., an anode of a light emitting element) in a pixel circuit based on the input power supply voltage (S120), performing a first over-current protecting operation of detecting whether an initialization voltage current caused by the initialization voltage is an over-current in a power-on monitoring period that is set between a first time point at which the initialization voltage is output and a second time point at which a scan clock signal for generating a scan signal that is to be applied to the pixel circuit is output (S130), applying the initialization voltage to the initialization target node in the pixel circuit in an initialization operation period of the pixel circuit after the second time point at which the scan clock signal is output (S140), and performing a second over-current protecting operation of detecting whether the initialization voltage current caused by the initialization voltage is the over-current in the initialization operation period of the pixel circuit (S150). In an embodiment, the first time point at which the initialization voltage is output may coincide with a time point at which the input power supply voltage is received. In another embodiment, the first time point at which the initialization voltage is output may be later than the time point at which the input power supply voltage is received. However, according to the above embodiments, the first time point at which the initialization voltage is output may be earlier than the second time point at which the scan clock signal is output, so that the power-on monitoring period may be set between the first time point at which the initialization voltage is output and the second time point at which the scan clock signal is output. Meanwhile, as shown inFIG.9, in the performing of the first over-current protecting operation in the power-on monitoring period that is set between the first time point at which the initialization voltage is output and the second time point at which the scan clock signal is output, the method ofFIG.8may include monitoring the initialization voltage current caused by the initialization voltage (S210) and determining whether a state in which the initialization voltage current is greater than a first reference current continues for a first reference time (S220, S230). Here, the method ofFIG.8may include determining the initialization voltage current as the over-current when the state in which the initialization voltage current is greater than the first reference current continues for the first reference time (S240). In this case, the method ofFIG.8may include shutting down the display device (S250). Meanwhile, the method ofFIG.8may include not determining the initialization voltage current as the over-current when the state in which the initialization voltage current is greater than the first reference current does not continue for the first reference time. Here, the first reference current used for detecting the over-current in the power-on monitoring period may be set to be smaller than the second reference current used for detecting the over-current in the initialization operation period of the pixel circuit. 
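Read together, the flowcharts amount to a single control flow: the power-on steps (S110, S120), the first protecting operation in the power-on monitoring period (S130, detailed as S210 to S250 inFIG.9), and then the initialization operation with the larger second references (S140, S150; S310 to S350 inFIG.10have the same shape). The C sketch below is a hypothetical rendering of that flow with hardware access stubbed out; the trace values and sample counts are illustrative only.

```c
#include <stdbool.h>
#include <stdio.h>

/* Stubbed monitoring of C-VINIT (S210/S310); values are illustrative. */
static unsigned sample_idx;
static unsigned read_c_vinit_ma(void) {
    static const unsigned trace[] = { 0, 0, 80, 90, 95, 97, 99, 99 };
    return trace[sample_idx++ % 8];
}

/* Shared shape of both protecting operations: the current must stay
 * above the reference for the whole reference time (S220-S240 and
 * S320-S340); otherwise it is not determined as the over-current. */
static bool ocp_operation(unsigned ref_ma, unsigned ref_samples,
                          unsigned window_samples) {
    unsigned run = 0;
    for (unsigned i = 0; i < window_samples; i++) {
        run = read_c_vinit_ma() > ref_ma ? run + 1 : 0;
        if (run >= ref_samples)
            return true;
    }
    return false;
}

int main(void) {
    /* S110/S120 (receive VIN, output VINIT) happen in hardware. */
    if (ocp_operation(50, 3, 8)) {        /* S130 with FRC; FRT assumed */
        puts("S250: shut down in the power-on monitoring period");
        return 0;
    }
    /* S140: VINIT applied to the initialization target node. */
    if (ocp_operation(500, 3, 8))         /* S150 with SRC; SRT assumed */
        puts("S350: shut down in the initialization operation period");
    return 0;
}
```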
In addition, the first reference current and the first reference time used for detecting the over-current in the power-on monitoring period may be adjusted in consideration of conditions such as an expected magnitude of the over-current and durability of internal circuits against the over-current. Accordingly, the method ofFIG.8may detect a minute over-current (e.g., due to a burnt defect, etc.) caused by the initialization voltage under a relatively low reference current condition in the power-on monitoring period. Meanwhile, as shown inFIG.10, in the performing of the second over-current protecting operation in the initialization operation period of the pixel circuit, the method ofFIG.8may include monitoring the initialization voltage current caused by the initialization voltage (S310) and determining whether a state in which the initialization voltage current is greater than a second reference current continues for a second reference time (S320, S330). Here, the method ofFIG.8may include determining the initialization voltage current as the over-current when the state in which the initialization voltage current is greater than the second reference current continues for the second reference time (S340). In this case, the method ofFIG.8may include shutting down the display device (S350). Meanwhile, the method ofFIG.8may include not determining the initialization voltage current as the over-current when the state in which the initialization voltage current is greater than the second reference current does not continue for the second reference time. Here, the second reference current used for detecting the over-current in the initialization operation period of the pixel circuit may be set to be greater than the first reference current used for detecting the over-current in the power-on monitoring period. In addition, the second reference current and the second reference time used for detecting the over-current in the initialization operation period of the pixel circuit may be adjusted in consideration of conditions such as an expected magnitude of the over-current and durability of internal circuits against the over-current. Accordingly, the method ofFIG.8may detect the over-current (e.g., due to a short-circuit defect, etc.) caused by the initialization voltage without an error under a relatively high reference current condition in the initialization operation period of the pixel circuit. FIG.11is a block diagram illustrating an electronic device according to embodiments, andFIG.12is a diagram illustrating an example in which the electronic device ofFIG.11is implemented as a television. Referring toFIGS.11and12, the electronic device1000may include a processor1010, a memory device1020, a storage device1030, an input/output (I/O) device1040, a power supply1050, and a display device1060. Here, the display device1060may be the display device100ofFIG.1. In addition, the electronic device1000may further include a plurality of ports for communicating with a video card, a sound card, a memory card, a universal serial bus (USB) device, other electronic devices, and the like. In an embodiment, as illustrated inFIG.12, the electronic device1000may be implemented as a television. However, the electronic device1000is not limited thereto.
For example, the electronic device1000may be implemented as a cellular phone, a video phone, a smart phone, a smart pad, a smart watch, a tablet personal computer (PC), a car navigation system, a computer monitor, a laptop, a head mounted display (HMD) device, and the like. The processor1010may perform various computing functions. In an embodiment, the processor1010may be a microprocessor, a central processing unit (CPU), an application processor (AP), and the like. The processor1010may be coupled to other components via an address bus, a control bus, a data bus, and the like. Further, the processor1010may be coupled to an extended bus such as a peripheral component interconnection (PCI) bus. The memory device1020may store data for operations of the electronic device1000. For example, the memory device1020may include at least one non-volatile memory device such as an erasable programmable read-only memory (EPROM) device, an electrically erasable programmable read-only memory (EEPROM) device, a flash memory device, a phase change random access memory (PRAM) device, a resistance random access memory (RRAM) device, a nano floating gate memory (NFGM) device, a polymer random access memory (PoRAM) device, a magnetic random access memory (MRAM) device, a ferroelectric random access memory (FRAM) device, and the like and/or at least one volatile memory device such as a dynamic random access memory (DRAM) device, a static random access memory (SRAM) device, a mobile DRAM device, and the like. The storage device1030may include a solid state drive (SSD) device, a hard disk drive (HDD) device, a CD-ROM device, and the like. The I/O device1040may include an input device such as a keyboard, a keypad, a mouse device, a touch-pad, a touch-screen, and the like, and an output device such as a printer, a speaker, and the like. In some embodiments, the I/O device1040may include the display device1060. The power supply (or set power)1050may provide power for operations of the electronic device1000. For example, the power supply1050may be a power management integrated circuit (PMIC). The display device1060may display an image corresponding to visual information of the electronic device1000. In an embodiment, the display device1060may be an organic light emitting display device. The display device1060may be connected to other components via the buses or other communication links. The display device1060may include a voltage generating circuit configured to receive an input power supply voltage when the display device is powered on, and generate display panel voltages for driving the display panel and driving circuit voltages for driving the display panel driving circuit based on the input power supply voltage and an over-current protecting circuit configured to monitor an over-current generated inside the display device, and generate a shut-down request signal for shutting down at least one of the display panel, the display panel driving circuit, and the voltage generating circuit when the over-current is detected. 
Here, according to the display device1060, the voltage generating circuit may be configured to output an initialization voltage for initializing an initialization target node in the pixel circuit at a first time point which corresponds to a time point at which the input power supply voltage is received or a time point that is later than the time point at which the input power supply voltage is received by a predetermined time, the display panel driving circuit may be configured to output a scan clock signal for generating a scan signal that is to be applied to the pixel circuit at a second time point that is later than the first time point, and the over-current protecting circuit may be configured to perform a first over-current protecting operation of detecting whether an initialization voltage current caused by the initialization voltage is the over-current in a power-on monitoring period that is set between the first time point and the second time point. In addition, according to the display device1060, the over-current protecting circuit may perform a second over-current protecting operation of detecting whether the initialization voltage current caused by the initialization voltage is the over-current in an initialization operation period of the pixel circuit during which the initialization voltage is applied to the initialization target node in the pixel circuit when a display operation of displaying an image on the display panel is performed. As a result, the display device1060may detect a minute over-current (e.g., due to a burnt defect, etc.) in the power-on monitoring period, and may detect the over-current (e.g., due to a short-circuit defect, etc.), which is greater than the minute over-current, in the initialization operation period of the pixel circuit without an error. Since these are described above, duplicated description related thereto will not be repeated. The present disclosure may be applied to a display device and an electronic device including the display device. For example, the present disclosure may be applied to a cellular phone, a smart phone, a video phone, a smart pad, a smart watch, a tablet PC, a car navigation system, a television, a computer monitor, a laptop, a head mounted display (HMD) device, an MP3 player, etc. The foregoing is illustrative of the inventive concept and is not to be construed as limiting thereof. Although a few embodiments of the inventive concept have been described, those skilled in the art will readily appreciate that many modifications are possible in the embodiments without materially departing from the novel teachings and advantages of the inventive concept. Accordingly, all such modifications are intended to be included within the scope of the inventive concept as defined in the claims. Therefore, it is to be understood that the foregoing is illustrative of the inventive concept and is not to be construed as limited to the specific embodiments disclosed, and that modifications to the disclosed embodiments, as well as other embodiments can be made. | 65,068 |
11862098 | DETAILED DESCRIPTION OF THE EMBODIMENTS In order to make the objective, technical solutions and advantages of the embodiments of the present disclosure more clear, the technical solutions of the embodiments of the present disclosure will be described clearly and completely with reference to the drawings of the embodiments of the present disclosure. Obviously, the described embodiments are part of the embodiments of the present disclosure, but not all the embodiments. The embodiments in the present disclosure and features in the embodiments may be mutually combined in the case of no conflict. On the basis of the described embodiments of the present disclosure, all other embodiments obtained by a person of ordinary skill in the art without inventive efforts fall within the protection scope of the present disclosure. Unless otherwise defined, the technical or scientific terms used in the present disclosure shall have the usual meanings understood by a person of ordinary skill in the art to which the present disclosure belongs. The words "first", "second" and the like used in the present disclosure do not indicate any order, quantity or importance, but are only used to distinguish different components. The word "including" or "comprising" and the like, means that an element or item preceding the word includes an element or item listed after the word and the equivalent thereof, without excluding other elements or items. The word "connection" or "coupling" and the like is not restricted to physical or mechanical connection, but may include electrical connection, whether direct or indirect. It should be noted that the sizes and shapes of all graphs in the drawings do not reflect the true scale, and are only intended to illustrate the content of the present disclosure. The same or similar reference numbers represent the same or similar elements or elements with the same or similar functions throughout. Generally, in order to reduce power consumption of a display device, the display device can be driven at a relatively low refresh frequency (such as 1 Hz), and a signal output by a driving signal end may become abnormal due to long-time electric leakage accumulation of a transistor. Embodiments of the present disclosure provide some shift registers. As shown inFIG.1, the shift register may include an input control circuit10, a first transistor M1, a cascade output circuit20, and a driving output circuit30. The input control circuit10is respectively coupled with an input signal end IP, a first control clock signal end CK, a second control clock signal end CKB, a first reference signal end VREF1, a second reference signal end VREF2, a pull-down node PD and a first pull-up node PU_1. The input control circuit is configured to provide a signal of the input signal end IP to the first pull-up node PU_1in response to a signal of the first control clock signal end CK, and provide a signal of the second reference signal end VREF2to the first pull-up node PU_1in response to a signal of the pull-down node PD and a signal of the second control clock signal end CKB, and control the signal of the pull-down node PD according to the signal of the first pull-up node PU_1, the signal of the first control clock signal end CK and a signal of the first reference signal end VREF1. A grid electrode of the first transistor M1is configured to be coupled with the first reference signal end VREF1. A first electrode of the first transistor M1is configured to be coupled with the first pull-up node PU_1.
A second electrode of the first transistor M1is configured to be coupled with a second pull-up node PU_2. The cascade output circuit20is respectively coupled with the pull-down node PD, the second pull-up node PU_2, the second reference signal end VREF2, the second control clock signal end CKB and a cascade signal end GP. The cascade output circuit20is configured to provide the signal of the second control clock signal end CKB to the cascade signal end GP under control of a signal of the second pull-up node PU_2, and provide the signal of the second reference signal end VREF2to the cascade signal end GP under control of the signal of the pull-down node PD. The driving output circuit30is respectively coupled with the cascade signal end GP, a first noise reduction clock signal end CKO, a second noise reduction clock signal end CKBO, the first reference signal end VREF1, the second reference signal end VREF2and a driving signal end OP. The driving output circuit30is configured to provide the signal of the second reference signal end VREF2to the driving signal end OP in response to a signal of the cascade signal end GP, and provide the signal of the first reference signal end VREF1to the driving signal end OP in response to signals of the first noise reduction clock signal end CKO and the second noise reduction clock signal end CKBO. According to the shift register provided by the embodiments of the present disclosure, the input control circuit, the first transistor and the driving output circuit are mutually matched to work by loading corresponding signals to each signal end, so that the cascade signal end and the driving signal end can respectively output corresponding signals. Moreover, the shift register may further supplement charges at a denoising enhancement stage, so that an output denoising capability is ensured, stable output of the driving signal end is maintained, and therefore, the shift register in the present application may be advantageously applied to a display device with a relatively low refresh frequency. During implementations, as shown inFIG.1, the first pull-up node PU_1is coupled between a second electrode of an eighth transistor M8in the input control circuit10and a first electrode of the first transistor M1. The second pull-up node PU_2is coupled between a grid electrode of a sixth transistor M6in the cascade output circuit20and a second electrode of the first transistor M1. The pull-down node PD is coupled between a second electrode of a ninth transistor M9in the input control circuit10and a grid electrode of a seventh transistor M7in the cascade output circuit20. It should be noted that the first pull-up node PU_1, the second pull-up node PU_2and the pull-down node PD are virtual nodes in the shift register respectively, the three nodes are only used for describing a structure of the shift register and signal transmission conveniently, and the structure of the shift register and signal transmission may be determined according to a coupling mode among each transistor and capacitor in the shift register. During implementations, in the embodiments of the present disclosure, as shown inFIG.1, the driving output circuit30may include: a second transistor M2, a third transistor M3, a fourth transistor M4, a fifth transistor M5, a first capacitor C1and a second capacitor C2. A grid electrode of the second transistor M2is coupled with the cascade signal end GP. A first electrode of the second transistor M2is coupled with the second reference signal end VREF2.
A second electrode of the second transistor M2is coupled with a grid electrode of the fifth transistor M5. A grid electrode of the third transistor M3is coupled with the first noise reduction clock signal end CKO. A first electrode of the third transistor M3is coupled with the first reference signal end VREF1. A second electrode of the third transistor M3is coupled with the grid electrode of the fifth transistor M5. A grid electrode of the fourth transistor M4is coupled with the cascade signal end GP. A first electrode of the fourth transistor M4is coupled with the second reference signal end VREF2. A second electrode of the fourth transistor M4is coupled with the driving signal end OP. A first electrode of the fifth transistor M5is coupled with the first reference signal end VREF1. A second electrode of the fifth transistor M5is coupled with the driving signal end OP. A first electrode of the first capacitor C1is coupled with the second noise reduction clock signal end CKBO. A second electrode of the first capacitor C1is coupled with the grid electrode of the fifth transistor M5. A first electrode of the second capacitor C2is coupled with the grid electrode of the fifth transistor M5. A second electrode of the second capacitor C2is coupled with the driving signal end OP. During implementations, in embodiments of the present disclosure, as shown inFIG.1, the cascade output circuit20may include: the sixth transistor M6, the seventh transistor M7, a third capacitor C3and a fourth capacitor C4. The grid electrode of the sixth transistor M6is coupled with the second pull-up node PU_2. A first electrode of the sixth transistor M6is coupled with the second control clock signal end CKB. A second electrode of the sixth transistor M6is coupled with the cascade signal end GP. The grid electrode of the seventh transistor M7is coupled with the pull-down node PD. A first electrode of the seventh transistor M7is coupled with the second reference signal end VREF2. A second electrode of the seventh transistor M7is coupled with the cascade signal end GP. A first electrode of the third capacitor C3is coupled with the second pull-up node PU_2. A second electrode of the third capacitor C3is coupled with the cascade signal end GP. A first electrode of the fourth capacitor C4is coupled with the pull-down node PD. A second electrode of the fourth capacitor C4is coupled with the second reference signal end VREF2. During implementations, in embodiments of the present disclosure, as shown inFIG.1, the input control circuit10may include: the eighth transistor M8, the ninth transistor M9, a tenth transistor M10, an eleventh transistor M11and a twelfth transistor M12. A grid electrode of the eighth transistor M8is coupled with the first control clock signal end CK. A first electrode of the eighth transistor M8is coupled with the input signal end IP. The second electrode of the eighth transistor M8is coupled with the first pull-up node PU_1. A grid electrode of the ninth transistor M9is coupled with the first control clock signal end CK. A first electrode of the ninth transistor M9is coupled with the first reference signal end VREF1. The second electrode of the ninth transistor M9is coupled with the pull-down node PD. A grid electrode of the tenth transistor M10is coupled with the first pull-up node PU_1. A first electrode of the tenth transistor M10is coupled with the first control clock signal end CK. A second electrode of the tenth transistor M10is coupled with the pull-down node PD. 
A grid electrode of the eleventh transistor M11is coupled with the pull-down node PD. A first electrode of the eleventh transistor M11is coupled with the second reference signal end VREF2. A second electrode of the eleventh transistor M11is coupled with a first electrode of the twelfth transistor M12. A grid electrode of the twelfth transistor M12is coupled with the second control clock signal end CKB. A second electrode of the twelfth transistor M12is coupled with the first pull-up node PU_1. During implementations, the first electrodes of the above transistors may serve as their source electrodes, and the second electrodes of the transistors may serve as their drain electrodes according to a flowing direction of signals; or, the first electrodes serve as their drain electrodes, and the second electrodes serve as their source electrodes, which is not specifically distinguished here. It should be noted that the transistors mentioned in the above embodiments of the present disclosure may be TFTs, or metal oxide semiconductor (MOS) field effect transistors, which is not limited here. In order to simplify a preparation process, during implementations, in embodiments of the present disclosure, as shown inFIG.1andFIG.3, all the transistors may be P-type transistors. The P-type transistor is conducted when a voltage difference Vgsbetween the grid electrode and source electrode of the P-type transistor and a threshold voltage Vthof the P-type transistor meet a relation Vgs<Vth. For example, when the third transistor M3is a P-type transistor, the third transistor M3is conducted when a relation between a voltage difference Vgs3between the grid electrode and the source electrode of the third transistor M3and a threshold voltage Vth3of the third transistor M3meets a formula Vgs3<Vth3. In the embodiments of the present disclosure, it is illustrated only by taking an example that the transistors are the P-type transistors. As for a case that the transistors are N-type transistors, the design principle is the same as that of the present disclosure and also belongs to the protection range of the present disclosure. Moreover, the N-type transistor is conducted when a voltage difference Vgsbetween grid electrode and source electrode of the N-type transistor and a threshold voltage Vthof the N-type transistor meet a relation Vgs>Vth. For example, when the third transistor M3is an N-type transistor, the third transistor M3is conducted when a relation between a voltage difference Vgs3between the grid electrode and the source electrode of the third transistor M3and a threshold voltage Vth3of the third transistor M3meets a formula: Vgs3>Vth3. Further, during implementations, the P-type transistors are cut off under the action of a high-level signal and are conducted under the action of a low-level signal. The N-type transistors are conducted under the action of a high-level signal and are cut off under the action of a low-level signal. During implementations, a width-to-length ratio of a channel region of an active layer of at least one of the fourth transistor M4, the fifth transistor M5, the sixth transistor M6or the seventh transistor M7may be made to be greater than a width-to-length ratio of a channel region of an active layer of at least one of the first transistor M1, the second transistor M2, the third transistor M3, the eighth transistor M8, the ninth transistor M9, the tenth transistor M10, the eleventh transistor M11or the twelfth transistor M12. 
Exemplarily, the width-to-length ratio of the channel region of the active layer of the fourth transistor M4, the width-to-length ratio of the channel region of the active layer of the fifth transistor M5, the width-to-length ratio of the channel region of the active layer of the sixth transistor M6and the width-to-length ratio of the channel region of the active layer of the seventh transistor M7may be made to be greater than the width-to-length ratio of the channel region of the active layer of the first transistor M1, the width-to-length ratio of the channel region of the active layer of the second transistor M2, the width-to-length ratio of the channel region of the active layer of the third transistor M3, the width-to-length ratio of the channel region of the active layer of the eighth transistor M8, the width-to-length ratio of the channel region of the active layer of the ninth transistor M9, the width-to-length ratio of the channel region of the active layer of the tenth transistor M10, the width-to-length ratio of the channel region of the active layer of the eleventh transistor M11, and the width-to-length ratio of the channel region of the active layer of the twelfth transistor M12. During implementations, the width-to-length ratio of the channel region of the active layer of at least one of the fourth transistor M4, the fifth transistor M5, the sixth transistor M6or the seventh transistor M7may be made to range from 10 μm/2 μm to 100 μm/10 μm. Exemplarily, the width-to-length ratio of the channel region of the active layer of the fourth transistor M4, the width-to-length ratio of the channel region of the active layer of the fifth transistor M5, the width-to-length ratio of the channel region of the active layer of the sixth transistor M6, and the width-to-length ratio of the channel region of the active layer of the seventh transistor M7may be made to range from 10 μm/2 μm to 100 μm/10 μm. For example, the width-to-length ratio of the channel region of the active layer of the fourth transistor M4, the width-to-length ratio of the channel region of the active layer of the fifth transistor M5, the width-to-length ratio of the channel region of the active layer of the sixth transistor M6, and the width-to-length ratio of the channel region of the active layer of the seventh transistor M7may be set to 10 μm/2 μm respectively. The width-to-length ratio of the channel region of the active layer of the fourth transistor M4, the width-to-length ratio of the channel region of the active layer of the fifth transistor M5, the width-to-length ratio of the channel region of the active layer of the sixth transistor M6, and the width-to-length ratio of the channel region of the active layer of the seventh transistor M7may also be set to 100 μm/10 μm respectively. The width-to-length ratio of the channel region of the active layer of the fourth transistor M4, the width-to-length ratio of the channel region of the active layer of the fifth transistor M5, the width-to-length ratio of the channel region of the active layer of the sixth transistor M6, and the width-to-length ratio of the channel region of the active layer of the seventh transistor M7may also be set to 50 μm/5 μm respectively.
In practical application, the numerical values of the width-to-length ratios of the channel regions of the active layers of the fourth transistor M4, the fifth transistor M5, the sixth transistor M6 and the seventh transistor M7 may be designed according to the demands of the practical application, which is not limited here. During implementations, the width-to-length ratio of the channel region of the active layer of at least one of the first transistor M1, the second transistor M2, the third transistor M3, the eighth transistor M8, the ninth transistor M9, the tenth transistor M10, the eleventh transistor M11 or the twelfth transistor M12 may be made to range from 2 μm/2 μm to 20 μm/10 μm. Exemplarily, the width-to-length ratios of the channel regions of the active layers of the first transistor M1, the second transistor M2, the third transistor M3, the eighth transistor M8, the ninth transistor M9, the tenth transistor M10, the eleventh transistor M11 and the twelfth transistor M12 may each be made to range from 2 μm/2 μm to 20 μm/10 μm. For example, these width-to-length ratios may each be set to 2 μm/2 μm, or may each be set to 20 μm/10 μm.
These width-to-length ratios may also each be set to 10 μm/5 μm. In practical application, the numerical values of the width-to-length ratios of the channel regions of the active layers of the first transistor M1, the second transistor M2, the third transistor M3, the eighth transistor M8, the ninth transistor M9, the tenth transistor M10, the eleventh transistor M11 and the twelfth transistor M12 may be designed according to the demands of the practical application, which is not limited here.

During implementations, a capacitance value of at least one of the first capacitor C1, the second capacitor C2, the third capacitor C3 or the fourth capacitor C4 may be made to range from 10 fF to 1 pF. Exemplarily, the capacitance value of at least one of the first capacitor C1, the second capacitor C2, the third capacitor C3 or the fourth capacitor C4 may be set to 10 fF, 50 fF or 1 pF. In practical application, the capacitance values of the first capacitor C1, the second capacitor C2, the third capacitor C3 and the fourth capacitor C4 may be designed according to the demands of the practical application, which is not limited here.
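As a rough cross-check of the dimensioning guidance above, the sketch below verifies hypothetical design values against the stated ranges; all concrete numbers are assumptions taken from the examples in the text rather than required values.

```python
# Stated ranges: W/L of M4-M7 from 10 μm/2 μm to 100 μm/10 μm (ratios 5..10),
# W/L of the remaining transistors from 2 μm/2 μm to 20 μm/10 μm (ratios 1..2),
# and capacitances of C1-C4 from 10 fF to 1 pF.
OUTPUT_WL_RANGE = (10 / 2, 100 / 10)
CONTROL_WL_RANGE = (2 / 2, 20 / 10)
CAPACITANCE_RANGE_F = (10e-15, 1e-12)

def in_range(value, bounds):
    lo, hi = bounds
    return lo <= value <= hi

# Example design point: M4-M7 at 50 μm/5 μm, the others at 10 μm/5 μm,
# and all four capacitors C1-C4 at 50 fF.
assert in_range(50 / 5, OUTPUT_WL_RANGE)
assert in_range(10 / 5, CONTROL_WL_RANGE)
assert in_range(50e-15, CAPACITANCE_RANGE_F)

# The ranges themselves already enforce the earlier rule that the output
# transistors M4-M7 are made wider (per unit length) than the others.
assert min(OUTPUT_WL_RANGE) > max(CONTROL_WL_RANGE)
```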
The specific structure of the shift register provided by the embodiments of the present disclosure is only illustrated above. During implementations, the structure of each of the above circuits is not limited to the structure provided by the embodiments of the present disclosure, and may further be other structures known by those skilled in the art, which is not limited here.

Based on the same inventive concept, embodiments of the present disclosure further provide a driving method of a shift register. As shown in combination with FIG. 2, the driving method may include the following operations S210, S220 and S230. At a first refresh frequency, one display frame includes a data refresh stage T10 and a data retention stage T20. The data retention stage T20 includes a denoising retention stage and a denoising enhancement stage which are alternately arranged.

S210, at the data refresh stage T10, an input signal with a pulse level is loaded to an input signal end IP, a control clock pulse signal is loaded to a control clock signal end, a noise reduction clock pulse signal is loaded to a noise reduction clock signal end, and fixed voltage signals are loaded to a first reference signal end VREF1 and a second reference signal end VREF2, so that a cascade signal end GP of the shift register is controlled to output a cascade signal with a pulse level, and a driving signal end OP of the shift register is controlled to output a driving signal with a pulse level.

S220, at the denoising retention stage, fixed voltage signals are loaded to the input signal end IP, the control clock signal end, the noise reduction clock signal end, the first reference signal end VREF1 and the second reference signal end VREF2, so that the cascade signal end GP is controlled to output a fixed voltage signal, and the driving signal end OP is controlled to output a fixed voltage signal.

S230, at the denoising enhancement stage, fixed voltage signals are loaded to the input signal end IP, the control clock signal end, the first reference signal end VREF1 and the second reference signal end VREF2, and a clock pulse signal is loaded to the noise reduction clock signal end, so that the cascade signal end GP is controlled to output a fixed voltage signal, and the driving signal end OP is controlled to output a fixed voltage signal.

According to the driving method of the shift register provided by the embodiments of the present disclosure, at the data refresh stage T10, the input signal with the pulse level is loaded to the input signal end IP, the control clock pulse signal is loaded to the control clock signal end, the noise reduction clock pulse signal is loaded to the noise reduction clock signal end, and the fixed voltage signals are loaded to the first reference signal end VREF1 and the second reference signal end VREF2, so that the cascade signal end GP can be controlled to output the cascade signal with the pulse level, and the driving signal end OP can be controlled to output the driving signal with the pulse level. In this way, cascade output and driving output of the shift register can be realized, and therefore a display device may perform data refreshing. At the denoising retention stage, the fixed voltage signals are loaded to the input signal end IP, the control clock signal end, the noise reduction clock signal end, the first reference signal end VREF1 and the second reference signal end VREF2, so that the cascade signal end GP can be controlled to output the fixed voltage signal, and the driving signal end OP can be controlled to output the fixed voltage signal. In this way, output retention of the shift register can be realized.
At the denoising enhancement stage, the fixed voltage signals are loaded to the input signal end IP, the control clock signal end, the first reference signal end VREF1 and the second reference signal end VREF2, and the clock pulse signal is loaded to the noise reduction clock signal end, so that the cascade signal end GP can be controlled to output the fixed voltage signal, and the driving signal end OP can be controlled to output the fixed voltage signal. In this way, the shift register can supplement charges, the output denoising capability is ensured, and the output of the driving signal end OP is kept stable. Moreover, the display device generally may be in a static picture display state or a standby state for a long time, and in order to reduce power consumption, the display device may work at a low refresh frequency (such as 1 Hz or 30 Hz). According to the shift register in the embodiments of the present disclosure, the shift register can supplement the charges at the denoising enhancement stage, so that the output denoising capability is ensured, the output of the driving signal end OP is kept stable, and the shift register in the present application can be advantageously applied to a display device with a low refresh frequency.

During implementations, in embodiments of the present disclosure, the first level may be a low level and the second level may be a high level; alternatively, the first level may be a high level and the second level may be a low level. In practical application, this may be designed and determined according to practical application demands, which is not limited here.

During implementations, in embodiments of the present disclosure, the driving method further includes: at a second refresh frequency, one display frame includes a data refresh stage T10; at the data refresh stage T10, an input signal with a pulse level is loaded to the input signal end IP, a control clock pulse signal is loaded to the control clock signal end, a noise reduction clock pulse signal is loaded to the noise reduction clock signal end, and fixed voltage signals are loaded to the first reference signal end VREF1 and the second reference signal end VREF2, so that the cascade signal end GP of the shift register is controlled to output a cascade signal with a pulse level, and the driving signal end OP of the shift register is controlled to output a driving signal with a pulse level. The display device generally may be in the static picture display state or the standby state for a long time, and in order to reduce the power consumption, the display device may work at a relatively low refresh frequency (such as 1 Hz or 30 Hz). The display device may also display a video picture, and in order to improve a display effect of the video picture, the display device may work at a relatively high refresh frequency (such as 60 Hz or 120 Hz). During implementations, in embodiments of the present disclosure, the first refresh frequency may be the relatively low refresh frequency, for example 1 Hz or 30 Hz, and the second refresh frequency may be the relatively high refresh frequency, for example 60 Hz or 120 Hz.
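The three operations S210 to S230 and the frame composition at the two refresh frequencies can be summarized as a signal configuration table. The Python sketch below is descriptive only; the dictionary keys and the stage counts are illustrative assumptions, not an interface defined by the disclosure.

```python
# Signal configuration per driving stage, as described in S210-S230.
# "pulse" marks an end that receives a pulse/clock signal; "fixed" marks
# an end held at a fixed voltage.
STAGE_SIGNALS = {
    "data_refresh": {            # S210; also the only stage of a frame
        "IP": "pulse",           # at the second (higher) refresh frequency
        "control_clock": "pulse",
        "noise_reduction_clock": "pulse",
        "VREF1": "fixed", "VREF2": "fixed",
        "GP_out": "pulse", "OP_out": "pulse",
    },
    "denoising_retention": {     # S220
        "IP": "fixed",
        "control_clock": "fixed",
        "noise_reduction_clock": "fixed",
        "VREF1": "fixed", "VREF2": "fixed",
        "GP_out": "fixed", "OP_out": "fixed",
    },
    "denoising_enhancement": {   # S230: only the noise reduction clock pulses
        "IP": "fixed",
        "control_clock": "fixed",
        "noise_reduction_clock": "pulse",
        "VREF1": "fixed", "VREF2": "fixed",
        "GP_out": "fixed", "OP_out": "fixed",
    },
}

def frame_stages(low_refresh: bool, retention_pairs: int = 2) -> list:
    """At the first (low) refresh frequency a frame is the refresh stage
    followed by alternating retention and enhancement stages; at the
    second (high) refresh frequency it is the refresh stage alone."""
    stages = ["data_refresh"]
    if low_refresh:
        for _ in range(retention_pairs):
            stages += ["denoising_retention", "denoising_enhancement"]
    return stages

print(frame_stages(low_refresh=True))
```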
During implementations, in embodiments of the present disclosure, the control clock signal end includes a first control clock signal end CK and a second control clock signal end CKB, and the control clock pulse signal includes a first control clock pulse signal and a second control clock pulse signal. Cycles of the first control clock pulse signal and the second control clock pulse signal are the same, and a phase difference between the first control clock pulse signal and the second control clock pulse signal is ½ cycle. Moreover, at the data refresh stage T10, the loading the control clock pulse signal to the control clock signal end includes: the first control clock pulse signal is loaded to the first control clock signal end CK, and the second control clock pulse signal is loaded to the second control clock signal end CKB. Exemplarily, as shown in FIG. 1 and FIG. 3, ck represents a signal loaded to the first control clock signal end CK, and ckb represents a signal loaded to the second control clock signal end CKB. At the data refresh stage T10, the first control clock pulse signal loaded to the first control clock signal end CK is a high-low level switching clock pulse signal, and the second control clock pulse signal loaded to the second control clock signal end CKB is also a high-low level switching clock pulse signal. Moreover, the cycles of the first control clock pulse signal and the second control clock pulse signal are the same, and the phase difference is ½ cycle. For example, duty cycles of the first control clock pulse signal and the second control clock pulse signal are the same, and the duty cycle is greater than 50%. In practical application, the implementations of the first control clock pulse signal and the second control clock pulse signal can be designed and determined according to the practical application demands, which is not limited here.
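A minimal sketch of the two control clocks follows, assuming the duty cycle refers to the high-level portion (consistent with the P-type example, where the low level is the active level); the cycle length and duty value are arbitrary assumptions.

```python
def clock_sample(t: int, cycle: int = 10, duty_high: float = 0.6,
                 phase: int = 0) -> int:
    """Return 1 (high) or 0 (low) for an idle-high clock at tick t."""
    pos = (t + phase) % cycle
    return 1 if pos < duty_high * cycle else 0

CYCLE = 10
ck  = [clock_sample(t, CYCLE) for t in range(2 * CYCLE)]
ckb = [clock_sample(t, CYCLE, phase=CYCLE // 2) for t in range(2 * CYCLE)]

# With equal cycles, a half-cycle phase difference and a (high-level) duty
# cycle above 50%, ck and ckb are never low at the same time, which leaves
# intervals where both clocks are high.
assert all(a == 1 or b == 1 for a, b in zip(ck, ckb))
```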
During implementations, in embodiments of the present disclosure, in the denoising retention stage and the denoising enhancement stage, the loading the fixed voltage signal to the control clock signal end may include: a fixed voltage signal with a second level is loaded to the first control clock signal end CK, and a fixed voltage signal with the second level is loaded to the second control clock signal end CKB. Exemplarily, as shown in FIG. 1 and FIG. 3, when transistors in the shift register are P-type transistors, fixed voltage signals with a high level may be loaded to the first control clock signal end CK and the second control clock signal end CKB; when the transistors in the shift register are N-type transistors, fixed voltage signals with a low level may be loaded to the first control clock signal end CK and the second control clock signal end CKB.

During implementations, in embodiments of the present disclosure, in the denoising retention stage and the denoising enhancement stage, the loading the fixed voltage signal to the input signal end IP may include: a fixed voltage signal with the second level is loaded to the input signal end IP. Exemplarily, as shown in FIG. 1 and FIG. 3, ip represents a signal loaded to the input signal end IP. When the transistors in the shift register are the P-type transistors, a fixed voltage signal with a high level may be loaded to the input signal end IP; when the transistors in the shift register are the N-type transistors, a fixed voltage signal with a low level may be loaded to the input signal end IP.

During implementations, in embodiments of the present disclosure, in the denoising retention stage and the denoising enhancement stage, the controlling the cascade signal end GP to output the fixed voltage signal and the controlling the driving signal end OP to output the fixed voltage signal may include: the cascade signal end GP is controlled to output a fixed voltage signal with the second level, and the driving signal end OP is controlled to output a fixed voltage signal with the first level. Exemplarily, as shown in FIG. 1 and FIG. 3, gp represents a signal output by the cascade signal end GP, and op represents a signal output by the driving signal end OP. When the transistors in the shift register are the P-type transistors, the cascade signal end GP may be controlled to output a fixed voltage signal with a high level, and the driving signal end OP may be controlled to output a fixed voltage signal with a low level; when the transistors in the shift register are the N-type transistors, the cascade signal end GP may be controlled to output a fixed voltage signal with a low level, and the driving signal end OP may be controlled to output a fixed voltage signal with a high level.

During implementations, in embodiments of the present disclosure, the pulse level of the input signal may be made to be the first level. In this way, when the eighth transistor M8 is conducted, the pulse level of the input signal may be input to the first pull-up node PU_1, so that a level of the first pull-up node PU_1 is the first level, and thus the tenth transistor M10 may be controlled to be conducted through the level of the first pull-up node PU_1. Exemplarily, as shown in FIG. 1 and FIG. 3, when the transistors in the shift register are the P-type transistors, the pulse level of the input signal is a low level; when the transistors in the shift register are the N-type transistors, the pulse level of the input signal is a high level. During implementations, in embodiments of the present disclosure, the pulse level of the cascade signal may be made to be the first level. In this way, the fourth transistor M4 may be conducted under the control of the pulse level of the cascade signal so as to provide the signal of the second reference signal end VREF2 to the driving signal end OP. Exemplarily, as shown in FIG. 1 and FIG. 3, when the transistors in the shift register are the P-type transistors, the pulse level of the cascade signal is a low level; when the transistors in the shift register are the N-type transistors, the pulse level of the cascade signal is a high level. During implementations, in embodiments of the present disclosure, the fixed voltage signal of the first reference signal end VREF1 may be made to be the first level, the fixed voltage signal of the second reference signal end VREF2 may be made to be the second level, and the pulse level of the driving signal may be made to be the second level. Exemplarily, as shown in FIG. 1 and FIG. 3, when the transistors in the shift register are the P-type transistors, the first level is a low level and the second level is a high level; when the transistors in the shift register are the N-type transistors, the first level is a high level and the second level is a low level.
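The level conventions collected above can be captured in a small lookup, shown below as an illustrative sketch; the dictionary layout is an assumption, not an interface from the disclosure.

```python
# First/second level per transistor type: for P-type devices the first
# level is low (0) and the second level is high (1); for N-type devices
# the mapping is inverted.
LEVELS = {
    "P": {"first": 0, "second": 1},
    "N": {"first": 1, "second": 0},
}

def pulse_levels(t_type: str) -> dict:
    """Signal levels implied by the stated convention for a given type."""
    first = LEVELS[t_type]["first"]
    second = LEVELS[t_type]["second"]
    return {
        "input_pulse": first,     # pulse level of the input signal
        "cascade_pulse": first,   # pulse level of the cascade signal
        "driving_pulse": second,  # pulse level of the driving signal
        "VREF1": first,
        "VREF2": second,
    }

assert pulse_levels("P")["driving_pulse"] == 1   # high-level driving pulse
assert pulse_levels("N")["cascade_pulse"] == 1   # high-level cascade pulse
```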
During implementations, in embodiments of the present disclosure, the noise reduction clock signal end may include a first noise reduction clock signal end CKO and a second noise reduction clock signal end CKBO, and the noise reduction clock pulse signal includes a first noise reduction clock pulse signal and a second noise reduction clock pulse signal. Cycles of the first noise reduction clock pulse signal and the second noise reduction clock pulse signal are the same, and a phase difference between the first noise reduction clock pulse signal and the second noise reduction clock pulse signal is ½ cycle. Moreover, at the data refresh stage T10, the loading the noise reduction clock pulse signal to the noise reduction clock signal end may include: the first noise reduction clock pulse signal is loaded to the first noise reduction clock signal end CKO, and the second noise reduction clock pulse signal is loaded to the second noise reduction clock signal end CKBO. Exemplarily, as shown in FIG. 1 and FIG. 3, cko represents a signal loaded to the first noise reduction clock signal end CKO, and ckbo represents a signal loaded to the second noise reduction clock signal end CKBO. At the data refresh stage T10, the first noise reduction clock pulse signal loaded to the first noise reduction clock signal end CKO is a high-low level switching clock pulse signal, and the second noise reduction clock pulse signal loaded to the second noise reduction clock signal end CKBO is also a high-low level switching clock pulse signal. Moreover, the cycles of the first noise reduction clock pulse signal and the second noise reduction clock pulse signal are the same, and the phase difference is ½ cycle. For example, duty cycles of the first noise reduction clock pulse signal and the second noise reduction clock pulse signal are the same, and the duty cycle is greater than 50%. In practical application, the implementations of the first noise reduction clock pulse signal and the second noise reduction clock pulse signal can be designed and determined according to the practical application demands, which is not limited here. In some examples, as shown in FIG. 3, the cycle of the first noise reduction clock pulse signal and the cycle of the first control clock pulse signal may be made to be the same. Further, the duty cycle of the first noise reduction clock pulse signal and the duty cycle of the first control clock pulse signal may be made to be the same. Exemplarily, a falling edge of the first noise reduction clock pulse signal is aligned with a rising edge of the second control clock pulse signal, and a falling edge of the second noise reduction clock pulse signal is aligned with a rising edge of the first control clock pulse signal. In practical application, a relationship among the first noise reduction clock pulse signal, the second noise reduction clock pulse signal, the first control clock pulse signal and the second control clock pulse signal can be designed and determined according to practical demands, which is not limited here.
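The edge alignment in this example can be stated as a simple check. The sketch below uses hand-written toy samples with a 50% duty cycle purely to express the rule (falling edges of cko align with rising edges of ckb, and falling edges of ckbo with rising edges of ck); real waveforms would differ in duty cycle and include buffer intervals.

```python
def edges(wave, kind):
    """Indices where a sampled wave has a rising or falling transition."""
    return {i for i, (a, b) in enumerate(zip(wave, wave[1:]), start=1)
            if (kind == "rise" and b > a) or (kind == "fall" and b < a)}

ckb  = [0, 0, 1, 1, 0, 0, 1, 1]
ck   = [1, 1, 0, 0, 1, 1, 0, 0]   # half a cycle out of phase with ckb
cko  = [1, 1, 0, 0, 1, 1, 0, 0]   # falls exactly where ckb rises
ckbo = [0, 0, 1, 1, 0, 0, 1, 1]   # falls exactly where ck rises

# Every falling edge of cko coincides with a rising edge of ckb, and every
# falling edge of ckbo with a rising edge of ck, matching the example.
assert edges(cko, "fall") <= edges(ckb, "rise")
assert edges(ckbo, "fall") <= edges(ck, "rise")
```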
During implementations, in embodiments of the present disclosure, at the denoising retention stage, the loading the fixed voltage signal to the noise reduction clock signal end may include: a fixed voltage signal with the first level is loaded to the first noise reduction clock signal end CKO, and a fixed voltage signal with the first level is loaded to the second noise reduction clock signal end CKBO. Exemplarily, as shown in FIG. 1 and FIG. 3, when the transistors in the shift register are the P-type transistors, at the denoising retention stage, fixed voltage signals with a low level are loaded to the first noise reduction clock signal end CKO and the second noise reduction clock signal end CKBO; when the transistors in the shift register are the N-type transistors, at the denoising retention stage, fixed voltage signals with a high level are loaded to the first noise reduction clock signal end CKO and the second noise reduction clock signal end CKBO.

During implementations, in embodiments of the present disclosure, at the denoising enhancement stage, the loading the clock pulse signal to the noise reduction clock signal end includes: the first noise reduction clock pulse signal is loaded to the first noise reduction clock signal end CKO, and the second noise reduction clock pulse signal is loaded to the second noise reduction clock signal end CKBO. A first level of the first noise reduction clock pulse signal in the denoising enhancement stage is adjacent to the denoising retention stage appearing before the denoising enhancement stage, and a second level of the second noise reduction clock pulse signal in the denoising enhancement stage is adjacent to the denoising retention stage appearing before the denoising enhancement stage. Exemplarily, as shown in FIG. 1 and FIG. 3, at the denoising enhancement stage, the first noise reduction clock pulse signal loaded to the first noise reduction clock signal end CKO is a high-low level switching clock pulse signal, and the second noise reduction clock pulse signal loaded to the second noise reduction clock signal end CKBO is also a high-low level switching clock pulse signal. Moreover, when the transistors in the shift register are the P-type transistors, a low level of the first noise reduction clock pulse signal in the denoising enhancement stage is adjacent to the denoising retention stage appearing before the denoising enhancement stage, and a high level of the second noise reduction clock pulse signal in the denoising enhancement stage is adjacent to the denoising retention stage appearing before the denoising enhancement stage. When the transistors in the shift register are the N-type transistors, a high level of the first noise reduction clock pulse signal in the denoising enhancement stage is adjacent to the denoising retention stage appearing before the denoising enhancement stage, and a low level of the second noise reduction clock pulse signal in the denoising enhancement stage is adjacent to the denoising retention stage appearing before the denoising enhancement stage.

During implementations, in embodiments of the present disclosure, in the denoising enhancement stage, the quantity of clock cycles of the first noise reduction clock pulse signal and the quantity of clock cycles of the second noise reduction clock pulse signal are the same, and the quantity of the clock cycles is at least one. Exemplarily, as shown in FIG. 3, in the denoising enhancement stage, the quantity of the clock cycles of the first noise reduction clock pulse signal and the quantity of the clock cycles of the second noise reduction clock pulse signal are both one.
The quantity of the clock cycles of the first noise reduction clock pulse signal and the quantity of the clock cycles of the second noise reduction clock pulse signal may also be made to be both two, three, four or more, which is not limited here. During implementations, in embodiments of the disclosure, as shown in FIG. 3, in the same denoising enhancement stage, the falling edge of the first noise reduction clock pulse signal and the falling edge of the second noise reduction clock pulse signal are respectively aligned with a starting moment of a denoising retention stage appearing after the denoising enhancement stage, and a rising edge of the second noise reduction clock pulse signal is aligned with an end moment of the denoising retention stage appearing before the denoising enhancement stage. In the data refresh stage T10 and the denoising enhancement stage, maintaining durations of the second level of the second noise reduction clock pulse signal are the same. For example, in the data refresh stage T10 and the denoising enhancement stage, maintaining durations of the high level of the second noise reduction clock pulse signal are the same, and maintaining durations of the low level of the second noise reduction clock pulse signal are also the same.

A working process of the above shift register provided by embodiments of the present disclosure at the first refresh frequency is described below by taking the shift register shown in FIG. 1 as an example in combination with a signal sequence diagram shown in FIG. 3. In the following description, 1 represents a high-level signal and 0 represents a low-level signal; it should be noted that 1 and 0 are logic levels and are only used for better explaining the working process of the embodiments of the present disclosure, instead of the voltages applied to the grid electrode of each transistor during implementations. For example, as shown in FIG. 3, at the first refresh frequency, one display frame may include the data refresh stage T10 and the data retention stage T20. The data retention stage T20 includes the denoising retention stage T21-1 and the denoising enhancement stage T22-1 which are alternately arranged. It should be noted that the signal sequence diagram shown in FIG. 3 only shows the working process of one shift register in one current display frame; the working processes of the shift register in other display frames are basically the same as the working process in the current display frame, which is not repeated here. The data refresh stage T10 includes a T11 stage, a T12 stage, a T13 stage and a T14 stage.

For example, in the T11 stage, ip=0, ckb=1, ck=0, cko=0, and ckbo=1. Because ckb=1, the twelfth transistor M12 is cut off. Because ck=0, the ninth transistor M9 is conducted to provide a low-level signal of the first reference signal end VREF1 to the pull-down node PD, and a signal of the pull-down node PD is made to be a low-level signal, so that the seventh transistor M7 is controlled to be conducted. The conducted seventh transistor M7 provides a high-level signal of the second reference signal end VREF2 to the cascade signal end GP, so that the cascade signal end GP outputs a high-level signal.
Because ck=0, the eighth transistor M8 is conducted to provide a low-level signal of the input signal end IP to the first pull-up node PU_1, and a signal of the first pull-up node PU_1 is made to be a low-level signal, thus the tenth transistor M10 is controlled to be conducted to provide a low-level signal of the first control clock signal end CK to the pull-down node PD, and the signal of the pull-down node PD is further made to be a low-level signal. As the first transistor M1 meets the formula Vgs1 < Vth1, the first transistor M1 is conducted. The second pull-up node PU_2 and the first pull-up node PU_1 are connected through the conducted first transistor M1, so that a signal of the second pull-up node PU_2 may be made to be a low-level signal in time to control the sixth transistor M6 to be conducted to provide a high-level signal of the second control clock signal end CKB to the cascade signal end GP, and the cascade signal end GP outputs a high-level cascade signal. As the cascade signal end GP outputs the high-level signal, the second transistor M2 and the fourth transistor M4 may be controlled to be cut off. Because cko=0, the third transistor M3 is conducted to provide a low-level signal of the first reference signal end VREF1 to the grid electrode of the fifth transistor M5, so that the fifth transistor M5 is controlled to be conducted to provide the low-level signal of the first reference signal end VREF1 to the driving signal end OP, and the driving signal end OP outputs a low-level driving signal.

In the T12 stage, ip=1, ckb=0, ck=1, cko=1, and ckbo=0. Because ck=1, the ninth transistor M9 and the eighth transistor M8 are both cut off. The second pull-up node PU_2 is kept at the low-level signal under the action of the third capacitor C3 so as to control the sixth transistor M6 to be conducted to provide a low-level signal of the second control clock signal end CKB to the cascade signal end GP, and the cascade signal end GP outputs a low-level cascade signal. Due to the action of the third capacitor C3, the level of the second pull-up node PU_2 is further pulled down, so that the sixth transistor M6 is controlled to be conducted as fully as possible to provide the low-level signal of the second control clock signal end CKB to the cascade signal end GP, and the cascade signal end GP outputs the low-level cascade signal. Moreover, in this stage, the electrode of the first transistor M1 coupled with the first pull-up node PU_1 serves as the source electrode of the first transistor M1, so that the first transistor M1 cannot meet the formula Vgs1 < Vth1 and is cut off; the level of the second pull-up node PU_2 can thus be kept stable, and a situation in which the level of the second pull-up node PU_2 rises due to electric leakage and the output of the cascade signal end GP consequently becomes unstable is avoided. Moreover, the tenth transistor M10 provides a high-level signal of the first control clock signal end CK to the pull-down node PD under the control of the signal of the first pull-up node PU_1 so as to control the seventh transistor M7 to be cut off, and adverse effects on the signal output by the cascade signal end GP are avoided. Because cko=1, the third transistor M3 is cut off. As the cascade signal end GP outputs the low-level signal, the second transistor M2 and the fourth transistor M4 may be controlled to be conducted.
The conducted second transistor M2 may provide the high-level signal of the second reference signal end VREF2 to the grid electrode of the fifth transistor M5 so as to control the fifth transistor M5 to be cut off. The conducted fourth transistor M4 may provide the high-level signal of the second reference signal end VREF2 to the driving signal end OP, so that the driving signal end OP outputs a high-level driving signal.

After the T12 stage and before the T13 stage, because ckb=1, the twelfth transistor M12 is cut off. Because ck=1, the ninth transistor M9 and the eighth transistor M8 are both cut off. The second pull-up node PU_2 is kept at the low-level signal under the action of the third capacitor C3 so as to control the sixth transistor M6 to be conducted to provide the high-level signal of the second control clock signal end CKB to the cascade signal end GP, and the cascade signal end GP outputs the high-level cascade signal so as to control the second transistor M2 and the fourth transistor M4 to be both cut off. As the signal cko of the first noise reduction clock signal end CKO is converted from a high level to a low level, the third transistor M3 is conducted to provide the low-level signal of the first reference signal end VREF1 to the grid electrode of the fifth transistor M5, so that the fifth transistor M5 is controlled to be conducted to provide the low-level signal of the first reference signal end VREF1 to the driving signal end OP, and the driving signal end OP outputs the low-level driving signal.

In the T13 stage, ip=1, ckb=1, ck=0, cko=0, and ckbo=1. Because ckb=1, the twelfth transistor M12 is cut off. Because ck=0, the eighth transistor M8 and the ninth transistor M9 are both conducted. The conducted eighth transistor M8 provides the high-level signal of the input signal end IP to the first pull-up node PU_1, so that the signal of the first pull-up node PU_1 is a high-level signal, and the tenth transistor M10 is controlled to be cut off. Because the first reference signal end VREF1 is the low-level signal, the first transistor M1 is conducted to provide the high-level signal of the first pull-up node PU_1 to the second pull-up node PU_2, so that the sixth transistor M6 is controlled to be cut off. The conducted ninth transistor M9 provides the low-level signal of the first reference signal end VREF1 to the pull-down node PD, so that the signal of the pull-down node PD is a low-level signal to control the seventh transistor M7 to be conducted. The conducted seventh transistor M7 provides the high-level signal of the second reference signal end VREF2 to the cascade signal end GP, so that the cascade signal end GP outputs the high-level signal to control the second transistor M2 and the fourth transistor M4 to be both cut off. Because cko=0, the third transistor M3 is conducted to provide the low-level signal of the first reference signal end VREF1 to the grid electrode of the fifth transistor M5; the fifth transistor M5 is then controlled to be conducted to provide the low-level signal of the first reference signal end VREF1 to the driving signal end OP, and the driving signal end OP outputs the low-level driving signal. Moreover, the voltage differences between the two ends of the first capacitor C1 and between the two ends of the second capacitor C2 are kept stable.

In the T14 stage, ip=1, ckb=0, ck=1, cko=1, and ckbo=0. Because ck=1, the eighth transistor M8 and the ninth transistor M9 are both cut off, and due to the action of the fourth capacitor C4, the signal of the pull-down node PD may be kept as the low-level signal.
The seventh transistor M7 is controlled to be conducted to provide the high-level signal of the second reference signal end VREF2 to the cascade signal end GP, thus the cascade signal end GP outputs the high-level signal, and the second transistor M2 and the fourth transistor M4 are controlled to be both cut off. Because the signal ckbo is switched from the high level to the low level, the grid voltage of the fifth transistor M5 is further pulled down due to the coupling effect of the first capacitor C1, so that the fifth transistor M5 is kept conducted to provide the low-level signal of the first reference signal end VREF1 to the driving signal end OP, and the driving signal end OP outputs the low-level driving signal. Moreover, the eleventh transistor M11 and the twelfth transistor M12 are both conducted, so that the first pull-up node PU_1 may be made to be the high-level signal, and the second pull-up node PU_2 may be made to be the high-level signal, thereby controlling the sixth transistor M6 to be cut off. After the T14 stage, the working processes of the T13 stage and the T14 stage are repeatedly executed until the denoising retention stage T21-1 is entered.

At the denoising retention stage T21-1, ip=1, ckb=1, ck=1, cko=0, and ckbo=0. Because ck=1, the eighth transistor M8 and the ninth transistor M9 are both cut off, and due to the action of the fourth capacitor C4, the signal of the pull-down node PD may be kept as the low-level signal. The seventh transistor M7 is controlled to be conducted to provide the high-level signal of the second reference signal end VREF2 to the cascade signal end GP, the cascade signal end GP outputs the high-level signal, and the second transistor M2 and the fourth transistor M4 are controlled to be both cut off. Because cko=0, the third transistor M3 is conducted to provide the low-level signal of the first reference signal end VREF1 to the grid electrode of the fifth transistor M5; the fifth transistor M5 is then controlled to be conducted to provide the low-level signal of the first reference signal end VREF1 to the driving signal end OP, and the driving signal end OP outputs the low-level driving signal. However, in practical application, because cko=0 throughout the denoising retention stage T21-1, the threshold voltage of the third transistor M3 is made to drift. The gate-source voltage difference of the third transistor M3 then cannot be smaller than the threshold voltage of the third transistor M3, because the first reference signal end VREF1 is also at the low level and the first electrode of the third transistor M3 serves as its source electrode. In this way, the third transistor M3 is made to be cut off, so that the grid voltage of the fifth transistor M5 is possibly increased, the opening degree of the fifth transistor M5 is reduced, and pull-up noise occurs in the low level output by the driving signal end OP. Based on this, in the denoising enhancement stage T22-1, the first noise reduction clock pulse signal is loaded to the first noise reduction clock signal end CKO, and the second noise reduction clock pulse signal is loaded to the second noise reduction clock signal end CKBO, so that the third transistor M3 may be normally started, the grid electrode of the fifth transistor M5 is discharged, the opening degree of the fifth transistor M5 is improved, and the output stability of the driving signal end OP is improved. For example, in the denoising enhancement stage T22-1, firstly, ip=1, ckb=1, ck=1, cko=0, and ckbo=1.
Because ck=1, the eighth transistor M8 and the ninth transistor M9 are both cut off, and due to the action of the fourth capacitor C4, the signal of the pull-down node PD may be kept as the low-level signal. The seventh transistor M7 is controlled to be conducted to provide the high-level signal of the second reference signal end VREF2 to the cascade signal end GP, thus the cascade signal end GP outputs the high-level signal, and the second transistor M2 and the fourth transistor M4 are controlled to be both cut off. Because the signal ckbo is switched from the low level to the high level, the grid voltage of the fifth transistor M5 is pulled up due to the coupling effect of the first capacitor C1. At this moment, the second electrode of the third transistor M3 serves as its source electrode. Because cko=0, the gate-source voltage difference of the third transistor M3 is smaller than the threshold voltage of the third transistor M3, and in this way, the third transistor M3 may be made to start. Because the third transistor M3 is conducted to provide the low-level signal of the first reference signal end VREF1 to the grid electrode of the fifth transistor M5, the grid electrode of the fifth transistor M5 may be discharged, the first electrode of the first capacitor C1 is made to be at a high level, and the second electrode of the first capacitor C1 is made to be at a low level. Moreover, the fifth transistor M5 is also controlled to be conducted to provide the low-level signal of the first reference signal end VREF1 to the driving signal end OP, so that the driving signal end OP outputs the low-level driving signal.

Then, ip=1, ckb=1, ck=1, cko=1, and ckbo=0. Because ck=1, the eighth transistor M8 and the ninth transistor M9 are both cut off, and due to the action of the fourth capacitor C4, the signal of the pull-down node PD may be kept as the low-level signal. The seventh transistor M7 is controlled to be conducted to provide the high-level signal of the second reference signal end VREF2 to the cascade signal end GP, so that the cascade signal end GP outputs the high-level signal, and the second transistor M2 and the fourth transistor M4 are controlled to be both cut off. Because cko=1, the third transistor M3 is cut off. Because the signal ckbo is switched from the high level to the low level, due to the coupling effect of the first capacitor C1, the grid voltage of the fifth transistor M5 is further pulled down, so that the fifth transistor M5 may be controlled to be conducted as completely as possible to provide the low-level signal of the first reference signal end VREF1 to the driving signal end OP without voltage loss, and the driving signal end OP outputs the low-level driving signal.

Then, ip=1, ckb=1, ck=1, cko=0, and ckbo=1. The above working process when ip=1, ckb=1, ck=1, cko=0 and ckbo=1 is repeated again, so that the grid electrode of the fifth transistor M5 is discharged, the first electrode of the first capacitor C1 is made to be at the high level, and the second electrode of the first capacitor C1 is made to be at the low level. After the denoising enhancement stage T22-1, the working processes of the denoising retention stage T21-1 and the denoising enhancement stage T22-1 are repeatedly executed until the pulse level of the input signal is loaded to the input signal end IP again.
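The stage-by-stage walk-through above can be cross-checked with a coarse logic-level model. The sketch below is a simplification under explicit assumptions: each P-type transistor simply passes its source signal when its grid is at 0, undriven nodes hold their previous level (standing in for the capacitors C1, C3 and C4), and threshold drift and capacitive coupling effects are ignored. Node and signal names follow the text; the model is illustrative, not circuit-accurate.

```python
def step(sig, st):
    """Evaluate one stage to a fixed point. 0 = low, 1 = high."""
    ip, ck, ckb, cko, ckbo = (sig[k] for k in ("ip", "ck", "ckb", "cko", "ckbo"))
    for _ in range(4):                  # settle within the stage
        pu1, pu2, pd = st["pu1"], st["pu2"], st["pd"]
        n1, gp, op = st["n1"], st["gp"], st["op"]
        if ck == 0:
            pd = 0                      # M9 passes VREF1 (= 0)
        if pu1 == 0:
            pd = ck                     # M10 passes CK
        if ck == 0:
            pu1 = ip                    # M8 passes IP
        if pd == 0 and ckb == 0:
            pu1 = 1                     # M11 + M12 pass VREF2 (= 1)
        pu2 = pu1                       # M1 (grid at VREF1, simplified)
        if pd == 0:
            gp = 1                      # M7 passes VREF2
        elif pu2 == 0:
            gp = ckb                    # M6 passes CKB
        if gp == 0:
            n1 = 1                      # M2 charges the grid of M5
        if cko == 0:
            n1 = 0                      # M3 discharges the grid of M5
        if gp == 0:
            op = 1                      # M4 passes VREF2
        elif n1 == 0:
            op = 0                      # M5 passes VREF1
        st = dict(pu1=pu1, pu2=pu2, pd=pd, n1=n1, gp=gp, op=op)
    return st

# Stage inputs of the data refresh stage T10 with the narrated (gp, op)
# outputs: the low cascade pulse and the high driving pulse both fall in T12.
stages = [
    ("T11", dict(ip=0, ck=0, ckb=1, cko=0, ckbo=1), (1, 0)),
    ("T12", dict(ip=1, ck=1, ckb=0, cko=1, ckbo=0), (0, 1)),
    ("T13", dict(ip=1, ck=0, ckb=1, cko=0, ckbo=1), (1, 0)),
    ("T14", dict(ip=1, ck=1, ckb=0, cko=1, ckbo=0), (1, 0)),
]

state = dict(pu1=1, pu2=1, pd=1, n1=1, gp=1, op=0)
for name, sig, expected in stages:
    state = step(sig, state)
    assert (state["gp"], state["op"]) == expected, name
print("T11-T14 match the narrated outputs")
```

The hold semantics of undriven nodes is what reproduces the T14 behavior: PD stays low through C4-like holding, M11 and M12 recharge PU_1, and the grid of M5 keeps its low level so OP remains low.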
It should be noted that in the data refresh stage T10, there are buffer stages (for example, stages when the signal ckb, the signal ck, the signal cko and the signal ckbo are all at the high level) between the T11 stage and the T12 stage, between the T12 stage and the T13 stage, and between the T13 stage and the T14 stage respectively. In the buffer stages, the characteristics of the transistors in the shift register may be stabilized, so that the shift register enters the next working stage after being stabilized. Moreover, due to the existence of the buffer stages, the rising edges and falling edges of the signal ckb and the signal ck do not completely correspond to each other, and the rising edges and falling edges of the signal ckbo and the signal cko do not completely correspond to each other either. In this way, the falling edge of the signal ckb may be prevented from being aligned with the rising edge of the signal ck, the rising edge of the signal ckb may be prevented from being aligned with the falling edge of the signal ck, the falling edge of the signal cko may be prevented from being aligned with the rising edge of the signal ckbo, and the rising edge of the signal cko may be prevented from being aligned with the falling edge of the signal ckbo, so that the stability of the shift register can be improved.

It should be noted that in the data retention stage T20 and in the denoising enhancement stage T22-1, the signal cko and the signal ckbo also have buffer stages (namely, stages when the signal cko and the signal ckbo are both at the high level), and in the buffer stages, the characteristics of the transistors in the shift register can be stabilized so that the shift register can enter the next working stage after being stabilized. Moreover, due to the existence of the buffer stages, the rising edges and the falling edges of the signal ckbo and the signal cko do not completely correspond to each other either. In this way, the falling edge of the signal cko may be prevented from being aligned with the rising edge of the signal ckbo, and the rising edge of the signal cko may be prevented from being aligned with the falling edge of the signal ckbo, so that the stability of the shift register can be improved. Because the signal cko and the signal ckbo have the buffer stages, the signal cko has a wave crest with a small duration at the end of the denoising enhancement stage T22-1. It should be noted that in practical application, a voltage value of each above signal can be designed and determined according to a practical application environment, which is not limited here.

Moreover, analogue simulation is performed on the signal output by the driving signal end OP of the shift register shown in FIG. 1 according to the signal sequence diagram shown in FIG. 3, and an analogue simulation diagram is shown in FIG. 4. The abscissa represents time, and the ordinate represents voltage. S1 represents a signal obtained by performing analogue simulation on the driving signal end OP of the shift register shown in FIG. 1 by adopting the signal sequence diagram shown in FIG. 3. S0 represents a signal obtained by performing analogue simulation on the driving signal end OP of the shift register when there is only the denoising retention stage in the data retention stage T20.
According to the embodiments of the present disclosure, in combination with FIG. 3, the denoising enhancement stage is set, so that the driving signal end OP can stably output the signal, and the problem of instability caused by electric leakage can be alleviated. Moreover, the shift register shown in FIG. 1 is further driven to work according to the signal sequence diagram shown in FIG. 3, and it is detected that the power consumption of the shift register is 0.5 mW when the shift register works at the data retention stage T20. Therefore it can be known that even if a clock pulse is inserted in the data retention stage T20, the power consumption of the shift register may still be within an acceptable range.

A working process of the above shift register provided by embodiments of the present disclosure at the second refresh frequency is described by taking the shift register shown in FIG. 1 as an example in combination with a signal sequence diagram shown in FIG. 5. In the following description, 1 represents a high-level signal and 0 represents a low-level signal; it should be noted that 1 and 0 are logic levels and are only used for better explaining the working process of the embodiments of the present disclosure, instead of the voltages applied to the grid electrode of each transistor during implementations. For example, as shown in FIG. 5, at the second refresh frequency, one display frame may include a data refresh stage T10. It should be noted that the signal sequence diagram shown in FIG. 5 only shows the working process of one shift register in one current display frame; the working processes of the shift register in other display frames are basically the same as the working process in the current display frame, which is not repeated here. The data refresh stage T10 includes a T11 stage, a T12 stage, a T13 stage and a T14 stage. Moreover, the working process of the above shift register provided by the embodiments of the present disclosure under the signal sequence diagram shown in FIG. 5 is basically the same as the working process of the shift register in the data refresh stage T10 under the signal sequence diagram shown in FIG. 3, which is not repeated here.

Embodiments of the present disclosure also provide some other driving methods which are modifications of the implementations in the above embodiments. Only the differences from the above embodiments are illustrated below, and the same points are not repeated here. During implementations, in embodiments of the present disclosure, in the denoising enhancement stage, the quantity of the clock cycles of the first noise reduction clock pulse signal is an even number. Exemplarily, as shown in FIG. 6, the quantity of the clock cycles of the first noise reduction clock pulse signal may be made to be two. The quantity of the clock cycles of the first noise reduction clock pulse signal may also be four, six or more, which is not limited here. During implementations, in embodiments of the present disclosure, in the same denoising enhancement stage, a falling edge of the first noise reduction clock pulse signal is aligned with a starting moment of a denoising retention stage appearing after the denoising enhancement stage, and in the first noise reduction clock pulse signal, a signal between a rising edge close to the denoising retention stage appearing before the denoising enhancement stage and the denoising retention stage appearing before the denoising enhancement stage is the first level.
Exemplarily, as shown in FIG. 6, in the same denoising enhancement stage T22-1, the falling edge of the first noise reduction clock pulse signal of the signal cko may be made to be aligned with the starting moment of the denoising retention stage T21-2 appearing after the denoising enhancement stage T22-1, and in the first noise reduction clock pulse signal of the signal cko, the signal between the rising edge close to the denoising retention stage T21-1 appearing before the denoising enhancement stage T22-1 and the denoising retention stage T21-1 appearing before the denoising enhancement stage T22-1 is a low level. During implementations, in embodiments of the present disclosure, in the same denoising enhancement stage, a rising edge of the second noise reduction clock pulse signal is aligned with an end moment of the denoising retention stage appearing before the denoising enhancement stage, and in the second noise reduction clock pulse signal, a signal between a falling edge close to the denoising retention stage appearing after the denoising enhancement stage and the denoising retention stage appearing after the denoising enhancement stage is the first level. Exemplarily, as shown in FIG. 6, in the same denoising enhancement stage, the rising edge of the second noise reduction clock pulse signal of the signal ckbo may be made to be aligned with the end moment of the denoising retention stage T21-1 appearing before the denoising enhancement stage T22-1; and in the second noise reduction clock pulse signal of the signal ckbo, the signal between the falling edge close to the denoising retention stage T21-2 appearing after the denoising enhancement stage T22-1 and the denoising retention stage T21-2 appearing after the denoising enhancement stage T22-1 is a low level.

The working process of the above shift register provided by the embodiments of the present disclosure at the first refresh frequency is described below by taking the shift register shown in FIG. 1 as an example in combination with a signal sequence diagram shown in FIG. 6. In the following description, 1 represents a high-level signal and 0 represents a low-level signal; it should be noted that 1 and 0 are logic levels and are only used for better explaining the specific working process of the embodiments of the present disclosure, instead of the voltages applied to the grid electrode of each transistor during specific implementations. For example, as shown in FIG. 6, at the first refresh frequency, one display frame may include the data refresh stage T10 and the data retention stage T20. The data retention stage T20 includes a denoising retention stage and a denoising enhancement stage which are alternately arranged. It should be noted that the signal sequence diagram shown in FIG. 6 only shows the working process of one shift register in one current display frame; the working processes of the shift register in other display frames are basically the same as the working process in the current display frame, which is not repeated here. The working processes at the data refresh stage T10 and the denoising retention stage T21-1 may refer to the above working process, which is not repeated here. At the denoising enhancement stage T22-1, firstly ip=1, ckb=1, ck=1, cko=0, and ckbo=1. Because ck=1, the eighth transistor M8 and the ninth transistor M9 are both cut off, and due to the action of the fourth capacitor C4, the signal of the pull-down node PD may be kept as a low-level signal.
The seventh transistor M7 is controlled to be conducted to provide a high-level signal of the second reference signal end VREF2 to the cascade signal end GP, so that the cascade signal end GP outputs a high-level signal, and the second transistor M2 and the fourth transistor M4 are controlled to be both cut off. Because the signal ckbo is switched from the low level to the high level, the grid voltage of the fifth transistor M5 is pulled up due to the coupling effect of the first capacitor C1. At this moment, the second electrode of the third transistor M3 serves as its source electrode. Because cko=0, the gate-source voltage difference of the third transistor M3 is smaller than the threshold voltage of the third transistor M3, and in this way, the third transistor M3 may be made to start. Because the third transistor M3 is conducted to provide a low-level signal of the first reference signal end VREF1 to the grid electrode of the fifth transistor M5, the grid electrode of the fifth transistor M5 may be discharged, the first electrode of the first capacitor C1 is made to be at a high level, and the second electrode of the first capacitor C1 is made to be at a low level. Moreover, the fifth transistor M5 is also controlled to be conducted to provide the low-level signal of the first reference signal end VREF1 to the driving signal end OP, so that the driving signal end OP outputs a low-level driving signal.

Then, ip=1, ckb=1, ck=1, cko=1, and ckbo=0. Because ck=1, the eighth transistor M8 and the ninth transistor M9 are both cut off, and due to the action of the fourth capacitor C4, the signal of the pull-down node PD may be kept as a low-level signal. The seventh transistor M7 is controlled to be conducted to provide a high-level signal of the second reference signal end VREF2 to the cascade signal end GP, so that the cascade signal end GP outputs a high-level signal, and the second transistor M2 and the fourth transistor M4 are controlled to be both cut off. As cko=1, the third transistor M3 is cut off. Because the signal ckbo is switched from the high level to the low level, due to the coupling effect of the first capacitor C1, the grid voltage of the fifth transistor M5 is further pulled down, so that the fifth transistor M5 may be controlled to be conducted as completely as possible to provide the low-level signal of the first reference signal end VREF1 to the driving signal end OP without voltage loss, and the driving signal end OP outputs a low-level driving signal. Then, the above working processes when ip=1, ckb=1, ck=1, cko=0 and ckbo=1 and when ip=1, ckb=1, ck=1, cko=1 and ckbo=0 are repeated again, which is not repeated here.

Moreover, analogue simulation is further performed on a signal output by the driving signal end OP of the shift register shown in FIG. 1 according to the signal sequence diagram shown in FIG. 6, and an analogue simulation diagram is shown in FIG. 7. The abscissa represents time, and the ordinate represents voltage. S2 represents a signal obtained by performing analogue simulation on the driving signal end OP of the shift register shown in FIG. 1 by adopting the signal sequence diagram shown in FIG. 6. S0 represents a signal obtained by performing analogue simulation on the driving signal end OP of the shift register when there is only the denoising retention stage in the data retention stage T20.
It can be seen in combination with FIG. 6 that, according to the embodiments of the present disclosure, the denoising enhancement stage is set so that the driving signal end OP can stably output the signal, and the problem of instability caused by electric leakage can be relieved. Moreover, the shift register shown in FIG. 1 is further driven to work according to the signal sequence diagram shown in FIG. 6, and it is detected that the power consumption of the shift register is 0.5 mW when the shift register works at the data retention stage T20. Therefore, it can be known that even if a clock pulse is inserted in the data retention stage T20, the power consumption of the shift register may still be within an acceptable range. It should be noted that the maintaining durations of different denoising retention stages can be the same or different, which can be designed and determined according to practical application demands and is not limited here. Based on the same inventive concept, embodiments of the present disclosure further provide a driving control circuit, as shown in FIG. 8, including a plurality of cascaded shift registers SR(1), SR(2) . . . SR(n−1), SR(n) . . . SR(N−1) and SR(N) (N shift registers in total, 1≤n≤N, and n is an integer), each being any of the above shift registers provided by the embodiments of the present disclosure. An input signal end IP of the first-stage shift register SR(1) is configured to be coupled with a frame trigger signal end STV. In every two adjacent shift registers, an input signal end IP of the next stage of shift register SR(n) is configured to be coupled with a cascade signal output end GP of the previous stage of shift register SR(n−1). Each shift register in the above driving control circuit is the same as the above shift register of the present disclosure in function and structure, and the repetitions are omitted. The driving control circuit may be configured in a liquid crystal display panel or an electroluminescent display panel, which is not limited here. For example, in the above driving control circuit provided by the embodiments of the present disclosure, first reference signal ends VREF1 of all stages of shift registers are coupled with the same first direct current signal end, and second reference signal ends VREF2 of all stages of shift registers are coupled with the same second direct current signal end. For example, in the driving control circuit provided by the embodiments of the present disclosure, first control clock signal ends CK of odd-numbered stages of shift registers and second control clock signal ends CKB of even-numbered stages of shift registers are coupled with the same clock end, namely a first control clock end. Second control clock signal ends CKB of the odd-numbered stages of shift registers and first control clock signal ends CK of the even-numbered stages of shift registers are coupled with the same clock end, namely a second control clock end. For example, in the above driving control circuit provided by the embodiments of the present disclosure, first noise reduction clock signal ends CKO of the odd-numbered stages of shift registers and second noise reduction clock signal ends CKBO of the even-numbered stages of shift registers are coupled with the same clock end, namely a first noise reduction clock end. Second noise reduction clock signal ends CKBO of the odd-numbered stages of shift registers and first noise reduction clock signal ends CKO of the even-numbered stages of shift registers are coupled with the same clock end, namely a second noise reduction clock end.
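The odd/even clock sharing just described can be summarized programmatically. The Python sketch below is a purely editorial rendering of these wiring rules; the function and the clock-end labels are assumptions made for readability, not part of the disclosure.

```python
# Illustrative map of which shared clock end feeds each clock signal end of
# stage n: odd stages take CK/CKO from the first ends and CKB/CKBO from the
# second ends; even stages swap them.

def stage_clock_wiring(n: int) -> dict:
    odd = (n % 2 == 1)
    return {
        "CK":   "first control clock end"  if odd else "second control clock end",
        "CKB":  "second control clock end" if odd else "first control clock end",
        "CKO":  "first noise reduction clock end"  if odd else "second noise reduction clock end",
        "CKBO": "second noise reduction clock end" if odd else "first noise reduction clock end",
    }

for n in (1, 2, 3):
    print(f"SR({n}):", stage_clock_wiring(n))
```

This swap is what lets two shared control clock ends and two shared noise reduction clock ends drive the whole cascade.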
Based on the same inventive concept, embodiments of the present disclosure further provide a display device, including the above driving control circuit provided by the embodiments of the present disclosure. The principles of the display device for solving the problems are similar to those of the above shift register; therefore, implementation of the display device may refer to that of the above shift register, and repetitions are omitted. During implementations, the above display device provided by the embodiments of the present disclosure may be any product or part with a display function, such as a mobile phone, a tablet personal computer, a television, a display, a notebook computer, a digital photo frame, a navigator and the like. It should be understood by a person of ordinary skill in the art that the display device has other essential constituent parts, which are not repeated here and should not be regarded as a limitation to the present disclosure. During implementations, the display device may include a plurality of pixel units, a plurality of grid lines, and a plurality of data lines, and each pixel unit may include a plurality of sub-pixels, such as a red sub-pixel, a green sub-pixel, and a blue sub-pixel. The above display device provided by the embodiments of the present disclosure may be an organic light-emitting display device or a liquid crystal display device, which is not limited here. In the liquid crystal display device, as shown in FIG. 9, one row of sub-pixels spx is coupled with one grid line GA, and one column of sub-pixels spx is coupled with one data line DA. Each sub-pixel spx may include a scanning transistor N00 and a pixel electrode 200. A grid electrode of the scanning transistor N00 may be coupled with the grid line GA, a source electrode of the scanning transistor N00 is coupled with the data line DA, and a drain electrode of the scanning transistor N00 is coupled with the pixel electrode 200. Moreover, a driving signal end OP of one shift register is coupled with one grid line GA. In this way, the driving signal end OP of the shift register may provide a signal to the grid electrode of the scanning transistor N00 in each sub-pixel of the row, and a cascade signal end GP of the shift register is configured to transmit a starting signal for the next stage of shift register. In this way, when the above display device provided by the embodiments of the present disclosure is the liquid crystal display device, the above driving control circuit may serve as a gate driving control circuit and is applied to providing a gate scanning signal to the scanning transistor N00. It should be noted that the scanning transistor N00 may be an N-type transistor or a P-type transistor, which is not limited here.
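As a rough illustration of this row-by-row addressing, the following Python sketch models one scan of the panel under the simplifying, editorial assumption that an asserted grid line transfers the data line voltage directly to the pixel electrode; the names and values are illustrative and not from the disclosure.

```python
# Minimal sketch: the driving signal end OP of shift register `row` asserts
# grid line GA for that row; every scanning transistor N00 in the row then
# conducts and passes its data line DA voltage to pixel electrode 200.

def scan_frame(data_frame):
    """data_frame[row][col] -> pixel electrode voltages after one full scan."""
    rows, cols = len(data_frame), len(data_frame[0])
    pixels = [[0.0] * cols for _ in range(rows)]
    for row in range(rows):          # grid line GA of this row is asserted
        for col in range(cols):      # N00 conducts: DA voltage -> electrode
            pixels[row][col] = data_frame[row][col]
    return pixels

print(scan_frame([[1.2, 3.3], [0.7, 2.4]]))
```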
Further, two different types of transistors may also be arranged in the sub-pixels. As shown in FIG. 10, the display device may include a plurality of first grid lines GA1 and a plurality of second grid lines GA2. One row of sub-pixels is coupled with one first grid line GA1 and one second grid line GA2. Each sub-pixel spx may include a first scanning transistor N01, a second scanning transistor P01, and a pixel electrode 200. The first scanning transistor N01 is an N-type transistor, and the second scanning transistor P01 is a P-type transistor. A grid electrode of the first scanning transistor N01 is coupled with the first grid line GA1, and a grid electrode of the second scanning transistor P01 is coupled with the second grid line GA2. A source electrode of the second scanning transistor P01 is coupled with the data line DA, a drain electrode of the second scanning transistor P01 is coupled with a source electrode of the first scanning transistor N01, and a drain electrode of the first scanning transistor N01 is coupled with the pixel electrode 200. Moreover, the driving signal end OP of one shift register is coupled with one first grid line GA1, and the cascade signal end GP of one shift register is coupled with one second grid line GA2. In this way, the driving signal end OP of the shift register may provide signals to the grid electrodes of the N-type transistors in the sub-pixels, the cascade signal end GP of the shift register may provide signals to the grid electrodes of the P-type transistors in the sub-pixels, and the cascade signal end GP is further configured to transmit a starting signal for the next stage of shift register. In this way, when the display device provided by the embodiments of the present disclosure is the liquid crystal display device, the above driving control circuit may serve as a gate driving control circuit and is applied to providing a gate scanning signal. In an organic light-emitting display device, a plurality of organic light-emitting diodes and pixel circuits connected with the organic light-emitting diodes are generally arranged. A general pixel circuit is provided with a light-emitting control transistor configured to control the organic light-emitting diodes to emit light and a scanning control transistor configured to control data signal input. During implementations, when the above display device provided by the embodiments of the present disclosure is the organic light-emitting display device, the organic light-emitting display device may include one above driving control circuit provided by the embodiments of the present disclosure, and the driving control circuit may serve as a light-emitting driving control circuit applied to providing a light-emitting control signal of the light-emitting control transistor, or may serve as the gate driving control circuit applied to providing the gate scanning signal of the scanning control transistor. The organic light-emitting display device may also include two of the above driving control circuits provided by the embodiments of the present disclosure, where one driving control circuit serves as the light-emitting driving control circuit and is applied to providing the light-emitting control signal of the light-emitting control transistor, and the other driving control circuit serves as the gate driving control circuit and is applied to providing the gate scanning signal of the scanning control transistor, which is not limited here. Although the preferred embodiments of the present disclosure have been described, those skilled in the art can make additional modifications and variations to these embodiments once they know the basic creative concept. Therefore, the appended claims are intended to be construed as including the preferred embodiments and all modifications and variations falling within the scope of the present disclosure.
Obviously, those skilled in the art can make various modifications and variations to the present disclosure without departing from the spirit and scope of the present disclosure. In this way, if these modifications and variations of the present disclosure fall within the scope of the claims of the present disclosure and their equivalents, the present disclosure also intends to include these modifications and variations. | 86,028 |
11862099 | DESCRIPTION OF EMBODIMENTS The specific embodiments of the present disclosure will be described in detail below in combination with the accompanying drawings. It should be understood that the specific embodiments described herein are only used to illustrate and interpret the disclosure and are not used to limit the disclosure. As a first aspect of the present disclosure, there is provided a shift register unit, as shown in FIG. 1, including a signal input circuit, an output circuit 130, a pull-down control circuit 140, and a signal output terminal OUT<N>. The output circuit 130 includes a pull-up sub-circuit 131 and a pull-down sub-circuit 132. An output terminal of the signal input circuit is electrically coupled to a pull-up control terminal Q of the pull-up sub-circuit 131, and is configured to provide signals to the pull-up control terminal Q of the pull-up sub-circuit 131 at different time periods. The pull-up sub-circuit 131 is configured to output a scanning signal via the signal output terminal OUT<N> under control of a signal received by the pull-up control terminal Q of the pull-up sub-circuit 131. The pull-down sub-circuit 132 includes a plurality of pull-down control terminals, and the pull-down sub-circuit 132 is configured to pull down a potential of the signal output terminal OUT<N> under control of a valid pull-down control signal received by any one or any several pull-down control terminals of the pull-down sub-circuit 132. The pull-down control circuit 140 includes a selection sub-circuit 141 and a plurality of pull-down control sub-circuits, the pull-down control sub-circuits correspond to the pull-down control terminals in a one-to-one correspondence, and the selection sub-circuit 141 is configured to selectively configure one of the pull-down control sub-circuits to provide the valid pull-down control signal to a corresponding one of the pull-down control terminals. In the shift register unit, the signal input circuit may include a plurality of signal input sub-circuits, the output circuit 130 and the pull-down control circuit 140 may be shared by the signal input sub-circuits, the signal input sub-circuits output a signal to the pull-up control terminal Q of the pull-up sub-circuit 131 by turns, and a waveform of a signal output by the signal output terminal OUT<N> is a superposition of waveforms input by the signal input sub-circuits. It will be readily understood that each signal input sub-circuit is cascaded with a corresponding signal input sub-circuit in a shift register unit of a previous stage. In the present disclosure, the pull-up sub-circuit 131 outputs different scanning signals under control of signals output by different ones of the signal input sub-circuits. Since the pull-down sub-circuit 132 includes the plurality of pull-down control terminals, and the pull-down control sub-circuits provide pull-down control signals for the pull-down sub-circuit 132 in turn, it is possible to avoid electric leakage due to a thin film transistor being in a bias state for a long time, which in the related art is caused by providing a pull-down control signal to a single pull-down control terminal of a pull-down sub-circuit through a single pull-down control sub-circuit, and further to prolong a service life of the shift register unit. FIG. 1 illustrates an embodiment of the shift register unit provided by the present disclosure, in which it is shown how the various sub-circuits are controlled.
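Before the terminal-by-terminal description below, the architecture just outlined can be condensed into a short behavioral sketch. The Python class below is an editorial illustration assuming an idealized superposition of the waveforms fed to Q by turns; none of its names come from the disclosure.

```python
# Schematic (not electrical) model of the shift register unit: several
# signal input sub-circuits drive the shared pull-up control terminal Q by
# turns, and OUT<N> is the superposition of their contributions; the
# selection sub-circuit picks one pull-down control sub-circuit at a time.

class ShiftRegisterUnit:
    def __init__(self, num_inputs: int, num_pulldown_ctrl: int):
        self.inputs = [f"input_{i}" for i in range(num_inputs)]
        self.pulldown_ctrl = [f"QB_{i + 1}" for i in range(num_pulldown_ctrl)]
        self.active_pd = 0                  # index chosen by the selection sub-circuit

    def output(self, q_signals):
        """OUT<N> as the superposition of the waveforms fed to Q by turns."""
        return [max(bits) for bits in zip(*q_signals)]

unit = ShiftRegisterUnit(num_inputs=2, num_pulldown_ctrl=2)
detect  = [0, 1, 0, 0]                      # detection sub-circuit's contribution
display = [0, 0, 1, 0]                      # display sub-circuit's contribution
print(unit.output([detect, display]))       # -> [0, 1, 1, 0]
```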
An input terminal of the pull-up sub-circuit 131 is electrically coupled to a third clock signal terminal CLKD, an output terminal of the pull-up sub-circuit 131 is electrically coupled to the signal output terminal, and the pull-up sub-circuit 131 is configured to control the input terminal of the pull-up sub-circuit 131 to be electrically coupled to or decoupled from the output terminal of the pull-up sub-circuit 131 under control of the signal received by the pull-up control terminal Q of the pull-up sub-circuit 131. The signal received by the pull-up control terminal Q of the pull-up sub-circuit 131 is a signal processed by the signal input sub-circuit; in other words, the signal input sub-circuit can control the timing of the pull-up sub-circuit 131 outputting a third clock signal provided by the third clock signal terminal CLKD. When the pull-up control terminal Q of the pull-up sub-circuit 131 receives a high level signal, the input terminal of the pull-up sub-circuit 131 is electrically coupled to the output terminal of the pull-up sub-circuit 131; alternatively, when the pull-up control terminal Q of the pull-up sub-circuit 131 receives a low level signal, the input terminal of the pull-up sub-circuit 131 is electrically decoupled from the output terminal of the pull-up sub-circuit 131. The input terminal of the pull-down sub-circuit 132 is electrically coupled to a third level signal terminal VGL2, the output terminal of the pull-down sub-circuit 132 is electrically coupled to the signal output terminal, and the pull-down sub-circuit 132 has a plurality of pull-down control terminals. In the embodiment shown in FIG. 1, the pull-down sub-circuit 132 has two pull-down control terminals QB_1 and QB_2, but the present disclosure is not limited thereto, and the pull-down sub-circuit 132 may have more pull-down control terminals as needed. The pull-down control circuit 140 includes a selection sub-circuit 141 and a plurality of pull-down control sub-circuits. In the embodiment shown in FIG. 1, the pull-down control circuit 140 includes two pull-down control sub-circuits, which are a first pull-down control sub-circuit 142 and a second pull-down control sub-circuit 143, respectively. First input terminals of the pull-down control sub-circuits are electrically coupled to a second level signal terminal VGL1, and output terminals of the pull-down control sub-circuits are electrically coupled to the pull-down control terminals of the pull-down sub-circuit 132 in a one-to-one correspondence. For example, in the embodiment shown in FIG. 1, the output terminal of the first pull-down control sub-circuit 142 is electrically coupled to the pull-down control terminal QB_1, and the output terminal of the second pull-down control sub-circuit 143 is electrically coupled to the pull-down control terminal QB_2. Second input terminals of the pull-down control sub-circuits are electrically coupled to multiple fourth clock signal terminals in a one-to-one correspondence. In the embodiment shown in FIG. 1, the second input terminal of the first pull-down control sub-circuit 142 is electrically coupled to the fourth clock signal terminal CLKM, and the second input terminal of the second pull-down control sub-circuit 143 is electrically coupled to the fourth clock signal terminal CLKN.
A control terminal of the selection sub-circuit 141 is electrically coupled to a fifth clock signal terminal CLKL, and the selection sub-circuit 141 can control a signal input from the input terminal of one of the pull-down control sub-circuits to enter the pull-down control sub-circuit under control of a fifth clock signal input through the fifth clock signal terminal CLKL. In this embodiment, the signal output from the signal output terminal is the third clock signal input through the third clock signal terminal CLKD. The pull-up sub-circuit 131 outputs different scanning signals under control of signals output by different signal input sub-circuits. Since the pull-down sub-circuit 132 includes the plurality of pull-down control terminals, and the pull-down control signals are provided to the pull-down sub-circuit 132 by the pull-down control sub-circuits in turn, it is possible to avoid electric leakage due to a thin film transistor being in a bias state for a long time, which in the related art is caused by providing a pull-down control signal to a single pull-down control terminal of a pull-down sub-circuit by using a single pull-down control sub-circuit, and further to prolong a service life of the shift register unit. It should be noted that the fourth clock signal input through the fourth clock signal terminal is a constant voltage signal. Moreover, a valid fourth clock signal is provided to the fourth clock signal terminal corresponding to a pull-down control sub-circuit only when that pull-down control sub-circuit is operating. That is to say, when the shift register unit operates, only one fourth clock signal terminal receives the valid fourth clock signal, and the remaining fourth clock signal terminals do not. In the present disclosure, there is no particular requirement on a specific type of the plurality of signal input sub-circuits, and as an implementation, the shift register unit may be configured to output a detection scanning signal and a display scanning signal; accordingly, the plurality of signal input sub-circuits may include a detection signal input sub-circuit 110 and a display signal input sub-circuit 120. An output terminal of the detection signal input sub-circuit 110 is electrically coupled to the pull-up control terminal Q of the pull-up sub-circuit 131, a first control terminal of the detection signal input sub-circuit 110 is electrically coupled to a first clock signal terminal CLKB, and a second control terminal of the detection signal input sub-circuit 110 is electrically coupled to a second clock signal terminal CLKC, so that the detection signal input sub-circuit 110 provides a detection scanning control signal to the pull-up control terminal Q of the pull-up sub-circuit 131 under control of a first clock signal input through the first clock signal terminal CLKB and a second clock signal input through the second clock signal terminal CLKC.
An output terminal of the display signal input sub-circuit 120 is electrically coupled to the pull-up control terminal Q of the pull-up sub-circuit 131, an input terminal of the display signal input sub-circuit 120 is electrically coupled to a first level signal terminal VDD, and the display signal input sub-circuit 120 is configured to provide a display scanning control signal to the pull-up control terminal Q of the pull-up sub-circuit 131 under control of a signal received at a control terminal of the display signal input sub-circuit 120. It should be noted that the detection signal input sub-circuit 110 and the display signal input sub-circuit 120 are configured to alternately provide signals to the pull-up control terminal Q of the pull-up sub-circuit 131. An output state of the detection signal input sub-circuit 110 is controlled by three signals: the first clock signal input through the first clock signal terminal CLKB, the second clock signal input through the second clock signal terminal CLKC, and a signal input through an input terminal of the detection signal input sub-circuit 110. The signal input through the input terminal of the detection signal input sub-circuit 110 is an output signal of the previous stage of shift register unit cascaded with the current shift register unit. An output state of the display signal input sub-circuit 120 is controlled only by a state of the signal received at the control terminal of the display signal input sub-circuit 120. As an implementation, the control terminal of the display signal input sub-circuit 120 may receive the signal output by the previous stage of shift register unit cascaded with the current shift register unit. As another implementation of the present disclosure, a sixth clock signal CLKA may further be used to control the input terminal of the display signal input sub-circuit 120 to be electrically coupled to or decoupled from the output terminal of the display signal input sub-circuit 120. The pull-up sub-circuit 131 outputs the detection scanning signal under control of the detection scanning control signal. The pull-up sub-circuit 131 outputs the display scanning signal under control of the display scanning control signal. In the present disclosure, the pull-down control circuit 140 is not particularly limited to a specific structure. In the embodiment shown in FIG. 1, the pull-down sub-circuit 132 includes two pull-down control terminals, i.e., the pull-down control terminal QB_1 and the pull-down control terminal QB_2; accordingly, the pull-down control circuit 140 includes two pull-down control sub-circuits, which are the first pull-down control sub-circuit 142 and the second pull-down control sub-circuit 143, respectively. The first pull-down control sub-circuit 142 corresponds to the pull-down control terminal QB_1, and the second pull-down control sub-circuit 143 corresponds to the pull-down control terminal QB_2. The selection sub-circuit 141 is also not particularly limited to a specific structure. In the particular implementation shown in FIG. 2, the selection sub-circuit 141 includes a selection transistor M19, and a gate electrode of the selection transistor M19 is electrically coupled to the fifth clock signal terminal CLKL. First and second electrodes of the selection transistor M19 are electrically coupled to or decoupled from each other according to a signal received by the gate electrode of the selection transistor M19.
For example, when the selection transistor M19 is an N-type transistor, the first electrode of the selection transistor M19 and the second electrode of the selection transistor M19 are electrically coupled to each other in response to the gate electrode of the selection transistor M19 receiving a high level signal; when the selection transistor M19 is a P-type transistor, the first electrode of the selection transistor M19 and the second electrode of the selection transistor M19 are electrically coupled to each other in response to the gate electrode of the selection transistor M19 receiving a low level signal. Accordingly, the first pull-down control sub-circuit 142 may include a first pull-down control transistor M8, a second pull-down control transistor M9, and a third pull-down control transistor M10. A gate electrode and a first electrode of the first pull-down control transistor M8 are electrically coupled to the corresponding fourth clock signal terminal CLKM, and a second electrode of the first pull-down control transistor M8 is electrically coupled to a first electrode of the third pull-down control transistor M10 and one of the two pull-down control terminals of the pull-down sub-circuit 132. In the particular implementation shown in FIG. 2, the second electrode of the first pull-down control transistor M8 is electrically coupled to the first electrode of the third pull-down control transistor M10 and the pull-down control terminal QB_1. A gate electrode of the second pull-down control transistor M9 is electrically coupled to the second electrode of the first pull-down control transistor M8, a second electrode of the second pull-down control transistor M9 is electrically coupled to the second level signal terminal VGL1 directly or indirectly, and a first electrode of the second pull-down control transistor M9 is electrically coupled to the pull-up control terminal Q of the pull-up sub-circuit 131. A second electrode of the third pull-down control transistor M10 is electrically coupled to the second level signal terminal VGL1, and a gate electrode of the third pull-down control transistor M10 is electrically coupled to the pull-up control terminal Q of the pull-up sub-circuit 131. The first and third pull-down control transistors M8 and M10 may implement a resistive voltage division; in particular, the third pull-down control transistor M10 has a larger resistance than the first pull-down control transistor M8. When the pull-up control terminal Q of the pull-up sub-circuit 131 and the signal output terminal are pulled down by the first pull-down control sub-circuit 142, the first pull-down control sub-circuit 142 outputs a high level signal, i.e., the gate electrode of the second pull-down control transistor M9 should be provided with a high level signal. Since the third pull-down control transistor M10 has a resistance greater than that of the first pull-down control transistor M8, the potential of the gate electrode of the second pull-down control transistor M9 is not pulled down when both the first pull-down control transistor M8 and the third pull-down control transistor M10 are turned on.
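A quick numeric check makes the voltage-division argument concrete. The resistances and rail voltages in the Python snippet below are assumed values chosen only for illustration; the disclosure requires only that M10's resistance exceed M8's.

```python
# Back-of-the-envelope check of the resistive voltage division at QB_1.
# All numbers are assumptions for illustration, not from the disclosure.

V_CLKM = 20.0   # valid (high) fourth clock signal, volts (assumed)
V_VGL1 = -8.0   # second level signal, volts (assumed)
R_M8   = 10e3   # on-resistance of M8, ohms (assumed)
R_M10  = 100e3  # on-resistance of M10, deliberately larger (assumed)

# With both transistors conducting, QB_1 sits on the divider between CLKM
# and VGL1; a large R_M10 keeps it close to the CLKM rail:
V_QB1 = V_VGL1 + (V_CLKM - V_VGL1) * R_M10 / (R_M8 + R_M10)
print(f"QB_1 is approximately {V_QB1:.1f} V")  # ~17.5 V, still a high level
```

With these assumed values the gate of M9 stays near 17.5 V even while M10 conducts, which is the stated design intent.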
Accordingly, the second pull-down control sub-circuit 143 includes a fourth pull-down control transistor M11, a fifth pull-down control transistor M12, and a sixth pull-down control transistor M13. A gate electrode and a first electrode of the fourth pull-down control transistor M11 are electrically coupled to the corresponding fourth clock signal terminal CLKN, and a second electrode of the fourth pull-down control transistor M11 is electrically coupled to a first electrode of the sixth pull-down control transistor M13 and the other one of the two pull-down control terminals of the pull-down sub-circuit 132; in the implementation shown in FIG. 2, the second electrode of the fourth pull-down control transistor M11 is electrically coupled to the first electrode of the sixth pull-down control transistor M13 and the pull-down control terminal QB_2. A gate electrode of the fifth pull-down control transistor M12 is electrically coupled to the second electrode of the fourth pull-down control transistor M11, a second electrode of the fifth pull-down control transistor M12 is electrically coupled to the second level signal terminal VGL1 directly or indirectly, and a first electrode of the fifth pull-down control transistor M12 is electrically coupled to the pull-up control terminal Q of the pull-up sub-circuit 131. A second electrode of the sixth pull-down control transistor M13 is electrically coupled to the second level signal terminal VGL1, and a gate electrode of the sixth pull-down control transistor M13 is electrically coupled to the pull-up control terminal Q of the pull-up sub-circuit 131. The second electrode of the selection transistor M19 is electrically coupled, directly or indirectly, to the second electrode of the first pull-down control transistor M8, and the first electrode of the selection transistor M19 is electrically coupled, directly or indirectly, to the second electrode of the fourth pull-down control transistor M11. Likewise, the sixth pull-down control transistor M13 has a resistance greater than that of the fourth pull-down control transistor M11. When the first pull-down control sub-circuit 142 operates normally, a valid fourth clock signal is supplied to the fourth clock signal terminal CLKM coupled to the first pull-down control transistor M8 (the valid fourth clock signal is a high level signal in FIG. 3), and an invalid fourth clock signal is supplied to the fourth clock signal terminal CLKN coupled to the fourth pull-down control transistor M11 (the invalid fourth clock signal is a low level signal in FIG. 3). When the first pull-down control transistor M8 in the first pull-down control sub-circuit 142 fails due to a long-term positive bias, the supply of the valid fourth clock signal to the fourth clock signal terminal CLKM coupled to the first pull-down control transistor M8 is stopped, and a valid fifth clock signal starts to be supplied to the fifth clock signal terminal CLKL, so that the first and second electrodes of the selection transistor M19 are electrically coupled to each other. At the same time, the fourth clock signal terminal CLKN coupled to the fourth pull-down control transistor M11 is provided with a valid fourth clock signal, so that the first and second electrodes of the fourth pull-down control transistor M11 are electrically coupled to each other. Therefore, in the shift register unit provided by the present disclosure, the pull-down control circuit can be ensured to function all the time, so that the service life of the shift register unit can be prolonged.
In the implementations shown in FIG. 2 and FIG. 4 through FIG. 8, the second electrode of the selection transistor M19 is directly electrically coupled to the second electrode of the first pull-down control transistor M8, and the first electrode of the selection transistor M19 is directly electrically coupled to the second electrode of the fourth pull-down control transistor M11. In the implementation shown in FIG. 9, the second electrode of the selection transistor M19 is indirectly electrically coupled to the second electrode of the first pull-down control transistor M8, and the first electrode of the selection transistor M19 is indirectly electrically coupled to the second electrode of the fourth pull-down control transistor M11. Specifically, when the first pull-down control transistor M8 has not failed, the selection transistor M19 is turned off, the fourth clock signal terminal CLKM receives a valid clock signal (in the implementation in FIG. 2, the valid clock signal is a high level signal), and the fourth clock signal terminal CLKN receives an invalid clock signal (in the implementation in FIG. 2, the invalid clock signal is a low level signal). At this time, the valid clock signal makes the first and second electrodes of the first pull-down control transistor M8 electrically coupled to each other, and thus the valid clock signal is directly transmitted to the pull-down control terminal QB_1 of the pull-down sub-circuit 132. Also, the first electrode and the second electrode of the second pull-down control transistor M9 are electrically coupled to each other, and the signal output by the second level signal terminal VGL1 is transmitted to the pull-up control terminal Q of the pull-up sub-circuit 131, so as to pull down the potential at the pull-up control terminal Q of the pull-up sub-circuit 131. At this time, the pull-down control terminal QB_1 of the pull-down sub-circuit 132 receives the valid clock signal input through the fourth clock signal terminal CLKM, so that the input terminal and the output terminal of the pull-down sub-circuit 132 are electrically coupled to each other, and the potential at the signal output terminal is pulled down through the signal input through the third level signal terminal VGL2. At this time, the pull-down control terminal QB_2 of the pull-down sub-circuit 132 is in a floating state. When the invalid clock signal is supplied to the fourth clock signal terminal CLKM, the valid clock signal is supplied to the fourth clock signal terminal CLKN, and the valid signal is supplied to the fifth clock signal terminal CLKL. At this time, the first and second electrodes of the selection transistor M19 are electrically coupled to each other, and the valid clock signal input through the fourth clock signal terminal CLKN is input to the pull-down control terminals QB_1 and QB_2 of the pull-down sub-circuit 132 through the fourth pull-down control transistor M11. Therefore, the input terminal and the output terminal of the pull-down sub-circuit 132 can be controlled to be electrically coupled to each other, and the potential at the signal output terminal is pulled down. Meanwhile, the potential at the pull-up control terminal Q of the pull-up sub-circuit 131 can be pulled down by the second pull-down control transistor M9 and the fifth pull-down control transistor M12.
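The two operating modes just described, normal operation and the handover after M8 fails, can be condensed into a small decision function. The following Python sketch restates the prose at the logic level; the signal encoding and the "float" marker are illustrative assumptions, not part of the disclosure.

```python
# Logic-level sketch of the pull-down control terminal states. Inputs are
# the three relevant clock terminals (1 = valid/high, 0 = invalid/low);
# "float" marks a floating terminal. This transcribes the prose above.

def pulldown_control(clkm: int, clkn: int, clkl: int):
    if clkm == 1 and clkl == 0:        # normal mode: M8 passes CLKM to QB_1
        return {"QB_1": 1, "QB_2": "float"}
    if clkn == 1 and clkl == 1:        # handover: M19 on, M11 passes CLKN to
        return {"QB_1": 1, "QB_2": 1}  # QB_2 and, through M19, also to QB_1
    return {"QB_1": "float", "QB_2": "float"}

print(pulldown_control(clkm=1, clkn=0, clkl=0))  # M8 healthy
print(pulldown_control(clkm=0, clkn=1, clkl=1))  # M8 failed, handover active
```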
As an implementation of the present disclosure, a signal input through the first level signal terminal VDD is a high level signal, and both a signal input through the second level signal terminal VGL1 and a signal input through the third level signal terminal VGL2 are low level signals. Alternatively, a voltage input through the second level signal terminal VGL1 is less than a voltage input through the third level signal terminal VGL2, that is, a level of the signal input through the third level signal terminal VGL2 is higher than a level of the signal input through the second level signal terminal VGL1, so that the potential at the pull-up control terminal Q of the pull-up sub-circuit 131 can be pulled down more thoroughly. In the particular implementation shown in FIG. 2, the second electrode of the selection transistor M19 is directly electrically coupled to the second electrode of the first pull-down control transistor M8, and the first electrode of the selection transistor M19 is directly electrically coupled to the second electrode of the fourth pull-down control transistor M11. As an implementation, the second electrode of the selection transistor M19 may be indirectly electrically coupled to the second electrode of the first pull-down control transistor M8, and the first electrode of the selection transistor M19 may be indirectly electrically coupled to the second electrode of the fourth pull-down control transistor M11. It should be noted that, whether directly or indirectly coupled, when the selection transistor M19 is turned on, the signal input from the fourth clock signal terminal CLKN can be transmitted to the first pull-down control sub-circuit 142 through the selection transistor M19. FIG. 9 shows a case where the second electrode of the selection transistor M19 is indirectly electrically coupled to the second electrode of the first pull-down control transistor M8, and the first electrode of the selection transistor M19 is indirectly electrically coupled to the second electrode of the fourth pull-down control transistor M11; specifically, the second electrode of the selection transistor M19 is electrically coupled to the second electrode of the first pull-down control transistor M8 through a first anti-interference transistor M22, and the first electrode of the selection transistor M19 is electrically coupled to the second electrode of the fourth pull-down control transistor M11 through a second anti-interference transistor M23. As shown in FIG. 9, a gate electrode of the first anti-interference transistor M22 and a second electrode of the first anti-interference transistor M22 are electrically coupled to the second electrode of the first pull-down control transistor M8, and a first electrode of the first anti-interference transistor M22 is electrically coupled to the second electrode of the selection transistor M19. As shown in FIG. 9, a gate electrode of the second anti-interference transistor M23 and a second electrode of the second anti-interference transistor M23 are electrically coupled to the first electrode of the selection transistor M19, and a first electrode of the second anti-interference transistor M23 is electrically coupled to the second electrode of the fourth pull-down control transistor M11.
The first anti-interference transistor M22 and the second anti-interference transistor M23 are provided to prevent the first pull-down control transistor M8 and the fourth pull-down control transistor M11 from affecting each other, thereby improving stability of the output of the pull-down control circuit 140. In the particular implementations shown in FIG. 2 and FIG. 6 through FIG. 9, the second electrode of the second pull-down control transistor M9 is directly electrically coupled to the second level signal terminal VGL1, and the second electrode of the fifth pull-down control transistor M12 is directly electrically coupled to the second level signal terminal VGL1. In the implementations shown in FIG. 4 and FIG. 5, the second electrode of the second pull-down control transistor M9 is electrically coupled to the second level signal terminal VGL1 through a first anti-leakage transistor M9′, and the second electrode of the fifth pull-down control transistor M12 is electrically coupled to the second level signal terminal VGL1 through a second anti-leakage transistor M12′. Specifically, a gate electrode of the first anti-leakage transistor M9′ is electrically coupled to the gate electrode of the second pull-down control transistor M9, a first electrode of the first anti-leakage transistor M9′ is electrically coupled to the second electrode of the second pull-down control transistor M9, and a second electrode of the first anti-leakage transistor M9′ is electrically coupled to the second level signal terminal VGL1. Similarly, a gate electrode of the second anti-leakage transistor M12′ is electrically coupled to the gate electrode of the fifth pull-down control transistor M12, the second electrode of the fifth pull-down control transistor M12 is electrically coupled to a first electrode of the second anti-leakage transistor M12′, and a second electrode of the second anti-leakage transistor M12′ is electrically coupled to the second level signal terminal VGL1. It is easily understood that the first anti-leakage transistor M9′ is turned on or off in synchronization with the second pull-down control transistor M9, so that the signal input through the second level signal terminal VGL1 can be prevented from interfering with the output signal of the pull-down control circuit 140. Likewise, the second anti-leakage transistor M12′ is turned on or off in synchronization with the fifth pull-down control transistor M12, so that the signal input through the second level signal terminal VGL1 can be prevented from interfering with the output signal of the pull-down control circuit 140.
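As a back-of-the-envelope illustration of the anti-leakage idea, each cut-off transistor can be modeled as a large off-resistance, in which case stacking a second transistor in series roughly halves the worst-case leakage current for the same voltage difference. The numbers in the sketch below are assumed, not taken from the disclosure, and the model ignores second-order effects such as intermediate-node biasing.

```python
# Rough leakage comparison: one off transistor vs. two in series, with each
# modeled as a fixed off-resistance (a deliberate simplification).

V_DIFF = 10.0   # voltage across the leakage path, volts (assumed)
R_OFF  = 1e9    # off-resistance of a single transistor, ohms (assumed)

i_single = V_DIFF / R_OFF          # e.g. M9 alone
i_series = V_DIFF / (2 * R_OFF)    # M9 in series with anti-leakage M9'
print(f"single: {i_single * 1e9:.1f} nA, series: {i_series * 1e9:.1f} nA")
```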
In the present disclosure, the shift register unit further includes a reset circuit 150. As shown in FIG. 1, an input terminal of the reset circuit 150 is electrically coupled to the second level signal terminal VGL1, an output terminal of the reset circuit 150 is electrically coupled to the pull-up control terminal Q of the pull-up sub-circuit 131, a control terminal of the reset circuit 150 is electrically coupled to a reset signal terminal, and the reset circuit 150 is configured to control the input terminal of the reset circuit 150 to be electrically coupled to or decoupled from the output terminal of the reset circuit 150 according to a signal provided by the reset signal terminal. Specifically, when the control terminal of the reset circuit 150 receives a first reset control signal, the input terminal of the reset circuit 150 and the output terminal of the reset circuit 150 are electrically coupled to each other, so that the pull-up control terminal Q of the pull-up sub-circuit 131 can be reset by using the low level signal input through the second level signal terminal VGL1; when the control terminal of the reset circuit 150 receives a second reset control signal, the input terminal of the reset circuit 150 and the output terminal of the reset circuit 150 are decoupled from each other, so that the pull-up control terminal Q of the pull-up sub-circuit 131 is prevented from being affected. As an implementation, as shown in FIG. 2, the reset circuit 150 may include a first reset transistor M7; a second electrode of the first reset transistor M7 is electrically coupled, directly or indirectly, to the second level signal terminal VGL1, and a first electrode of the first reset transistor M7 is electrically coupled to the pull-up control terminal Q of the pull-up sub-circuit 131. When a gate electrode of the first reset transistor M7 receives the first reset control signal, the first and second electrodes of the first reset transistor M7 are electrically coupled to each other, and when the gate electrode of the first reset transistor M7 receives the second reset control signal, the first and second electrodes of the first reset transistor M7 are electrically decoupled from each other. In the implementation shown in FIG. 2, the second electrode of the first reset transistor M7 is directly electrically coupled to the second level signal terminal VGL1. In the implementation shown in FIG. 4, the second electrode of the first reset transistor M7 is indirectly electrically coupled to the second level signal terminal VGL1. Specifically, the reset circuit 150 further includes a third anti-leakage transistor M7′; a gate electrode of the third anti-leakage transistor M7′ is electrically coupled to the gate electrode of the first reset transistor M7, a first electrode of the third anti-leakage transistor M7′ is electrically coupled to the second electrode of the first reset transistor M7, and a second electrode of the third anti-leakage transistor M7′ is electrically coupled to the second level signal terminal VGL1. The third anti-leakage transistor M7′ is turned on or off in synchronization with the first reset transistor M7, so that it is possible to prevent the level of the second level signal terminal VGL1 from affecting the pull-up control terminal Q of the pull-up sub-circuit 131 at a time when reset is not required. As an implementation, the shift register unit may further include a noise reduction circuit 160; an input terminal of the noise reduction circuit 160 is electrically coupled to the second level signal terminal VGL1, an output terminal of the noise reduction circuit 160 is electrically coupled to the pull-up control terminal Q of the pull-up sub-circuit 131, and a control terminal of the noise reduction circuit 160 is electrically coupled to a noise reduction signal terminal TRST2. The noise reduction circuit 160 is configured to control the input terminal of the noise reduction circuit 160 and the output terminal of the noise reduction circuit 160 to be electrically coupled to or decoupled from each other according to a signal received by the control terminal of the noise reduction circuit 160.
Specifically, when the control terminal of the noise reduction circuit 160 receives a first noise reduction signal, the input terminal of the noise reduction circuit 160 and the output terminal of the noise reduction circuit 160 are electrically coupled to each other; when the control terminal of the noise reduction circuit 160 receives a second noise reduction signal, the input terminal of the noise reduction circuit 160 and the output terminal of the noise reduction circuit 160 are electrically decoupled from each other. The noise reduction circuit 160 is used to reduce the noise of the pull-up control terminal Q of the pull-up sub-circuit 131, so as to ensure that the next cycle can be performed normally. Before each detection stage ends, the noise reduction circuit 160 is required to reduce the noise of the pull-up control terminal Q of the pull-up sub-circuit 131, so as to ensure that the display stage of the next scanning period can be performed smoothly. In the present disclosure, the noise reduction circuit 160 is not particularly limited to a specific structure; for example, in the implementation shown in FIG. 2, the noise reduction circuit 160 includes a noise reduction transistor M6, a second electrode of the noise reduction transistor M6 is electrically coupled, directly or indirectly, to the second level signal terminal VGL1, and a first electrode of the noise reduction transistor M6 is electrically coupled to the pull-up control terminal Q of the pull-up sub-circuit 131. When a gate electrode of the noise reduction transistor M6 receives the first noise reduction signal, the first electrode and the second electrode of the noise reduction transistor M6 are electrically coupled to each other; when the gate electrode of the noise reduction transistor M6 receives the second noise reduction signal, the first and second electrodes of the noise reduction transistor M6 are electrically decoupled from each other. As an implementation, as shown in FIG. 5, the noise reduction circuit 160 may further include a fourth anti-leakage transistor M6′; a gate electrode of the fourth anti-leakage transistor M6′ is electrically coupled to the gate electrode of the noise reduction transistor M6, a first electrode of the fourth anti-leakage transistor M6′ is electrically coupled to the second electrode of the noise reduction transistor M6, and a second electrode of the fourth anti-leakage transistor M6′ is electrically coupled to the second level signal terminal VGL1. In the present disclosure, the noise reduction transistor M6 is turned on or off in synchronization with the fourth anti-leakage transistor M6′, so that it is possible to effectively prevent the potential at the pull-up control terminal Q of the pull-up sub-circuit 131 from being pulled down by the signal input through the second level signal terminal VGL1 when it is not required to pull down the potential at the pull-up control terminal Q of the pull-up sub-circuit 131 to reduce noise. In the present disclosure, the detection signal input sub-circuit 110 is not particularly limited to a specific structure, as long as the detection scanning control signal capable of electrically coupling the input terminal of the pull-up sub-circuit 131 to the output terminal of the pull-up sub-circuit 131 can be provided to the pull-up control terminal Q of the pull-up sub-circuit 131 in the detection sub-phase.
As shown in FIG. 2, the detection signal input sub-circuit 110 may include a detection trigger signal input sub-circuit 111, a detection signal output sub-circuit 112, a switch sub-circuit 113, a detection signal reset sub-circuit 114, and a first storage sub-circuit C1. As shown in FIG. 2, a control terminal of the detection trigger signal input sub-circuit 111 is formed as the first control terminal of the detection signal input sub-circuit 110, that is, the control terminal of the detection trigger signal input sub-circuit 111 is electrically coupled to the first clock signal terminal CLKB; an output terminal of the detection trigger signal input sub-circuit 111 is electrically coupled to a control terminal of the detection signal output sub-circuit 112, and an input terminal of the detection trigger signal input sub-circuit 111 is formed as an input terminal of the detection signal input sub-circuit 110. The detection trigger signal input sub-circuit 111 is configured to control the input terminal of the detection trigger signal input sub-circuit 111 and the output terminal of the detection trigger signal input sub-circuit 111 to be electrically coupled to or decoupled from each other according to a signal received at the control terminal of the detection trigger signal input sub-circuit 111. The input terminal of the detection trigger signal input sub-circuit 111 and the output terminal of the detection trigger signal input sub-circuit 111 may be electrically coupled to each other when the control terminal of the detection trigger signal input sub-circuit 111 receives a valid first clock signal, and the input terminal of the detection trigger signal input sub-circuit 111 and the output terminal of the detection trigger signal input sub-circuit 111 can be electrically decoupled from each other when the control terminal of the detection trigger signal input sub-circuit 111 receives an invalid first clock signal. The "valid" and "invalid" herein are merely used to distinguish between high and low levels of the first clock signal. The "valid first clock signal" may represent one of a high level signal and a low level signal, and the "invalid first clock signal" may represent the other of the high level signal and the low level signal. Here, it should be explained that a signal input through the input terminal of the detection trigger signal input sub-circuit 111 is used to control the detection signal output sub-circuit 112. Specifically, an input terminal of the detection signal output sub-circuit 112 is electrically coupled to the second clock signal terminal CLKC, and an output terminal of the detection signal output sub-circuit 112 is electrically coupled to an input terminal of the switch sub-circuit 113. The detection signal output sub-circuit 112 is configured to control the input terminal of the detection signal output sub-circuit 112 and the output terminal of the detection signal output sub-circuit 112 to be electrically coupled to or decoupled from each other according to a signal received by the control terminal of the detection signal output sub-circuit 112.
When the control terminal of the detection signal output sub-circuit 112 receives a valid detection output control signal, the input terminal of the detection signal output sub-circuit 112 and the output terminal of the detection signal output sub-circuit 112 are electrically coupled to each other, and when the control terminal of the detection signal output sub-circuit 112 receives an invalid detection output control signal, the input terminal of the detection signal output sub-circuit 112 and the output terminal of the detection signal output sub-circuit 112 are electrically decoupled from each other. An output terminal of the switch sub-circuit 113 is electrically coupled to the pull-up control terminal Q of the pull-up sub-circuit 131, and a control terminal of the switch sub-circuit 113 is formed as the second control terminal of the detection signal input sub-circuit 110, that is, the control terminal of the switch sub-circuit 113 is electrically coupled to the second clock signal terminal CLKC. The switch sub-circuit 113 is configured to control the input terminal of the switch sub-circuit 113 and the output terminal of the switch sub-circuit 113 to be electrically coupled to or decoupled from each other according to a signal received by the control terminal of the switch sub-circuit 113. Specifically, when the control terminal of the switch sub-circuit 113 receives the valid second clock signal, the input terminal of the switch sub-circuit 113 and the output terminal of the switch sub-circuit 113 are electrically coupled to each other; when the control terminal of the switch sub-circuit 113 receives the invalid second clock signal, the input terminal of the switch sub-circuit 113 is electrically decoupled from the output terminal of the switch sub-circuit 113. An input terminal of the detection signal reset sub-circuit 114 is electrically coupled to the second level signal terminal VGL1, and an output terminal of the detection signal reset sub-circuit 114 is electrically coupled to the control terminal of the detection signal output sub-circuit 112. A control terminal of the detection signal reset sub-circuit 114 is electrically coupled to a detection reset signal terminal TRST1. The detection signal reset sub-circuit 114 is configured to control the input terminal of the detection signal reset sub-circuit 114 and the output terminal of the detection signal reset sub-circuit 114 to be electrically coupled to or decoupled from each other according to a signal received by the control terminal of the detection signal reset sub-circuit 114. When the control terminal of the detection signal reset sub-circuit 114 receives a first detection reset signal, the input terminal of the detection signal reset sub-circuit 114 and the output terminal of the detection signal reset sub-circuit 114 are electrically coupled to each other; when the control terminal of the detection signal reset sub-circuit 114 receives a second detection reset signal, the input terminal and the output terminal of the detection signal reset sub-circuit 114 are electrically decoupled from each other. A first terminal of the first storage sub-circuit C1 is electrically coupled to the control terminal H of the detection signal output sub-circuit 112, and a second terminal of the first storage sub-circuit C1 is electrically coupled to the second level signal terminal VGL1. In the present disclosure, the first storage sub-circuit C1 is configured to store a signal input through the detection trigger signal input sub-circuit 111, and to maintain the voltage of the control terminal of the detection signal output sub-circuit 112, and thus the output state, when the input and output terminals of the detection trigger signal input sub-circuit 111 are electrically decoupled from each other. The working principle of the detection signal input sub-circuit 110 will be described subsequently in detail with reference to the signal timing diagram.
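The sample-and-transfer behavior of this detection path can be sketched behaviorally: CLKB samples the previous stage's cascade output onto node H, which C1 holds, and CLKC later gates the stored level through to Q. The Python class below is an editorial simplification; its names and one-method interface are assumptions, not from the disclosure.

```python
# Behavioral sketch of the detection path: CLKB samples CR<N-1> onto node H
# (held by storage capacitor C1); CLKC then enables both the detection
# signal output sub-circuit and switch M4, passing a high level to Q.

class DetectionInput:
    def __init__(self):
        self.h = 0                      # node H, held by C1 between phases

    def tick(self, clkb: int, clkc: int, cr_prev: int, q: int) -> int:
        if clkb:                        # M1 conducts: sample CR<N-1> onto H
            self.h = cr_prev
        if clkc and self.h:             # output sub-circuit and switch on:
            q = 1                       # the stored level reaches Q
        return q

det = DetectionInput()
q = det.tick(clkb=1, clkc=0, cr_prev=1, q=0)   # sample phase: H <- 1
q = det.tick(clkb=0, clkc=1, cr_prev=0, q=q)   # transfer phase: Q pulled high
print(q)  # -> 1
```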
As an implementation of the present disclosure, as shown in FIG. 2, the detection trigger signal input sub-circuit 111 includes a detection signal input transistor M1; a gate electrode of the detection signal input transistor M1 is electrically coupled to the first clock signal terminal CLKB, a first electrode of the detection signal input transistor M1 is formed as the input terminal of the detection trigger signal input sub-circuit 111, and a second electrode of the detection signal input transistor M1 is formed as the output terminal of the detection trigger signal input sub-circuit 111. When the gate electrode of the detection signal input transistor M1 receives a valid first clock signal, the first electrode of the detection signal input transistor M1 and the second electrode of the detection signal input transistor M1 are electrically coupled to each other; when the gate electrode of the detection signal input transistor M1 receives an invalid first clock signal, the first electrode of the detection signal input transistor M1 and the second electrode of the detection signal input transistor M1 are electrically decoupled from each other. In the implementation shown in FIG. 2, the second electrode of the detection signal input transistor M1 is directly electrically coupled to the control terminal H of the detection signal output sub-circuit 112. In the implementation shown in FIG. 4, the second electrode of the detection signal input transistor M1 is indirectly electrically coupled to the control terminal H of the detection signal output sub-circuit 112. Specifically, in order to avoid an electric leakage, the detection trigger signal input sub-circuit 111 may further include a fifth anti-leakage transistor M1′; a gate electrode of the fifth anti-leakage transistor M1′ is electrically coupled to the gate electrode of the detection signal input transistor M1, a first electrode of the fifth anti-leakage transistor M1′ is electrically coupled to the second electrode of the detection signal input transistor M1, and a second electrode of the fifth anti-leakage transistor M1′ is electrically coupled to the control terminal H of the detection signal output sub-circuit 112. The fifth anti-leakage transistor M1′ is turned on or off in synchronization with the detection signal input transistor M1, so that it is possible to prevent the output terminal of the detection trigger signal input sub-circuit 111 from leaking electricity. In the present disclosure, the detection signal reset sub-circuit 114 is not particularly limited to a specific structure.
As an implementation, the detection signal reset sub-circuit 114 includes a second reset transistor M2; a first electrode of the second reset transistor M2 is electrically coupled to the output terminal of the detection trigger signal input sub-circuit 111, a second electrode of the second reset transistor M2 is electrically coupled, directly or indirectly, to the second level signal terminal VGL1, and a gate electrode of the second reset transistor M2 is formed as the control terminal of the detection signal reset sub-circuit 114. When the gate electrode of the second reset transistor M2 receives the first detection reset signal, the first electrode of the second reset transistor M2 and the second electrode of the second reset transistor M2 are electrically coupled to each other, and when the gate electrode of the second reset transistor M2 receives the second detection reset signal, the first electrode of the second reset transistor M2 and the second electrode of the second reset transistor M2 are electrically decoupled from each other. Regardless of whether the second electrode of the second reset transistor M2 is directly electrically coupled to the second level signal terminal VGL1 or indirectly electrically coupled to the second level signal terminal VGL1, when the first electrode and the second electrode of the second reset transistor M2 are electrically coupled to each other, the input terminal of the detection signal reset sub-circuit 114 including the second reset transistor M2 is electrically coupled to the output terminal of the detection signal reset sub-circuit 114. In the particular implementation shown in FIG. 4, the second electrode of the second reset transistor M2 is indirectly electrically coupled to the second level signal terminal VGL1. Specifically, the detection signal reset sub-circuit 114 further includes a sixth anti-leakage transistor M2′; a gate electrode of the sixth anti-leakage transistor M2′ is electrically coupled to the gate electrode of the second reset transistor M2, a first electrode of the sixth anti-leakage transistor M2′ is electrically coupled to the second electrode of the second reset transistor M2, and a second electrode of the sixth anti-leakage transistor M2′ is electrically coupled to the second level signal terminal VGL1. The sixth anti-leakage transistor M2′ is turned on or off in synchronization with the second reset transistor M2, so that the second level signal terminal VGL1 can be prevented from resetting the output terminal of the detection trigger signal input sub-circuit 111 at a phase other than a scanning reset sub-phase. In the present disclosure, the switch sub-circuit 113 is not particularly limited to a specific structure. In an implementation, the switch sub-circuit 113 includes a switch transistor M4; a gate electrode of the switch transistor M4 is formed as the control terminal of the switch sub-circuit 113, a first electrode of the switch transistor M4 is formed as the input terminal of the switch sub-circuit 113, and a second electrode of the switch transistor M4 is electrically coupled, directly or indirectly, to the pull-up control terminal Q of the pull-up sub-circuit 131.
When the gate electrode of the switch transistor M4 receives a valid second clock signal, the first electrode and the second electrode of the switch transistor M4 are electrically coupled to each other; when the gate electrode of the switch transistor M4 receives an invalid second clock signal, the first electrode and the second electrode of the switch transistor M4 are electrically decoupled from each other. FIG. 2 shows a case where the second electrode of the switch transistor M4, as the output terminal of the switch sub-circuit 113, is directly electrically coupled to the pull-up control terminal Q of the pull-up sub-circuit 131. The present disclosure is not limited thereto; as an implementation, as shown in FIG. 4, the switch sub-circuit 113 may further include a seventh anti-leakage transistor M4′. A gate electrode of the seventh anti-leakage transistor M4′ is electrically coupled to the gate electrode of the switch transistor M4, a first electrode of the seventh anti-leakage transistor M4′ is electrically coupled to the second electrode of the switch transistor M4, and a second electrode of the seventh anti-leakage transistor M4′ is electrically coupled to the pull-up control terminal Q of the pull-up sub-circuit 131. The seventh anti-leakage transistor M4′ is turned on or off in synchronization with the switch transistor M4, so that the signal of the second clock signal terminal CLKC can be prevented from affecting the pull-up control terminal Q of the pull-up sub-circuit 131. In a case where the shift register unit includes the noise reduction circuit 160, the detection trigger signal input sub-circuit 111 includes the detection signal input transistor M1 and the fifth anti-leakage transistor M1′, the detection signal reset sub-circuit 114 includes the second reset transistor M2 and the sixth anti-leakage transistor M2′, and the switch sub-circuit 113 includes the switch transistor M4 and the seventh anti-leakage transistor M4′, as an implementation, as shown in FIG. 4 and FIG. 5, the shift register unit may further include an eighth anti-leakage transistor M20 and a ninth anti-leakage transistor M21. As shown in FIG. 4, a gate electrode of the eighth anti-leakage transistor M20 is electrically coupled to the output terminal of the detection trigger signal input sub-circuit 111, a first electrode of the eighth anti-leakage transistor M20 is electrically coupled to the second electrode of the detection signal input transistor M1, and a second electrode of the eighth anti-leakage transistor M20 is electrically coupled to the input terminal of the display signal input sub-circuit 120. Accordingly, a gate electrode of the ninth anti-leakage transistor M21 is electrically coupled to the pull-up control terminal Q of the pull-up sub-circuit 131, a first electrode of the ninth anti-leakage transistor M21 is electrically coupled to the input terminal of the display signal input sub-circuit 120, and a second electrode of the ninth anti-leakage transistor M21 is electrically coupled to the output terminal of the noise reduction circuit 160. The eighth anti-leakage transistor M20 and the ninth anti-leakage transistor M21 are provided to prevent interference between the display signal input sub-circuit 120 and the detection signal input sub-circuit 110. The shift register unit provided by the present disclosure may be applied to a shift register; in particular, a plurality of shift register units may be cascaded to form the shift register.
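As a rough intuition for the cascading just mentioned (formally defined in the next paragraph), the following illustrative Python sketch, with hypothetical names, models an idealized cascade in which a single trigger pulse shifts by one stage per clock step.

```python
# Illustrative only (not from the disclosure): the output of each stage feeds
# the input of the next stage, so a single pulse shifts one stage per step.

def shift(stages: list, trigger: int) -> list:
    """One idealized clock step of a cascaded shift register."""
    return [trigger] + stages[:-1]  # each stage takes the output of the stage before it

state = [0, 0, 0, 0]
state = shift(state, 1)  # the trigger signal enters the first stage
state = shift(state, 0)  # the pulse moves to the second stage
print(state)             # [0, 1, 0, 0]
```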
The term “cascade” or any variant thereof means that the output terminal of the previous stage of shift register unit is electrically coupled to the input terminal of the next stage of shift register unit. Since the signal output by the output terminal of the shift register unit is mainly used for driving thin film transistors coupled to gate lines to be turned on or off, in order to ensure that the gate lines can validly turn on or off the thin film transistors, in an implementation, the output circuit may include a cascade output sub-circuit and at least one scanning signal output sub-circuit. The pull-up sub-circuit includes a cascade pull-up sub-circuit and a scanning signal output pull-up sub-circuit; a control terminal of the cascade pull-up sub-circuit is electrically coupled to a control terminal of the scanning signal output pull-up sub-circuit, and the coupled control terminals form the control terminal of the pull-up sub-circuit. The signal output terminal includes a cascade signal output terminal of the cascade output sub-circuit and a scanning signal output terminal of the scanning signal output sub-circuit. The pull-down sub-circuit includes a cascade pull-down sub-circuit and a scanning signal output pull-down sub-circuit; a control terminal of the cascade pull-down sub-circuit and a control terminal of the scanning signal output pull-down sub-circuit are electrically coupled and form the control terminal of the pull-down sub-circuit. The output terminal of the pull-up sub-circuit is electrically coupled to the corresponding scanning signal output terminal, the output terminal of the pull-down sub-circuit is electrically coupled to the corresponding scanning signal output terminal, the output terminal of the cascade pull-down sub-circuit is electrically coupled to the cascade signal output terminal, and the output terminal of the cascade pull-up sub-circuit is electrically coupled to the cascade signal output terminal. In the implementations shown in FIGS. 2, 4, 6, 7, 8, and 9, the output circuit includes one cascade output sub-circuit and one scanning signal output sub-circuit. In the implementation shown in FIG. 5, the output circuit includes one cascade output sub-circuit and two scanning signal output sub-circuits. The shift register unit shown in each of the figures of the present disclosure is the N-th stage of shift register unit in a shift register; the cascade signal output terminal is denoted by a reference sign CR<N>, and scanning signal output terminals are respectively denoted by reference signs OUT<N>, OUT1<N>, and OUT2<N>. The output terminals OUT<N>, OUT1<N>, and OUT2<N> may be electrically coupled to different gate lines of the display panel, respectively. In the figures, CR<N−1> represents the cascade signal output terminal of the (N−1)-th stage of shift register unit, CR<N−2> represents the cascade signal output terminal of the (N−2)-th stage of shift register unit, and CR<N+3> represents the cascade signal output terminal of the (N+3)-th stage of shift register unit. In the present disclosure, the pull-up sub-circuit 131 is not particularly limited to a specific structure. As an implementation, the cascade pull-up sub-circuit of the pull-up sub-circuit 131 includes a cascade pull-up transistor M15 and a storage capacitor C2, and each scanning signal output pull-up sub-circuit of the pull-up sub-circuit 131 includes a scanning signal output pull-up transistor.
For example, in the implementations shown in FIGS. 2, 4, 6, 7, 8, and 9, there is only one scanning signal output pull-up sub-circuit, which includes a scanning signal output pull-up transistor M17. In the particular implementation shown in FIG. 5, two scanning signal output pull-up sub-circuits are included, one including a scanning signal output pull-up transistor M17 and the other including a scanning signal output pull-up transistor M24. A gate electrode of the cascade pull-up transistor M15 forms the pull-up control terminal Q of the pull-up sub-circuit 131, a first electrode of the cascade pull-up transistor M15 forms the input terminal of the cascade pull-up sub-circuit, and a second electrode of the cascade pull-up transistor M15 forms the output terminal of the cascade pull-up sub-circuit. One terminal of the storage capacitor C2 is electrically coupled to the gate electrode of the cascade pull-up transistor M15, and the other terminal of the storage capacitor C2 is electrically coupled to the second electrode of the cascade pull-up transistor M15. The gate electrode of the scanning signal output pull-up transistor is electrically coupled to the gate electrode of the cascade pull-up transistor, the first electrode of the scanning signal output pull-up transistor is formed as the input terminal of the scanning signal output pull-up sub-circuit, and the second electrode of the scanning signal output pull-up transistor is formed as the output terminal of the scanning signal output pull-up sub-circuit. In the implementations shown in FIGS. 2 and 4, the pull-up sub-circuit 131 has only one input terminal, i.e., the input terminal of the pull-up sub-circuit 131 is electrically coupled to the third clock signal terminal CLKD. In the implementation shown in FIG. 5, the pull-up sub-circuit 131 includes three input terminals, and the three input terminals of the pull-up sub-circuit 131 are electrically coupled to three third clock signal terminals, respectively. Specifically, the first electrode of the cascade pull-up transistor M15 is electrically coupled to the third clock signal terminal CLKD, a first electrode of the scanning signal output pull-up transistor M17 is electrically coupled to the third clock signal terminal CLKE, and a first electrode of the scanning signal output pull-up transistor M24 is electrically coupled to the third clock signal terminal CLKF. In the implementations shown in FIGS. 6 through 9, the first electrode of the cascade pull-up transistor M15 is electrically coupled to the third clock signal terminal CLKD, and the first electrode of the scanning signal output pull-up transistor M17 is electrically coupled to the third clock signal terminal CLKE. In the present disclosure, the three third clock signal terminals CLKD, CLKE, and CLKF may be the same as or different from each other. In the present disclosure, there is no particular limitation on a specific structure of the pull-down sub-circuit 132.
In the implementations shown in FIGS. 2 and 4 to 9, the cascade pull-down sub-circuit of the pull-down sub-circuit 132 includes a plurality of cascade pull-down transistors; the number of the cascade pull-down transistors is the same as the number of the control terminals of the pull-down sub-circuit, and the cascade pull-down transistors correspond to the control terminals of the pull-down sub-circuit in a one-to-one manner. The scanning signal output pull-down sub-circuit includes a plurality of scanning signal output pull-down transistors; the number of the scanning signal output pull-down transistors is the same as the number of the control terminals of the pull-down sub-circuit, and the scanning signal output pull-down transistors correspond to the control terminals of the pull-down sub-circuit in a one-to-one manner. The gate electrodes of the cascade pull-down transistors are respectively formed as the control terminals of the pull-down sub-circuit, the first electrodes of the cascade pull-down transistors are formed as the input terminal of the cascade pull-down sub-circuit, and the second electrodes of the cascade pull-down transistors are formed as the output terminal of the cascade pull-down sub-circuit. Specifically, in an implementation where the pull-down sub-circuit 132 includes the pull-down control terminals QB_1 and QB_2, the cascade pull-down sub-circuit includes a cascade pull-down transistor M16 and a cascade pull-down transistor M16′. As can be seen from the figures, a gate electrode of the cascade pull-down transistor M16 is electrically coupled to the pull-down control terminal QB_1, and a gate electrode of the cascade pull-down transistor M16′ is electrically coupled to the pull-down control terminal QB_2. The gate electrodes of the scanning signal output pull-down transistors are respectively electrically coupled to the corresponding control terminals of the pull-down sub-circuit, the first electrodes of the scanning signal output pull-down transistors are formed as the input terminal of the scanning signal output pull-down sub-circuit, and the second electrodes of the scanning signal output pull-down transistors are formed as the output terminal of the scanning signal output pull-down sub-circuit. In the implementations shown in FIGS. 2, 4, 6, 7, 8, and 9, there is only one scanning signal output pull-down sub-circuit, and accordingly, the scanning signal output pull-down sub-circuit includes a scanning signal output pull-down transistor M18 and a scanning signal output pull-down transistor M18′. A gate electrode of the scanning signal output pull-down transistor M18 is electrically coupled to the pull-down control terminal QB_1, and a gate electrode of the scanning signal output pull-down transistor M18′ is electrically coupled to the pull-down control terminal QB_2. In the implementation shown in FIG. 5, two scanning signal output pull-down sub-circuits are included; accordingly, one scanning signal output pull-down sub-circuit includes the scanning signal output pull-down transistor M18 and the scanning signal output pull-down transistor M18′, and the other scanning signal output pull-down sub-circuit includes a scanning signal output pull-down transistor M25 and a scanning signal output pull-down transistor M25′.
The gate electrode of the scanning signal output pull-down transistor M18 is electrically coupled to the pull-down control terminal QB_1, and the gate electrode of the scanning signal output pull-down transistor M18′ is electrically coupled to the pull-down control terminal QB_2. A gate electrode of the scanning signal output pull-down transistor M25 is electrically coupled to the pull-down control terminal QB_1, and a gate electrode of the scanning signal output pull-down transistor M25′ is electrically coupled to the pull-down control terminal QB_2. As a second aspect of the present disclosure, a gate driving circuit is provided, where the gate driving circuit includes at least one shift register unit group, each shift register unit group includes a plurality of shift register units which are cascaded, and at least one of the shift register units is the shift register unit provided in the present disclosure. In a single shift register unit group, for two adjacent stages of shift register units, the output signal of the next stage of shift register unit is used to reset the previous stage of shift register unit. In the present disclosure, the gate driving circuit may include one shift register unit group, or may include a plurality of shift register unit groups. When the gate driving circuit includes one shift register unit group, all stages of shift register units are cascaded. For two adjacent stages of shift register units, the output signal of the previous stage of shift register unit is the input signal of the next stage of shift register unit, and the output signal of the next stage of shift register unit is the reset signal of the previous stage of shift register unit. In an implementation, the gate driving circuit includes M shift register unit groups, the first M stages of shift register units of the gate driving circuit are respectively the first stages of the M shift register unit groups, and the n-th stage of shift register unit is cascaded with the (n−M)-th stage of shift register unit, where M is a constant value and is a natural number not less than 1, and n is a variable and is a natural number greater than M. In the present disclosure, the number of the shift register unit groups included in the gate driving circuit is not particularly limited; for example, each gate driving circuit may include two shift register unit groups, where odd-numbered stages of shift register units are in one group, and even-numbered stages of shift register units are in the other group. That is, in the implementation shown in FIG. 10, M is equal to 2, the third stage of shift register unit A3 is cascaded with the first stage of shift register unit A1, and the fourth stage of shift register unit A4 is cascaded with the second stage of shift register unit A2 (an illustrative mapping of this rule is sketched after this paragraph). As a third aspect of the present disclosure, there is provided a display panel including a gate driving circuit. The gate driving circuit is the gate driving circuit provided by the present disclosure. For the display panel, the shift register unit of the gate driving circuit includes two signal input sub-circuits, which are a detection signal input sub-circuit and a display signal input sub-circuit, respectively.
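Referring back to the grouped cascade rule above (M groups, with the n-th stage fed by the (n−M)-th stage), the following hypothetical Python helper, written only for illustration, reproduces the mapping of FIG. 10 for M = 2.

```python
# Illustrative helper (hypothetical name): with M shift register unit groups,
# stage n receives its cascade input from stage n - M; the first M stages
# receive the trigger signal directly.

def cascade_source(n: int, m: int):
    """Return the stage whose cascade output feeds stage n, or None for a first stage."""
    return n - m if n > m else None

print(cascade_source(3, 2))  # 1    (A3 is cascaded with A1)
print(cascade_source(4, 2))  # 2    (A4 is cascaded with A2)
print(cascade_source(1, 2))  # None (a first stage is driven by the trigger signal)
```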
As described above, in a single stage of shift register unit, the output terminal of the detection signal input sub-circuit is electrically coupled to the control terminal of the pull-up sub-circuit, the first control terminal of the detection signal input sub-circuit is electrically coupled to the first clock signal terminal, and the second control terminal of the detection signal input sub-circuit is electrically coupled to the second clock signal terminal, so that the detection signal input sub-circuit provides the detection scanning control signal to the control terminal of the pull-up sub-circuit under control of the first clock signal input through the first clock signal terminal and the second clock signal input through the second clock signal terminal. In a single stage of shift register unit, the output terminal of the display signal input sub-circuit is electrically coupled to the control terminal of the pull-up sub-circuit, and the input terminal of the display signal input sub-circuit is electrically coupled to the first level signal terminal, so that the display signal input sub-circuit provides a display scanning control signal to the control terminal of the pull-up sub-circuit under control of a signal received by the control terminal of the display signal input sub-circuit. The display panel may include a plurality of data lines, a plurality of gate lines, and a plurality of detection lines; the gate lines and the data lines are arranged to cross each other to divide the display panel into a plurality of pixel units. As shown in FIG. 11, each pixel unit is provided with a pixel circuit and a detection switch transistor T3, and each pixel circuit includes a data writing transistor T2. In a single row of pixel units, gate electrodes of the data writing transistors T2 and gate electrodes of the detection switch transistors T3 are electrically coupled to a corresponding one of the gate lines (for example, in the implementation shown in FIG. 11, the gate electrode of the data writing transistor T2 and the gate electrode of the detection switch transistor T3 are electrically coupled to the gate line GL1). When the gate line receives a display scanning signal, the data writing transistors T2 are turned on, and data input through the corresponding data line DL can be written into the pixel circuits for driving the pixel units to emit light. When the gate line receives the detection scanning signal, the detection switch transistor T3 is turned on, so that the detection signal can be collected through a detection line SL. In the present disclosure, the pixel circuit is also not particularly limited to a specific structure. For example, in the particular implementation shown in FIG. 11, the pixel circuit includes a driving transistor T1, a data writing transistor T2, and an organic light emitting diode OLED. A first electrode of the data writing transistor T2 is electrically coupled to the corresponding data line DL, and a second electrode of the data writing transistor T2 is electrically coupled to a gate electrode of the driving transistor T1. A first electrode of the driving transistor T1 is electrically coupled to a high-level signal terminal ELVDD, a second electrode of the driving transistor T1 is electrically coupled to an anode electrode of the organic light emitting diode OLED, and a cathode electrode of the organic light emitting diode OLED is grounded.
The gate electrode of the detection switch transistor T3 is electrically coupled to the gate line GL1, a first electrode of the detection switch transistor T3 is electrically coupled to the anode electrode of the organic light emitting diode OLED, and a second electrode of the detection switch transistor T3 is electrically coupled to the detection line SL. As a fourth aspect of the present disclosure, a driving method for driving a display panel is provided, where the display panel is the display panel provided in the present disclosure. As shown in FIG. 3, the driving method includes a plurality of frame periods (four frame periods, i.e., a first frame period 1F, a second frame period 2F, a third frame period 3F, and a fourth frame period 4F, are shown in FIG. 3), each of which includes a display scanning signal output phase T1 and a detection scanning signal output phase T2. The driving method includes: controlling one of the pull-down control sub-circuits through the selection sub-circuit to provide a signal for the pull-down control terminal of the pull-down sub-circuit; in the display scanning signal output phase T1 of each frame period, providing a display trigger signal to the control terminal of the display signal input sub-circuit in the first stage of shift register unit of each shift register unit group, so as to provide a signal for the pull-up control terminal of the pull-up sub-circuit by using the display signal input sub-circuit; in the detection scanning signal output phase T2 of the first frame period, providing a detection initial signal to the detection signal input sub-circuit in the first stage of shift register unit of each shift register unit group, so as to provide a signal for the pull-up control terminal of the pull-up sub-circuit by using the detection signal input sub-circuit; and in different frame periods, respectively controlling the detection signal input sub-circuits in different stages of shift register units to output, so that within a predetermined number of frame periods, each of the shift register units outputs a valid signal to the pull-up control terminal of the pull-up sub-circuit in the detection scanning signal output phase. It should be noted that in the driving method provided by the present disclosure, the last stage of shift register unit of each shift register unit group enters the detection scanning signal output phase T2 after outputting the display scanning signal. In the driving method provided by the present disclosure, whether in the display scanning signal output phase or the detection scanning signal output phase, only one pull-down control sub-circuit in each shift register unit works at a time, so that a failure of a thin film transistor caused by a long-term bias voltage can be avoided.
For the particular implementation shown in FIG. 1, the driving method includes: for any shift register unit, providing a valid fourth clock signal to one of the fourth clock signal terminals of the shift register unit, and providing invalid fourth clock signals to the remaining fourth clock signal terminals; in the display scanning signal output phase T1 of each frame period, providing a display trigger signal STU2 to the control terminal of the display signal input sub-circuit in the first stage of shift register unit of each shift register unit group, and providing a third clock signal to the third clock signal terminal of each shift register unit of the shift register unit group, where, in a single shift register unit group, the third clock signal of the odd-numbered stages of shift register units is complementary to the third clock signal of the even-numbered stages of shift register units; in the display scanning signal output phase T1 of the first frame period, providing a detection initialization signal STU1 to the input terminal of the detection signal input sub-circuit in the first stage of shift register unit of each shift register unit group; and, in any two adjacent frame periods except the first frame period, providing a valid first clock signal to the first clock signal terminal CLKB of each shift register unit in the detection scanning signal output phase of the previous frame period, and providing a valid second clock signal to the second clock signal terminal CLKC of each shift register unit in the detection scanning signal output phase of the next frame period; for the first frame period, providing a valid first clock signal to the first clock signal terminal CLKB of each shift register unit in the display scanning signal output phase, and providing a valid second clock signal to the second clock signal terminal CLKC of each shift register unit in the detection scanning signal output phase. In different frame periods, valid third clock signals are provided to the third clock signal terminals of different stages of shift register units respectively, so that within a preset number of frame periods, each of the shift register units receives the valid third clock signal in the detection scanning signal output phase. In the present disclosure, for example, STU1 of the N-th stage of shift register unit is a signal output from the cascade signal output terminal CR<N−1> of the (N−1)-th stage of shift register unit, and STU2 of the N-th stage of shift register unit is a signal output from the cascade signal output terminal CR<N−2> of the (N−2)-th stage of shift register unit. Moreover, in the present disclosure, as an example, valid STU1 and STU2 may last for a first predetermined period of time t1; as shown in FIG. 3, a duration of the first predetermined period of time t1 is less than a duration of the display scanning signal output phase T1.
In the display scanning signal output phase T1 of each frame period, the display trigger signal STU2 is provided to the control terminal of the display signal input sub-circuit in the first stage of shift register unit, so that the input terminal and the output terminal of the display signal input sub-circuit can be electrically coupled to each other, and the pull-up control terminal Q of the pull-up sub-circuit is charged. At this time, the third clock signal terminal CLKD receives the invalid third clock signal, and thus the shift register unit outputs the invalid third clock signal. Then, the signal at the control terminal of the display signal input sub-circuit jumps, the pull-up control terminal Q of the pull-up sub-circuit is in a floating state, and the pull-up sub-circuit couples the voltage of the control terminal thereof to a higher potential, so that the input terminal and the output terminal of the pull-up sub-circuit are kept electrically coupled to each other. At this time, the third clock signal received by the third clock signal terminal CLKD jumps to a valid third clock signal, so that the shift register unit outputs the valid third clock signal (i.e., the display scanning signal), and the display scanning signal output by the signal output terminal of the first stage of shift register unit is used as an input to the input terminal of the second stage of shift register unit. After the output of the first stage of shift register unit is finished, the pull-down control circuit and the pull-down sub-circuit pull down the potential of the pull-up control terminal Q of the pull-up sub-circuit to the level of the second level signal terminal VGL1, and pull down the potential of the signal output terminal OUT<1> of the first stage of shift register unit to the level of the third level signal terminal VGL2. After the input terminal of the second stage of shift register unit in the same shift register unit group receives the signal output by the output terminal of the first stage of shift register unit, the second stage of shift register unit outputs a signal according to the same process as the first stage of shift register unit, and the remaining stages of shift register units operate in the same manner until the output of the last stage of shift register unit is finished. In the detection scanning signal output phase T2 of each frame period, only one stage of shift register unit outputs a detection scanning signal; in other words, only one row of pixel units is detected in each frame period. In the implementation shown in FIG. 3, only the third clock signal terminal CLKD of the first stage of shift register unit is provided with the valid third clock signal during the detection scanning signal output phase T2 of the first frame period 1F, so that only the first stage of shift register unit outputs the detection scanning signal, and only the first row of pixel units of the corresponding display panel is detected during the first frame period 1F. In the detection scanning signal output phase T2 of the second frame period 2F, only the third clock signal terminal CLKD of the second stage of shift register unit is provided with the valid third clock signal, so that in the second frame period 2F, only the second stage of shift register unit outputs the detection scanning signal.
Accordingly, in the third frame period 3F, only the third stage of shift register unit outputs the detection scanning signal; in the fourth frame period 4F, only the fourth stage of shift register unit outputs the detection scanning signal; and so on. In the present disclosure, the number of stages of shift register units that output the detection scanning signals in each frame period is not particularly limited, as long as each of the shift register units can output the detection scanning signal within a predetermined number of frame periods. In the present disclosure, the valid third clock signal output in the detection scanning signal output phase T2 lasts for a second predetermined period t2; as shown in FIG. 3, a duration of the second predetermined period t2 is less than a duration of the detection scanning signal output phase T2. In some implementations, the driving method further includes providing a reset signal: before the first frame period 1F, the detection signal input sub-circuit is first reset, that is, a valid detection reset signal is provided to the detection reset signal terminal TRST1. In each frame period, before the detection scanning signal output phase T2 of the frame period ends, a valid noise reduction control signal is provided to the control terminal TRST2 of the noise reduction circuit of the display signal input sub-circuit to perform noise reduction and reset on the pull-up control terminal Q of the pull-up sub-circuit in each stage of shift register unit. The driving method provided by the present disclosure will be described with reference to FIGS. 2, 3, and 10. As shown in FIG. 3, one period of the driving method is one frame of the display panel, which is shown as a first frame period 1F, a second frame period 2F, a third frame period 3F, and a fourth frame period 4F in FIG. 3. As shown in FIG. 10, the gate driving circuit includes two shift register unit groups: the odd-numbered stages of shift register units are in one shift register unit group, and the even-numbered stages of shift register units are in the other shift register unit group. Correspondingly, in the implementation in FIG. 10, the gate driving circuit includes four third clock signal lines, i.e., a third clock signal line CLKD1 and a third clock signal line CLKD3 for providing third clock signals to the shift register unit group including the odd-numbered stages of shift register units, and a third clock signal line CLKD2 and a third clock signal line CLKD4 for providing third clock signals to the shift register unit group including the even-numbered stages of shift register units. In FIG. 3, H<1> represents a potential of the control terminal of the detection signal output sub-circuit in the first stage of shift register unit, H<2> represents a potential of the control terminal of the detection signal output sub-circuit in the second stage of shift register unit, H<3> represents a potential of the control terminal of the detection signal output sub-circuit in the third stage of shift register unit, and H<4> represents a potential of the control terminal of the detection signal output sub-circuit in the fourth stage of shift register unit. Each frame period of the driving method includes the display scanning signal output phase T1 and the detection scanning signal output phase T2.
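The frame-by-frame rotation described above (one stage, and hence one row of pixel units, detected per frame) can be summarized in a short illustrative Python sketch; the function name detected_stage is hypothetical and the sketch assumes exactly one stage is selected per frame, as in FIG. 3.

```python
# Illustrative sketch: in frame k, only one stage receives the valid third
# clock signal during the detection phase T2, so one row is detected per frame.

def detected_stage(frame: int, total_stages: int) -> int:
    """Stage that outputs the detection scanning signal in the given frame."""
    return (frame - 1) % total_stages + 1

for f in range(1, 5):
    print(f"frame {f}F -> stage {detected_stage(f, 4)}")
# frame 1F -> stage 1 ... frame 4F -> stage 4, matching FIG. 3
```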
In the first frame period 1F, the display trigger signal STU2 is provided to the control terminal of the display signal input sub-circuit in the first stage of shift register unit, the detection trigger signal STU1 is provided to the input terminal of the detection signal input sub-circuit in the first stage of shift register unit, an invalid fourth clock signal (i.e., a low level signal) is provided to the fourth clock signal terminal CLKN, a valid fourth clock signal (i.e., a high level signal) is provided to the fourth clock signal terminal CLKM, a valid first clock signal is provided to the first clock signal terminal CLKB, and, accordingly, an invalid third clock signal is provided to each of the third clock signal terminals. After receiving the detection trigger signal STU1, the first stage of shift register unit goes through a charging phase, an output phase, and a pull-down phase. In the charging phase, when the first control terminal of the detection signal input sub-circuit 110 receives a valid first clock signal, the detection signal input sub-circuit 110 receives and stores the detection trigger signal input through the input terminal of the detection signal input sub-circuit 110, and the second control terminal of the detection signal input sub-circuit 110 receives an invalid second clock signal, so that the input terminal and the output terminal of the detection signal input sub-circuit 110 are electrically decoupled from each other. When the control terminal of the display signal input sub-circuit receives the display trigger signal STU2, the input terminal and the output terminal of the display signal input sub-circuit are electrically coupled to each other to transmit the display scanning control signal input through the first level signal terminal VDD to the control terminal of the pull-up sub-circuit, and the input terminal and the output terminal of the pull-up sub-circuit are electrically coupled to each other. Since the signal provided by the third clock signal terminal is an invalid signal at this time, each signal output terminal also outputs an invalid signal (i.e., OUT<1>, OUT<2>, OUT<3>, and OUT<4> in FIG. 3 all output invalid signals). Corresponding to the implementation in FIG. 2, the detection signal input transistor M1 is turned on, the second reset transistor M2 and the switch transistor M4 are turned off, and the detection scanning trigger signal input through the detection signal input transistor M1 is written into the first storage sub-circuit C1. At the same time, the display signal input transistor M5 (i.e., the display signal input sub-circuit 120) is turned on, and the display scanning control signal input through the input terminal of the display signal input sub-circuit is written into the pull-up control terminal Q of the pull-up sub-circuit 131, so that both the pull-up transistor M15 and the pull-up transistor M17 are turned on; since the signal input through the third clock signal terminal is a low level signal, the signal output terminal also outputs a low level signal at this time.
In the output phase, a third clock signal is provided to the four third clock signal lines, where the third clock signal is a square wave signal and, in two adjacent stages of shift register units, the third clock signal received by the next stage of shift register unit is delayed by a predetermined time relative to the third clock signal received by the previous stage of shift register unit; an invalid first clock signal is provided to the first clock signal terminal CLKB, and an invalid second clock signal is provided to the second clock signal terminal CLKC. The potentials at the control terminals of the pull-up sub-circuits in the stages of shift register units (the control terminal Q<1> of the first stage of shift register unit and the control terminal Q<2> of the second stage of shift register unit are shown in FIG. 3) are sequentially coupled to a higher potential, so that the input terminal and the output terminal of the corresponding pull-up sub-circuit are electrically coupled to each other, and an output is performed. Corresponding to the implementation shown in FIG. 2, the storage capacitor C2 couples the control terminal of the pull-up sub-circuit 131 to a higher potential, so that the pull-up transistor M15 and the pull-up transistor M17 are both turned on, and the valid signal is input to the third clock signal terminal CLKD; therefore, the signal output from the signal output terminal is the valid third clock signal. In the pull-down phase, an invalid third clock signal is provided to the corresponding third clock signal terminal, and, by means of the pull-down control circuit, the input terminal and the output terminal of the pull-down sub-circuit can be electrically coupled to each other, and the input terminal of the pull-down control circuit and the control terminal of the pull-up sub-circuit can be electrically coupled to each other, so that the control terminal of the pull-up sub-circuit and the signal output terminal can be pulled down. Corresponding to the implementation shown in FIG. 2, during the pull-down phase, the signal input from the third clock signal terminal CLKD is an invalid signal; at this time, the second pull-down control transistor M9 is turned on to pull down the potential of the pull-up control terminal Q of the pull-up sub-circuit to the low level input through the second level signal terminal VGL1, and the signal input through the first pull-down control transistor M8 makes the pull-down control terminal QB_1 of the pull-down sub-circuit have a high level signal; therefore, the input terminal and the output terminal of the pull-down sub-circuit are electrically coupled to each other, so that the pull-down transistors M16 and M18 can be turned on to pull down the potential of the signal output terminal to the low level signal input through the third level signal terminal VGL2, while the pull-down control terminal QB_2 of the pull-down sub-circuit is in the floating state. The signal output by the first stage of shift register unit can be used as a trigger signal of the display signal input sub-circuit in the third stage of shift register unit. The signal output by the second stage of shift register unit can be used as a trigger signal of the display signal input sub-circuit in the fourth stage of shift register unit. For each of the other stages of shift register units, after a valid display trigger signal is received, all transistors therein operate in the same manner as those in the first stage of shift register unit, which will not be described in detail herein.
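The charging, output, and pull-down phases described above can be loosely modeled in software. The following Python sketch is a heavily simplified behavioral illustration, not the disclosed netlist: Q stands in for the pull-up control terminal, the value 2.0 stands in for the bootstrapped potential provided by the storage capacitor C2, and the trigger and clkd sequences are hypothetical per-step waveforms.

```python
# Toy behavioral model of one stage (illustrative only; assumes idealized
# signals). The output terminal passes CLKD only while Q is charged.

def run_stage(trigger, clkd):
    """trigger/clkd: per-step 0/1 sequences; returns the OUT waveform."""
    q, out = 0.0, []
    for stu2, ck in zip(trigger, clkd):
        if stu2:
            q = 1.0              # charging phase: the display input writes VDD to Q
        elif q > 0 and ck:
            q = 2.0              # output phase: C2 bootstraps Q to a higher potential
        elif q > 0 and not ck and len(out) and out[-1]:
            q = 0.0              # pull-down phase: QB_1 goes high; Q is reset via VGL1
        out.append(1 if (q >= 1.0 and ck) else 0)
    return out

print(run_stage(trigger=[1, 0, 0, 0], clkd=[0, 1, 0, 0]))
# [0, 1, 0, 0]: OUT reproduces the valid CLKD pulse, then is pulled low
```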
In an implementation provided by the present disclosure, the first stage of shift register unit and the second stage of shift register unit share the display trigger signal STU2. During the outputting of the display scanning signal, the switch transistor M4 is always turned off, so that an influence of the detection trigger signal stored in the first storage sub-circuit C1 on displaying is avoided. After all the odd-numbered stages of shift register units complete outputting display scanning signals, the detection scanning signal output phase is entered. At this time, the second clock signal terminal CLKC of the first stage of shift register unit receives the valid second clock signal, the signal stored in the first storage sub-circuit C1 is output to the control terminal of the pull-up sub-circuit, and the storage capacitor C2 is charged; at the same time, the valid third clock signal is provided to the third clock signal terminal of the first stage of shift register unit, so that the input terminal and the output terminal of the pull-up sub-circuit are electrically coupled to each other, and the first stage of shift register unit outputs the detection scanning signal. Similarly, after all the even-numbered stages of shift register units complete outputting display scanning signals, the detection scanning signal output phase T2 is entered. The process is similar to that of the odd-numbered stages of shift register units in the detection scanning signal output phase, and thus is not described herein again. Before each frame period ends, a valid signal is provided to the control terminal TRST2 of the noise reduction circuit of the shift register unit, so that the control terminal of the pull-up sub-circuit can be reset, thereby facilitating the next frame period. In the present disclosure, the proportion in which output waveforms are superposed is set by adjusting a pulse width of the third clock signal and a pulse width of the input signal. In the present disclosure, the first level signal terminal VDD may provide a high level signal, the second level signal terminal VGL1 and the third level signal terminal VGL2 may provide low level signals, and the low level signals provided by the second level signal terminal VGL1 and the third level signal terminal VGL2 may be the same as or different from each other; for example, the potential of the low level signal provided by the second level signal terminal VGL1 may be lower than the potential of the low level signal provided by the third level signal terminal VGL2. It will be understood that the above implementations are merely exemplary implementations employed to illustrate the principle of the present disclosure, and the present disclosure is not limited thereto. It will be apparent to those skilled in the art that various changes and modifications may be made therein without departing from the spirit and scope of the present disclosure, and these changes and modifications are to be considered within the scope of the present disclosure. | 91,736 |
11862100 | DETAILED DESCRIPTION Embodiments of the present disclosure will be described more fully hereinafter with reference to the accompanying drawings. Like reference numerals may refer to like elements throughout the accompanying drawings. It will be understood that when a component, such as a film, a region, a layer, or an element, is referred to as being “on”, “connected to”, “coupled to”, or “adjacent to” another component, it can be directly on, connected, coupled, or adjacent to the other component, or intervening components may be present. It will also be understood that when a component is referred to as being “between” two components, it can be the only component between the two components, or one or more intervening components may also be present. It will also be understood that when a component is referred to as “covering” another component, it can be the only component covering the other component, or one or more intervening components may also be covering the other component. Other words used to describe the relationships between components should be interpreted in a like fashion. The term “and/or” includes one or more combinations of the associated listed items. The terms “first”, “second”, etc. are used to describe various components, but the components are not limited by the terms. The terms are used only to differentiate one component from another component. For example, without departing from the scope and spirit of the present disclosure, a first component may be referred to as a second component, and similarly, the second component may be referred to as the first component. As used herein, the singular forms “a”, “an” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. Spatially relative terms, such as “beneath”, “below”, “lower”, “under”, “above”, “upper”, etc., may be used herein for ease of description to describe one element or feature's relationship to another element(s) or feature(s) as illustrated in the figures. It will be understood that the spatially relative terms are intended to encompass different orientations of the device in use or operation in addition to the orientation depicted in the figures. For example, if the device in the figures is turned over, elements described as “below” or “beneath” or “under” other elements or features would then be oriented “above” the other elements or features. Thus, the example terms “below” and “under” can encompass both an orientation of above and below. It will be understood that the terms “include”, “comprise”, “have”, etc. specify the presence of features, numbers, steps, operations, elements, or components described in the specification, or a combination thereof, without precluding the presence or additional possibility of one or more other features, numbers, steps, operations, elements, or components, or a combination thereof. FIG. 1 is a perspective view of a display device, according to an embodiment of the present disclosure. FIG. 2 is a cross-sectional view of a display device, according to an embodiment of the present disclosure. Referring to FIGS. 1 and 2, a display device DD may have a shape of a rectangle having a short side parallel to a first direction DR1 and a long side parallel to a second direction DR2 intersecting the first direction DR1. However, embodiments of the present disclosure are not limited thereto, and the display device DD may have various shapes such as, for example, a circle and a polygon.
The display device DD may be a device that is activated depending on an electrical signal. The display device DD may include various embodiments. For example, the display device DD may be applied to an electronic device such as a smartwatch, a tablet PC, a notebook computer, a computer, a smart television, etc. Hereinafter, a normal direction substantially perpendicular to a plane defined by the first direction DR1 and the second direction DR2 is defined as a third direction DR3. In the specification, the expression “when viewed from above a plane” means “when viewed in the third direction DR3”. A top surface of the display device DD may be defined as a display surface IS, and may have a plane defined by the first direction DR1 and the second direction DR2. Images IM generated by the display device DD may be provided to a user through the display surface IS. The display surface IS may be divided into a transparent area TA and a bezel area BZA. The transparent area TA may be an area in which the images IM are displayed. A user visually perceives the images IM through the transparent area TA. In an embodiment, the transparent area TA is illustrated in a shape of a quadrangle whose vertices are rounded. However, this is illustrated only as an example. For example, according to embodiments, the transparent area TA may have various shapes. The bezel area BZA is adjacent to the transparent area TA. The bezel area BZA may have a predetermined color. The bezel area BZA may surround the transparent area TA. Accordingly, the shape of the transparent area TA may be substantially defined by the bezel area BZA. However, this is illustrated by way of example. For example, according to embodiments, the bezel area BZA may be disposed adjacent to only one side of the transparent area TA or may be omitted. The display device DD may detect an external input applied from outside of the display device DD. The external input may include various inputs applied from outside of the display device DD. For example, in addition to a contact by a part of a body such as the user's hand (including a user's finger) US_F, the external input may include an input (e.g., hovering) applied when the user's hand US_F approaches the display device DD or is adjacent to the display device DD within a predetermined distance (e.g., without physically contacting the display device DD). In addition, the external input may have various forms such as, for example, force, pressure, temperature, light, etc. The display device DD may detect the user's biometric information applied from outside of the display device DD. A biometric information sensing area capable of detecting the user's biometric information may be provided on the display surface IS of the display device DD. The biometric information sensing area may be provided in the entire area of the transparent area TA or may be provided in a partial area of the transparent area TA. The display device DD may include a window WM, a display module DM, and a housing EDC. In an embodiment, an appearance of the display device DD may be implemented by coupling the window WM and the housing EDC. A front surface of the window WM defines the display surface IS of the display device DD. The window WM may include an optically transparent material. For example, the window WM may include glass or plastic. The window WM may include a multi-layer structure or a single-layer structure.
For example, the window WM may include a plurality of plastic films bonded to each other by an adhesive, or may have a glass substrate and a plastic film bonded to each other by an adhesive. The display module DM includes a display panel DP and an input sensing layer ISL. The display panel DP may display an image depending on an electrical signal. The input sensing layer ISL may sense an external input applied from outside of the display module DM. The external input may be provided in various forms. The display panel DP according to an embodiment of the present disclosure may be a light emitting display panel, but is not particularly limited thereto. For example, the display panel DP may be an organic light emitting display panel, an inorganic light emitting display panel, or a quantum dot light emitting display panel. A light emitting layer of the organic light emitting display panel may include an organic light emitting material. A light emitting layer of the inorganic light emitting display panel may include an inorganic light emitting material. An emission layer of the quantum dot light emitting display panel may include a quantum dot, a quantum rod, etc. Hereinafter, the display panel DP is described as an organic light emitting display panel. Referring to FIG. 2, the display panel DP includes a base layer BL, a circuit layer DP_CL, an element layer DP_ED, and an encapsulation layer TFE. The display panel DP according to an embodiment of the present disclosure may be a flexible display panel. However, the present disclosure is not limited thereto. For example, the display panel DP may be a foldable display panel, which is folded with respect to a folding axis, or a rigid display panel. The base layer BL may include a synthetic resin layer. The synthetic resin layer may be a polyimide-based resin layer. However, the material thereof is not particularly limited. For example, according to embodiments, the base layer BL may include a glass substrate, a metal substrate, an organic/inorganic composite substrate, etc. The circuit layer DP_CL is disposed on the base layer BL. The circuit layer DP_CL includes at least one insulating layer and a circuit element. Hereinafter, the insulating layer included in the circuit layer DP_CL is referred to as an “intermediate insulating layer”. The intermediate insulating layer includes at least one intermediate inorganic film and at least one intermediate organic film. The circuit element may include a pixel driving circuit, which is included in each of the plurality of pixels and is used to display an image, and a sensor driving circuit, which is included in each of the plurality of sensors and is used to recognize external information. The external information may be biometric information. In an embodiment of the present disclosure, the sensor may be a fingerprint recognition sensor, a proximity sensor, an iris recognition sensor, etc. Furthermore, the sensor may be an optical sensor that recognizes the biometric information in an optical scheme. The circuit layer DP_CL may further include signal lines connected to the pixel driving circuit and the sensor driving circuit. The element layer DP_ED may include a light emitting element included in each of the pixels and a light sensing element included in each of the sensors. In an embodiment of the present disclosure, the light sensing element may be a photodiode. An optical fingerprint sensor may detect light reflected by a user's fingerprint.
The circuit layer DP_CL and the element layer DP_ED will be described in further detail below with reference to FIGS. 12, 13A, and 13B. The encapsulation layer TFE encapsulates the element layer DP_ED. The encapsulation layer TFE may include at least one organic film and at least one inorganic film. The inorganic film may include inorganic materials and may protect the element layer DP_ED from moisture/oxygen. The inorganic film may include, for example, a silicon nitride layer, a silicon oxynitride layer, a silicon oxide layer, a titanium oxide layer, an aluminum oxide layer, etc. However, the inorganic film is not limited thereto. The organic film may include organic materials and may protect the element layer DP_ED from foreign objects such as dust particles. The input sensing layer ISL may be formed on the display panel DP. The input sensing layer ISL may be disposed directly on the encapsulation layer TFE. According to an embodiment of the present disclosure, the input sensing layer ISL may be formed on the display panel DP through a subsequent process. For example, in an embodiment, when the input sensing layer ISL is directly disposed on the display panel DP, an adhesive film is not interposed between the input sensing layer ISL and the encapsulation layer TFE. However, alternatively, in an embodiment, an inner adhesive film may be interposed between the input sensing layer ISL and the display panel DP. In this case, the input sensing layer ISL is not manufactured by a process continuous with that of the display panel DP. That is, the input sensing layer ISL may be manufactured through a process separate from the manufacturing process of the display panel DP and may then be fixed on an upper surface of the display panel DP by the inner adhesive film. The input sensing layer ISL may sense an external input (e.g., a user's touch), may change the sensed input into a predetermined input signal, and may provide the input signal to the display panel DP. The input sensing layer ISL may include a plurality of sensing electrodes that sense an external input. The sensing electrodes may sense the external input in a capacitive scheme. The display panel DP may receive an input signal from the input sensing layer ISL and may generate an image corresponding to the input signal. The display module DM may further include a color filter layer CFL. In an embodiment of the present disclosure, the color filter layer CFL may be disposed on the input sensing layer ISL. However, the present disclosure is not limited thereto. For example, according to embodiments, the color filter layer CFL may be interposed between the display panel DP and the input sensing layer ISL. The color filter layer CFL may include a plurality of color filters and a black matrix. The structures of the input sensing layer ISL and the color filter layer CFL will be described in further detail below. The display device DD according to an embodiment of the present disclosure may further include an adhesive layer AL. The window WM may be attached to the input sensing layer ISL by the adhesive layer AL. The adhesive layer AL may include, for example, an optically clear adhesive, an optically clear adhesive resin, or a pressure sensitive adhesive (PSA). The housing EDC is coupled to the window WM so as to provide a predetermined inner space. The display module DM may be accommodated in the inner space. The housing EDC may include a material having relatively high rigidity.
For example, the housing EDC may include glass, plastic, or metal, or may include a plurality of frames and/or plates that are composed of a combination thereof. The housing EDC may stably protect configurations of the display device DD accommodated in the inner space from an external impact. In an embodiment, a battery module for supplying power utilized for overall operations of the display device DD may be interposed between the display module DM and the housing EDC. FIG. 3 is a block diagram of a display device, according to an embodiment of the present disclosure. Referring to FIG. 3, the display device DD includes the display panel DP, a driving controller 100, a data driving circuit 200, a scan driving circuit 300, an emission driving circuit 400, a readout circuit 500, and a voltage generator 600. The driving controller 100 receives an image signal RGB and a control signal CTRL. The driving controller 100 generates an image data signal DATA by converting a data format of the image signal RGB so as to be suitable for the interface specification of the data driving circuit 200. The driving controller 100 outputs a scan control signal SCS, a data control signal DCS, and an emission signal ECS. The data driving circuit 200 receives the data control signal DCS and the image data signal DATA from the driving controller 100. The data driving circuit 200 converts the image data signal DATA into data signals and then outputs the data signals to a plurality of data lines DL1 to DLm, which are described in further detail below. The data signals are analog voltages corresponding to grayscale values of the image data signal DATA. The voltage generator 600 generates voltages utilized to operate the display panel DP. In an embodiment, the voltage generator 600 generates a first driving voltage ELVDD, a second driving voltage ELVSS, a first initialization voltage VINT1, a second initialization voltage VINT2, and a reset voltage VRST. The display panel DP includes scan lines GIL1 to GILn, GCL1 to GCLn, and GWL1 to GWLn+1, emission control lines EML1 to EMLn, the data lines DL1 to DLm, readout lines RL1 to RLm, and pixels PX, in which n and m are positive integers. The display panel DP may include a display area DA corresponding to the transparent area TA (see FIG. 1) and a non-display area NDA corresponding to the bezel area BZA (see FIG. 1). The pixels PX and sensors FX may be disposed in the display area DA. The scan driving circuit 300 and the emission driving circuit 400 may be disposed in the non-display area NDA of the display panel DP. In an embodiment, the scan driving circuit 300 may be arranged on a first side of the display panel DP. The scan lines GIL1 to GILn, GCL1 to GCLn, and GWL1 to GWLn+1 extend from the scan driving circuit 300 in the first direction DR1. The scan lines GIL1 to GILn, GCL1 to GCLn, and GWL1 to GWLn+1 and the emission control lines EML1 to EMLn are spaced apart from one another in the second direction DR2. The data lines DL1 to DLm extend from the data driving circuit 200 in a direction opposite to the second direction DR2, and are spaced apart from one another in the first direction DR1. In the example shown in FIG. 3, the scan driving circuit 300 and the emission driving circuit 400 face each other with the pixels PX interposed therebetween, but the present disclosure is not limited thereto. For example, according to embodiments, the scan driving circuit 300 and the emission driving circuit 400 may be disposed adjacent to each other on one of the first side and the second side of the display panel DP.
In an embodiment, the scan driving circuit300and the emission driving circuit400may be implemented with one circuit. The plurality of pixels PX are electrically connected to the scan lines GIL1to GILn, GCL1to GCLn, and GWL1to GWLn+1, the emission control lines EML1to EMLn, and the data lines DL1to DLm. Each of the plurality of pixels PX may be electrically connected to four scan lines and one emission control line. For example, as shown inFIG.3, pixels PX in a first row may be connected to the scan lines GIL1, GCL1, GWL1, and GWL2and the emission control line EML1. Furthermore, pixels in a j-th row may be connected to the scan lines GILj, GCLj, GWLj, and GWLj+1 and the emission control line EMLj, where j is a positive integer. Each of the plurality of pixels PX includes a light emitting element ED (seeFIG.6) and a pixel driving circuit PDC (seeFIG.6) that controls the light emission of the light emitting element ED. The pixel driving circuit PDC may include one or more transistors and one or more capacitors. The scan driving circuit300and the emission driving circuit400may include transistors formed through the same process as the pixel driving circuit PDC. Each of the plurality of pixels PX receives the first driving voltage ELVDD, the second driving voltage ELVSS, the first initialization voltage VINT1, and the second initialization voltage VINT2from the voltage generator600. The scan driving circuit300receives the scan control signal SCS from the driving controller100. The scan driving circuit300may output scan signals to the scan lines GIL1to GILn, GCL1to GCLn, and GWL1to GWLn+1 in response to the scan control signal SCS. The emission driving circuit400is arranged on a second side of the display panel DP. The emission control lines EML1to EMLn extend from the emission driving circuit400in a direction opposite to the first direction DR1. The emission driving circuit400may output emission control signals to the emission control lines EML1to EMLn. Each of the sensors FX includes a light sensing element OPD (seeFIG.6) and a sensor driving circuit SDC (seeFIG.6). The sensor driving circuit SDC may include one or more transistors. The sensor driving circuit SDC may include transistors formed through the same process as the pixel driving circuit PDC. Each of the sensors FX may be connected to a corresponding scan line among the scan lines GWL1to GWLn+1 and a corresponding readout line among the readout lines RL1to RLm. In an embodiment, the number of the sensors FX may be smaller than the number of pixels PX. The readout circuit500receives a readout control signal RCS from the driving controller100. The readout circuit500may receive a detection signal from the readout lines RL1to RLm in response to the readout control signal RCS and then may provide a biometric sensing signal FSS to the driving controller100. The biometric sensing signal FSS provided from the readout circuit500to the driving controller100may be a fingerprint sensing signal corresponding to a user's fingerprint. According to embodiments, the readout circuit500may provide a reset signal RST (seeFIG.6) to the sensors FX. In an embodiment, the reset signal RST is a signal commonly provided to the sensors FX.
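The wiring rule stated above (four scan lines and one emission control line per pixel row) can be expressed compactly. The following Python sketch is a minimal restatement of that rule; the dictionary keys are descriptive labels introduced here for illustration, not terms from the disclosure.

```python
# Minimal sketch of the per-row wiring rule: pixels in the j-th row connect
# to the scan lines GILj, GCLj, GWLj, and GWLj+1 and to the emission control
# line EMLj. Key names are illustrative labels only.
def pixel_row_connections(j: int) -> dict[str, str]:
    return {
        "init_scan":         f"GIL{j}",      # initialization scan line
        "compensation_scan": f"GCL{j}",      # compensation scan line
        "write_scan":        f"GWL{j}",      # data-write scan line
        "next_row_scan":     f"GWL{j + 1}",  # shared with the next row
        "emission":          f"EML{j}",      # emission control line
    }

# Pixels in the first row use GIL1, GCL1, GWL1, GWL2, and EML1.
assert pixel_row_connections(1)["next_row_scan"] == "GWL2"
```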
FIGS.4A and4Bare diagrams illustrating a display area of a display panel, according to embodiments of the present disclosure. Referring toFIG.4A, the display area DA includes a first display area DA1, a second display area DA2, and a third display area DA3. The pixels PX shown inFIG.3may be disposed in the first display area DA1, the second display area DA2, and the third display area DA3. The sensors FX shown inFIG.3may be disposed in the second display area DA2. In an embodiment, the area sizes of the first display area DA1, the second display area DA2, and the third display area DA3may be different from one another. In an embodiment, the area size of each of the second display area DA2and the third display area DA3may be smaller than the area size of the first display area DA1. The second display area DA2may be an area where the sensors FX are disposed, and may be referred to as a “biometric sensing area” or a “fingerprint sensing area”. Referring toFIG.4B, the display area DA includes the first display area DA1and the second display area DA2. The pixels PX shown inFIG.3may be disposed in the first display area DA1and the second display area DA2. The sensors FX shown inFIG.3may be disposed in the second display area DA2. In an embodiment, the area sizes of the first display area DA1and the second display area DA2may be different from each other. In an embodiment, the area size of the second display area DA2may be smaller than the area size of the first display area DA1. The second display area DA2may be an area where the sensors FX are disposed, and may be referred to as a “biometric sensing area” or a “fingerprint sensing area”. The area size and location of the second display area DA2in which the sensors FX are disposed are not limited to those illustrated inFIGS.4A and4Band may be changed variously.FIG.4Billustrates that the first display area DA1is disposed above the second display area DA2. However, the present disclosure is not limited thereto. For example, in an embodiment, the second display area DA2may be above the first display area DA1. In an embodiment, the display area DA may include two or more second display areas DA2in which the sensors FX are disposed. FIGS.5A,5B, and5Care enlarged plan views of a partial area of a display panel, according to embodiments of the present disclosure. FIG.5Ais an enlarged plan view of the first display area DA1shown inFIGS.4A and4B. A plan view of the third display area DA3illustrated inFIG.4Amay be the same as a plan view of the first display area DA1. FIGS.5B and5Care enlarged plan views of the second display area DA2shown inFIGS.4A and4B. Referring toFIG.5A, pixels PXR, PXG, PXB are arranged in the first display area DA1of the display panel DP. The pixel PXR includes a first light emitting element ED_R and the pixel driving circuit PDC, the pixel PXG includes a second light emitting element ED_G and the pixel driving circuit PDC, and the pixel PXB includes a third light emitting element ED_B and the pixel driving circuit PDC. The pixels PXR, PXG, PXB are alternately arranged in the first direction DR1and in the second direction DR2. The pixels PXR, PXG, PXB include the first pixels PXR including a light emitting element (hereinafter referred to as a “first light emitting element ED_R”) that outputs light of a first color (e.g., red (R)), the second pixels PXG including a light emitting element (hereinafter referred to as a “second light emitting element ED_G”) that outputs light of a second color (e.g., green (G)), and the third pixels PXB including a light emitting element (hereinafter referred to as a “third light emitting element ED_B”) that outputs light of a third color (e.g., blue (B)).
As shown inFIG.5A, the first pixels PXR and the third pixels PXB may be alternately and repeatedly arranged in the second direction DR2as well as in the first direction DR1. The second pixels PXG may be arranged in the first direction DR1and the second direction DR2. An arrangement structure of the pixels PX is not limited to the embodiment illustrated inFIG.5A. In an embodiment of the present disclosure, the first light emitting element ED_R may have a size greater than the second light emitting element ED_G. Moreover, the third light emitting element ED_B may have a size greater than or about equal to that of the first light emitting element ED_R. The size of each of the first to third light emitting elements ED_R, ED_G, ED_B is not limited thereto, and may be variously modified. For example, in an embodiment of the present disclosure, the first to third light emitting elements ED_R, ED_G, ED_B may have the same size as one another. Furthermore, although it is illustrated that each of the first to third light emitting elements ED_R, ED_G, and ED_B has a quadrangular shape, embodiments of the present disclosure are not limited thereto. For example, according to embodiments, a shape of each of the first to third light emitting elements ED_R, ED_G, and ED_B may be variously transformed into a polygon, a circle, an oval, etc. As another example, the shapes of the first to third light emitting elements ED_R, ED_G, and ED_B may be different from one another. For example, the second light emitting element ED_G may have a circular shape, and the first and third light emitting elements ED_R and ED_B may have a quadrangular shape. Referring toFIG.5B, the pixels PXR, PXG, PXB and the sensors FX are arranged in the second display area DA2of the display panel DP. The pixel PXR includes a first light emitting element ED_R and the pixel driving circuit PDC, the pixel PXG includes a second light emitting element ED_G and the pixel driving circuit PDC, and the pixel PXB includes a third light emitting element ED_B and the pixel driving circuit PDC. Each of the sensors FX includes the light sensing element OPD and the sensor driving circuit SDC. The pixels PXR, PXG, PXB and the sensors FX are alternately arranged in the first direction DR1and alternately arranged in the second direction DR2. The pixels PXR, PXG, PXB include the first pixels PXR including a light emitting element (hereinafter referred to as a “first light emitting element ED_R”) that outputs light of a first color (e.g., red (R)), the second pixels PXG including a light emitting element (hereinafter referred to as a “second light emitting element ED_G”) that outputs light of a second color (e.g., green (G)), and the third pixels PXB including a light emitting element (hereinafter referred to as a “third light emitting element ED_B”) that outputs light of a third color (e.g., blue (B)). As shown inFIG.5B, the first pixels PXR and the third pixels PXB may be alternately and repeatedly arranged in each of the first and second directions DR1and DR2. The second pixels PXG may be arranged in the first direction DR1and the second direction DR2. Each of the sensors FX may be disposed between the first pixel PXR and the third pixel PXB, which are adjacent to each other, in the first and second directions DR1and DR2. In addition, each of the sensors FX may be interposed between two second pixels PXG in the first and second directions DR1and DR2. However, the arrangement structure of the pixels PX and the sensors FX is not limited thereto. 
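To make the arrangement concrete, the following Python sketch encodes one self-consistent reading of the FIG.5B-style mosaic as a small repeating tile and checks two properties stated above: each sensor FX lies between two second pixels PXG in the first and second directions, and between a first pixel PXR and a third pixel PXB that are adjacent to each other. The tile itself is an illustrative assumption, not the exact drawn geometry.

```python
# One self-consistent reading (an illustrative assumption) of the mosaic
# described above: first/third pixels R and B alternate along both
# directions, second pixels G fill the remaining pixel sites, and sensors
# FX are interspersed, with fewer sensors than pixels.
TILE = [
    ["R",  "G",  "B",  "G"],
    ["G",  "FX", "G",  "FX"],
    ["B",  "G",  "R",  "G"],
    ["G",  "FX", "G",  "FX"],
]

def neighbors(grid, r, c, offsets):
    """Return the cells at the given offsets, wrapping around the tile."""
    n = len(grid)
    return [grid[(r + dr) % n][(c + dc) % n] for dr, dc in offsets]

for r, row in enumerate(TILE):
    for c, cell in enumerate(row):
        if cell == "FX":
            ortho = neighbors(TILE, r, c, [(-1, 0), (1, 0), (0, -1), (0, 1)])
            diag = neighbors(TILE, r, c, [(-1, -1), (-1, 1), (1, -1), (1, 1)])
            assert ortho.count("G") == 4        # FX between two PXG in DR1 and DR2
            assert "R" in diag and "B" in diag  # FX between adjacent PXR and PXB

# Sensors are sparser than pixels, as the description notes.
flat = [cell for row in TILE for cell in row]
assert flat.count("FX") < len(flat) - flat.count("FX")
```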
As shown inFIG.5C, in an embodiment, light emitting elements that output the same light may be arranged in the second direction DR2. For example, the first pixels PXR may be arranged in a first column, the second pixels PXG may be arranged in a second column, the third pixels PXB may be arranged in a third column, and the second pixels PXG may be arranged in a fourth column. Each of the sensors FX may be interposed between the two first pixels PXR, between the two second pixels PXG, and between the two third pixels PXB, in the second direction DR2. Furthermore, in the first direction DR1, each of the sensors FX may be interposed between the first pixel PXR and the third pixel PXB, which are adjacent to each other, and between the two second pixels PXG. The arrangement structure of the pixels PX and the sensors FX may be variously modified according to embodiments of the present disclosure. For example, the first pixels PXR and the third pixels PXB may be arranged in different columns or in different rows. When the first pixels PXR are arranged in an odd-numbered column, the third pixels PXB may be arranged in an even-numbered column. When the first pixels PXR are arranged in an odd-numbered row, the third pixels PXB may be arranged in an even-numbered row. In this case, at least one second pixel PXG and at least one sensor FX may be interposed between two first pixels PXR adjacent to each other in the first and second directions DR1and DR2. Moreover, at least one second pixel PXG and at least one sensor FX may be interposed between two third pixels PXB adjacent to each other in the first and second directions DR1and DR2. In an embodiment of the present disclosure, the first light emitting element ED_R may have a size greater than the second light emitting element ED_G. Moreover, the third light emitting element ED_B may have a size greater than or about equal to that of the first light emitting element ED_R. The size of each of the first to third light emitting elements ED_R, ED_G, ED_B is not limited thereto, and may be variously modified. For example, in an embodiment of the present disclosure, the first to third light emitting elements ED_R, ED_G, ED_B may have the same size as one another. Furthermore, although it is illustrated that each of the first to third light emitting elements ED_R, ED_G, and ED_B has a quadrangular shape, embodiments of the present disclosure are not limited thereto. For example, according to embodiments, a shape of each of the first to third light emitting elements ED_R, ED_G, and ED_B may be variously transformed into a polygon, a circle, an oval, etc. As another example, the shapes of the first to third light emitting elements ED_R, ED_G, and ED_B may be different from one another. For example, the second light emitting element ED_G may have a circular shape, and the first and third light emitting elements ED_R and ED_B may have a quadrangular shape. The light sensing element OPD may have a smaller size than the first and third light emitting elements ED_R and ED_B. In an embodiment of the present disclosure, the light sensing element OPD may have a size smaller than or about equal to that of the second light emitting element ED_G. However, the size of the light sensing element OPD is not limited thereto, and may be variously modified. Although it is illustrated that the light sensing element OPD has a quadrangular shape, the shape of the light sensing element OPD is not limited thereto.
For example, according to embodiments, the shape of the light sensing element OPD may be variously transformed into a polygon, a circle, an oval, etc. Each of the first to third light emitting elements ED_R, ED_G, and ED_B is electrically connected to the corresponding pixel driving circuit PDC. The pixel driving circuit PDC may include a plurality of transistors and a capacitor. The pixel driving circuits PDC connected to each of the first to third light emitting elements ED_R, ED_G, and ED_B may have the same circuit configuration. The light sensing element OPD is electrically connected to the corresponding sensor driving circuit SDC. The sensor driving circuit SDC may include a plurality of transistors. In an embodiment of the present disclosure, the sensor driving circuit SDC and the pixel driving circuit PDC may be formed simultaneously through the same process. Furthermore, the scan driving circuit300may include transistors formed through the same process as the pixel driving circuit PDC and the sensor driving circuit SDC. The pixel driving circuit PDC receives the first driving voltage ELVDD, the second driving voltage ELVSS, and the first and second initialization voltages VINT1and VINT2from the voltage generator600. The sensor driving circuit SDC receives the reset voltage VRST and the second driving voltage ELVSS from the voltage generator600. FIG.6is a circuit diagram of a pixel and a sensor, according to an embodiment of the present disclosure. FIG.6illustrates one pixel PXij among the plurality of pixels PX shown inFIG.3and one sensor FXij among the plurality of sensors FX. Each of the plurality of pixels PX shown inFIG.3may have the same circuit configuration as the equivalent circuit diagram of the pixel PXij shown inFIG.6. Moreover, each of the plurality of sensors FX shown inFIG.3may have the same circuit configuration as the equivalent circuit diagram of the sensor FXij shown inFIG.6. Referring toFIG.6, the pixel PXij includes the pixel driving circuit PDC and at least one light emitting element ED. The light emitting element ED may be a light emitting diode. In an embodiment of the present disclosure, the light emitting element ED may be an organic light emitting diode including an organic light emitting layer. The pixel driving circuit PDC according to an embodiment includes first to seventh transistors T1, T2, T3, T4, T5, T6, and T7and one capacitor Cst. The third and fourth transistors T3and T4among the first to seventh transistors T1to T7may be N-type transistors that use an oxide semiconductor as a semiconductor layer. Each of the first, second, fifth, sixth, and seventh transistors T1, T2, T5, T6, and T7may be a P-type transistor having a low-temperature polycrystalline silicon (LTPS) semiconductor layer. Some of the first to seventh transistors T1to T7may be P-type transistors, and the remaining transistors may be N-type transistors that use an oxide semiconductor as a semiconductor layer. In an embodiment, among the first to seventh transistors T1to T7, the first, second, and fifth to seventh transistors T1, T2, and T5to T7are P-type transistors, and the third and fourth transistors T3and T4are N-type transistors. In an embodiment, at least one of the first to seventh transistors T1to T7may be an N-type transistor and the rest may be P-type transistors. A configuration of the pixel driving circuit PDC according to an embodiment of the present disclosure is not limited to the embodiment illustrated inFIG.6.
The pixel driving circuit PDC illustrated inFIG.6is only an example. For example, the configuration of the pixel driving circuit PDC may be modified. For example, in an embodiment, all of the first to seventh transistors T1to T7may be P-type transistors or N-type transistors. The scan lines GILj, GCLj, GWLj, and GWLj+1 may deliver scan signals GIj, GCj, GWj, and GWj+1, respectively. The emission control line EMLj may deliver an emission control signal EMj. The data line DLi delivers a data signal Di. The data signal Di may have a voltage level corresponding to the image signal RGB input to the display device DD (seeFIG.3). First to fourth driving voltage lines VL1, VL2, VL3, and VL4may deliver the first driving voltage ELVDD, the second driving voltage ELVSS, the first initialization voltage VINT1, and the second initialization voltage VINT2, respectively. The first transistor T1includes a first electrode connected to the first driving voltage line VL1via the fifth transistor T5, a second electrode electrically connected to an anode of the light emitting element ED via the sixth transistor T6, and a gate electrode connected to one end of the capacitor Cst. The first transistor T1may receive the data signal Di delivered by the data line DLi depending on the switching operation of the second transistor T2and then may supply a driving current Id to the light emitting element ED. The second transistor T2includes a first electrode connected to the data line DLi, a second electrode connected to the first electrode of the first transistor T1, and a gate electrode connected to the scan line GWLj. The second transistor T2may be turned on depending on the scan signal GWj received through the scan line GWLj and then may deliver the data signal Di delivered from the data line DLi to the first electrode of the first transistor T1. The third transistor T3includes a first electrode connected to the gate electrode of the first transistor T1, a second electrode connected to the second electrode of the first transistor T1, and a gate electrode connected to the scan line GCLj. The third transistor T3may be turned on depending on the scan signal GCj received through the scan line GCLj, and thus, the gate electrode and the second electrode of the first transistor T1may be connected. Accordingly, the first transistor T1may be diode-connected. The fourth transistor T4includes a first electrode connected to the gate electrode of the first transistor T1, a second electrode connected to the fourth driving voltage line VL4through which the second initialization voltage VINT2is supplied, and a gate electrode connected to the scan line GILj. The fourth transistor T4may be turned on depending on the scan signal GIj received through the scan line GILj and then may perform an initialization operation of initializing a voltage of the gate electrode of the first transistor T1by supplying the second initialization voltage VINT2to the gate electrode of the first transistor T1. The fifth transistor T5includes a first electrode connected to the first driving voltage line VL1, a second electrode connected to the first electrode of the first transistor T1, and a gate electrode connected to the emission control line EMLj. The sixth transistor T6includes a first electrode connected to the second electrode of the first transistor T1, a second electrode connected to the anode of the light emitting element ED, and a gate electrode connected to the emission control line EMLj.
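The connections enumerated above (for the first to sixth transistors; the seventh transistor T7and the capacitor Cst are described next) can be captured as a compact netlist. In this Python sketch, each transistor maps to a (first electrode, second electrode, gate) triple; the internal net names are hypothetical labels introduced here for illustration.

```python
# Compact netlist sketch of the pixel driving circuit connections enumerated
# above. Net names beginning with "n_" are hypothetical internal labels.
PDC_NETLIST = {
    "T1": ("n_T5_T1", "n_T1_T6", "n_gate_T1"),  # driving transistor
    "T2": ("DLi", "n_T5_T1", "GWLj"),           # data-write switch
    "T3": ("n_gate_T1", "n_T1_T6", "GCLj"),     # diode-connects T1 when on
    "T4": ("n_gate_T1", "VL4", "GILj"),         # gate initialization (VINT2)
    "T5": ("VL1", "n_T5_T1", "EMLj"),           # ELVDD-side emission switch
    "T6": ("n_T1_T6", "anode_ED", "EMLj"),      # anode-side emission switch
}

# T3 bridges T1's gate and second electrode, which diode-connects T1 when on.
t1_first, t1_second, t1_gate = PDC_NETLIST["T1"]
assert PDC_NETLIST["T3"] == (t1_gate, t1_second, "GCLj")
```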
The fifth transistor T5and the sixth transistor T6may be simultaneously turned on depending on the emission control signal EMj received through the emission control line EMLj. In this way, the first driving voltage ELVDD may be compensated through the first transistor T1and may be supplied to the light emitting element ED. The seventh transistor T7includes a first electrode connected to the second electrode of the sixth transistor T6, a second electrode connected to the third driving voltage line VL3, and a gate electrode connected to the scan line GWLj+1. The seventh transistor T7is turned on depending on the scan signal GWj+1 received through the scan line GWLj+1, and bypasses a current of the anode of the light emitting element ED to the third driving voltage line VL3. As described above, one end of the capacitor Cst is connected to the gate electrode of the first transistor T1, and the other end of the capacitor Cst is connected to the first driving voltage line VL1. The cathode of the light emitting element ED may be connected to the second driving voltage line VL2that delivers the second driving voltage ELVSS. A structure of the pixel PXij according to an embodiment is not limited to the structure shown inFIG.6. The number of transistors included in the one pixel PXij, the number of capacitors included in the one pixel PXij, and the connection relationship thereof may be variously modified. The sensor FXij includes the light sensing element OPD and the sensor driving circuit SDC. The light sensing element OPD may be a photodiode. In an embodiment of the present disclosure, the light sensing element OPD may be an organic photodiode including an organic material as a photoelectric conversion layer. An anode of the light sensing element OPD may be connected to a first sensing node SN1, and a cathode of the light sensing element OPD may be connected to the second driving voltage line VL2that delivers the second driving voltage ELVSS. The sensor driving circuit SDC includes three transistors ST1to ST3. The three transistors ST1to ST3may be a reset transistor ST1, an amplification transistor ST2, and an output transistor ST3, respectively. Some of the reset transistor ST1, the amplification transistor ST2, and the output transistor ST3may be P-type transistors, and the others may be N-type transistors. In an embodiment of the present disclosure, the amplification transistor ST2may be a P-type transistor, and the reset transistor ST1and the output transistor ST3may be N-type transistors. However, the present disclosure is not limited thereto. For example, according to embodiments, the reset transistor ST1, the amplification transistor ST2, and the output transistor ST3may all be N-type transistors or may all be P-type transistors. Some (e.g., the reset transistor ST1) of the reset transistor ST1, the amplification transistor ST2, and the output transistor ST3may be transistors of the same type as the third and fourth transistors T3and T4of the pixel PXij. Some (e.g., the amplification transistor ST2and the output transistor ST3) of the reset transistor ST1, the amplification transistor ST2, and the output transistor ST3may be transistors of the same type as the first and second transistors T1and T2of the pixel PXij. The circuit configuration of the sensor driving circuit SDC according to an embodiment of the present disclosure is not limited to that illustrated inFIG.6.
That is, the sensor driving circuit SDC illustrated inFIG.6is only an example, and the configuration of the sensor driving circuit SDC may be modified. The reset transistor ST1includes a first electrode connected to a reset voltage line VL5that receives a reset voltage VRST, a second electrode connected to a first sensing node SN1, and a gate electrode connected to a reset line RSTL that receives a reset signal RST. The reset transistor ST1may reset the potential of the first sensing node SN1to a reset voltage VRST in response to the reset signal RST. In an embodiment of the present disclosure, the reset signal RST may be a pulse signal that transitions to an active level (e.g., a high level) at the start of one frame. In an embodiment, the reset voltage VRST may have a voltage level lower than the second driving voltage ELVSS. The amplification transistor ST2includes a first electrode connected to the first driving voltage line VL1that receives the first driving voltage ELVDD, a second electrode connected to a second sensing node SN2, and a gate electrode connected to the first sensing node SN1. The amplification transistor ST2may be turned on depending on the potential of the first sensing node SN1so as to apply the first driving voltage ELVDD to the second sensing node SN2. The first electrode of the amplification transistor ST2may receive the first initialization voltage VINT1instead of the first driving voltage ELVDD. The output transistor ST3includes a first electrode connected to the second sensing node SN2, a second electrode connected to the readout line RLi, and a gate electrode connected to the scan line GWLa that receives the scan signal GWa. The output transistor ST3may transmit a detection signal FSi to the readout line RLi in response to the scan signal GWa. FIG.7is a timing diagram for describing an operation of the pixel and the sensor shown inFIG.6, according to an embodiment of the present disclosure. Referring toFIGS.6and7, one frame Fs may include an emission period EP and a non-emission period NEP depending on an operation of the pixel PXij. The emission period EP may correspond to a low-level period (e.g., an active period) of the emission control signal EMj. The non-emission period NEP may correspond to a high-level period (e.g., an inactive period) of the emission control signal EMj. The non-emission period NEP may include an initialization period and a data programming and compensation period. When the scan signal GIj having a high level is provided through the scan line GILj during the initialization period, the fourth transistor T4is turned on. The second initialization voltage VINT2is delivered to the gate electrode of the first transistor T1through the fourth transistor T4so as to initialize the first transistor T1. Next, when the scan signal GCj having a high level is supplied through the scan line GCLj during the data programming and compensation period, the third transistor T3is turned on. The first transistor T1is diode-connected by the third transistor T3that is turned on and is forward-biased. At this time, when the scan signal GWj having a low level is supplied through the scan line GWLj, the second transistor T2is turned on. In this case, a compensation voltage, which is obtained by reducing the voltage of the data signal Di supplied from the data line DLi by a threshold voltage of the first transistor T1, is applied to the gate electrode of the first transistor T1. 
That is, a gate voltage applied to the gate electrode of the first transistor T1may be a compensation voltage. As the first driving voltage ELVDD and the compensation voltage are respectively applied to opposite ends of the capacitor Cst, a charge corresponding to a difference between the first driving voltage ELVDD and the compensation voltage may be stored in the capacitor Cst. Meanwhile, the seventh transistor T7is turned on in response to the scan signal GWj+1 having a low level delivered through the scan line GWLj+1. A part of the driving current Id may be drained through the seventh transistor T7as a bypass current Ibp. When the light emitting element ED emits light under the condition that a minimum current of the first transistor T1flows as a driving current for the purpose of displaying a black image, the black image may not be normally displayed. Accordingly, the seventh transistor T7in the pixel PXij according to an embodiment of the present disclosure may drain (or disperse) a part of the minimum current of the first transistor T1to a current path, which is different from a current path to the light emitting element ED, as the bypass current Ibp. Herein, the minimum current of the first transistor T1means a current flowing under the condition that a gate-source voltage of the first transistor T1is smaller than the threshold voltage, that is, the first transistor T1is turned off. As a minimum driving current (e.g., a current of about 10 pA or less) is delivered to the light emitting element ED, with the first transistor T1turned off, an image of black luminance is expressed. When the minimum driving current for displaying a black image flows, the influence of a bypass transfer of the bypass current Ibp may be great. On the other hand, when a large driving current for displaying an image such as a normal image or a white image flows, there may be almost no influence of the bypass current Ibp. Accordingly, when a driving current for displaying a black image flows, a light emitting current Ied of the light emitting element ED, which corresponds to a result of subtracting the bypass current Ibp drained through the seventh transistor T7from the driving current Id, may have a minimum current amount to such an extent as to accurately express a black image. Accordingly, a contrast ratio may be improved by implementing an accurate black luminance image by using the seventh transistor T7. In an embodiment, the bypass signal is the scan signal GWj+1 having a low level, but is not necessarily limited thereto. Next, during the emission period EP, the emission control signal EMj supplied from the emission control line EMLj is changed from a high level to a low level. During the emission period EP, the fifth transistor T5and the sixth transistor T6are turned on by the emission control signal EMj having a low level. In this case, the driving current Id is generated depending on a voltage difference between the gate voltage of the gate electrode of the first transistor T1and the first driving voltage ELVDD and is supplied to the light emitting element ED through the sixth transistor T6, and the current Ied flows through the light emitting element ED.
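The effect of the threshold compensation described above can be checked numerically. The following Python sketch applies the standard square-law saturation model to the first transistor T1; the model and the transconductance value are illustrative assumptions. Because the programmed gate voltage already subtracts the threshold voltage, the threshold voltage cancels, and the driving current Id depends only on the difference between the first driving voltage ELVDD and the data voltage.

```python
# Worked sketch of the compensation arithmetic described above, using an
# assumed square-law saturation model for the driving transistor T1 (P-type,
# with its source at ELVDD during emission). Voltage thresholds are treated
# as magnitudes; beta is an illustrative transconductance value.
def programmed_gate_voltage(v_data: float, v_th: float) -> float:
    # Data programming: T1 is diode-connected, so its gate settles at the
    # data voltage reduced by T1's threshold voltage.
    return v_data - v_th

def driving_current(elvdd: float, v_gate: float, v_th: float,
                    beta: float = 1e-4) -> float:
    # Emission: Id depends on the gate-to-ELVDD difference.
    v_ov = elvdd - v_gate - v_th  # overdrive voltage of T1
    return 0.5 * beta * v_ov ** 2 if v_ov > 0 else 0.0

ELVDD, V_DATA = 4.6, 3.0
for v_th in (0.4, 0.6):  # two pixels with different threshold voltages
    vg = programmed_gate_voltage(V_DATA, v_th)
    i_d = driving_current(ELVDD, vg, v_th)
    # Both iterations print the same current: Id = beta/2 * (ELVDD - Vdata)^2,
    # i.e., the threshold voltage has been compensated away.
    print(f"Vth={v_th}: Id={i_d:.3e} A")
```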
When the reset signal RST transitions to a high level at the start of one frame Fs, the reset transistor ST1may be turned on such that a voltage of the first sensing node SN1is capable of being initialized to the reset voltage VRST. A light exposure period of the sensor FXij may correspond to the emission period EP of the pixel PXij. During the emission period EP, the emission control signal EMj is maintained at a low level. The light sensing element OPD is exposed to light during the emission period EP. The light may be light output from the light emitting element ED of the pixel PXij. When a user's hand US_F (seeFIG.1) touches a display surface, the light sensing element OPD may generate photocharges corresponding to light reflected by a valley between ridges of a fingerprint, and the generated photocharges may be accumulated in the first sensing node SN1. The amplification transistor ST2may be a source follower amplifier that generates a source-drain current in proportion to the amount of charges of the first sensing node SN1, which are input to a gate electrode of the amplification transistor ST2. While a scan signal GWa is at an inactive level, that is, a high level, the output transistor ST3remains turned off. When the scan signal GWa transitions to an active level, that is, a low level, the output transistor ST3is turned on. When the output transistor ST3is turned on, the detection signal FSi corresponding to a current flowing through the amplification transistor ST2may be output to the readout line RLi. As such, the display panel DP may include the pixel PXij and the sensor FXij. The sensor FXij may be driven by using the scan signal for driving the pixel PXij. For example, an initialization scan signal GIj and a compensation scan signal GCj supplied to the second transistor T2of the pixel PXij may be supplied to the reset transistor ST1and the output transistor ST3of the sensor FXij. Accordingly, a separate signal wire or circuit required to drive the sensor FXij is unnecessary according to embodiments of the present disclosure, thereby preventing or reducing a decrease in the aperture ratio even though the sensor FXij is disposed on the display panel DP. FIG.8is a block diagram of the readout circuit500shown inFIG.3, according to an embodiment of the present disclosure. Referring toFIG.8, the readout circuit500includes a comparator501, switches SW1, SW2, and SW3, capacitors Cf, C1, and C2, and an analog-to-digital converter502. The comparator501includes a first input terminal connected to the readout line RLi, a second input terminal receiving a reference voltage VREF, and an output terminal connected to a first node N11. The switch SW1is connected between the first input terminal of the comparator501and the first node N11. The switch SW1may be turned on/off in response to an input reset signal IRST. The capacitor Cf is connected between the first input terminal of the comparator501and the first node N11. The switch SW2is connected between the first node N11and the second node N12. The switch SW2may be turned on/off in response to a first switching signal SHR. The capacitor C1is connected between the second node N12and the ground voltage. The switch SW3is connected between the first node N11and a third node N13. The switch SW3may be turned on/off in response to a second switching signal SHS. The capacitor C2is connected between the third node N13and the ground voltage. The analog-to-digital converter502receives a signal from the second node N12and a signal from the third node N13, and outputs the received signal as the biometric sensing signal FSS, which is a digital signal. The biometric sensing signal FSS may be provided to the driving controller100illustrated inFIG.3.
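The per-frame sensing sequence described above (reset of the first sensing node SN1, photocharge accumulation during the light exposure period, then readout through the amplification and output transistors) can be modeled functionally. The following Python sketch is a behavioral illustration only; the capacitance, voltage, and gain values are assumptions introduced here.

```python
# Behavioral sketch of one sensor FXij over a frame, following the sequence
# described above. Numeric parameters are illustrative assumptions.
class SensorFX:
    def __init__(self, v_rst: float = -1.0, c_sn1: float = 10e-15):
        self.v_rst = v_rst    # reset voltage VRST applied via ST1
        self.c_sn1 = c_sn1    # assumed capacitance of sensing node SN1
        self.v_sn1 = 0.0      # voltage of the first sensing node SN1

    def reset(self) -> None:
        # Reset signal RST active: ST1 initializes SN1 to VRST.
        self.v_sn1 = self.v_rst

    def expose(self, photocharge: float) -> None:
        # Photocharges from reflected light accumulate on SN1.
        self.v_sn1 += photocharge / self.c_sn1

    def read(self, gw_a_active: bool, gain: float = 1.0) -> float | None:
        # ST2 acts as a source follower; ST3 passes its output to the
        # readout line only while the scan signal GWa is active.
        return gain * self.v_sn1 if gw_a_active else None

fx = SensorFX()
fx.reset()
fx.expose(photocharge=5e-15)  # e.g., light reflected from a fingerprint
detection_signal = fx.read(gw_a_active=True)
```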
FIG.9is a waveform diagram for describing an operation of the readout circuit500shown inFIG.8, according to an embodiment of the present disclosure. Referring toFIGS.8and9, when the input reset signal IRST transitions to a high level, the switch SW1is turned on. As the switch SW1is turned on, the first input terminal of the comparator501and the first node N11are electrically connected, and the first node N11may be initialized. While the first switching signal SHR is at a high level, and the second switching signal SHS is at a low level, after the input reset signal IRST transitions to a low level, the detection signal FSi transmitted through the readout line RLi may be stored in the second node N12by the capacitor C1. The scan signal GWa is at a high level while the first switching signal SHR is at a high level, and thus, the signal stored in the second node N12may be a reset sampling signal. Subsequently, while the first switching signal SHR is at a low level, and the second switching signal SHS is at a high level, the detection signal FSi transmitted through the readout line RLi may be stored in the third node N13by the capacitor C2. While the scan signal GWa is at a low level, the detection signal FSi transmitted through the readout line RLi may correspond to the amount of light sensed by the light sensing element OPD. Accordingly, a signal stored in the third node N13may be a detection sampling signal. The analog-to-digital converter502converts a difference between the reset sampling signal of the second node N12and the detection sampling signal of the third node N13into a digital signal. The analog-to-digital converter502may output the biometric sensing signal FSS, which is a digital signal. For the readout circuit500to accurately detect the detection signal FSi transmitted through the readout line RLi, a detection time ts from a point in time when the input reset signal IRST transitions from the low level to the high level until the second switching signal SHS transitions from the high level to the low level is sufficiently secured in embodiments of the present disclosure. FIG.10is a timing diagram of scan signals GW1to GWn+1 and the reset signal RST provided to the pixel PXij and the sensor FXij shown inFIG.6, according to an embodiment of the present disclosure. Referring toFIGS.6and10, the reset signal RST is activated to a high level at the start of one frame Fs. The scan signals GW1to GWn+1 sequentially transition to an active level (e.g., a low level). A time interval until the (j+1)-th scan signal GWj+1 transitions to the active level after the j-th scan signal GWj transitions to the active level is one horizontal period (1H). In an embodiment, when the detection time ts described inFIG.9is at least 4 horizontal periods (4H), the readout circuit500may accurately detect the detection signal FSi delivered through the readout line RLi. When the sensor FXij positioned in a j-th row operates in response to the j-th scan signal GWj and a sensor FXij+1 positioned in a (j+1)-th row operates in response to the (j+1)-th scan signal GWj+1, the readout circuit500may not accurately detect the detection signal FSi detected by the sensors FXij and FXij+1. In an embodiment, the sensor FXij operates in response to the a-th scan signal GWa among the scan signals GW1to GWn+1. Herein, a is a positive integer different from j.
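The two-sample scheme described with reference toFIG.9above amounts to a correlated sampling of the readout line: a reset-level sample (via the first switching signal SHR) and a detection sample (via the second switching signal SHS) are differenced and digitized. The following Python sketch illustrates this; the ADC range and resolution are assumptions introduced here.

```python
# Minimal sketch of the correlated sampling performed by the readout
# circuit: the ADC digitizes the difference between the reset sample stored
# on the second node and the detection sample stored on the third node.
def adc(value: float, full_scale: float = 1.0, bits: int = 10) -> int:
    """Quantize a voltage difference into a digital code (assumed ADC)."""
    clipped = max(0.0, min(value, full_scale))
    return round(clipped / full_scale * ((1 << bits) - 1))

def readout(reset_sample: float, detection_sample: float) -> int:
    # Differencing the two samples cancels offsets common to both, which is
    # why the reset-level sample is taken first, while GWa is inactive.
    return adc(reset_sample - detection_sample)

# Example: the digital biometric sensing signal FSS for one readout line.
fss = readout(reset_sample=0.80, detection_sample=0.55)
```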
FIGS.11A to11Care diagrams illustrating a display panel, according to embodiments of the present disclosure. FIG.11Ais a block diagram of the display panel DP, according to an embodiment. Referring toFIG.11A, the scan driving circuit300is arranged on one side of the display area DA in the display panel DP. In an embodiment, the scan driving circuit300may be arranged on a left side of the display area DA. The pixels PX in a j-th row among the pixels PX are connected to the j-th scan line GWLj. For example, the pixels PX in the first row are connected to the first scan line GWL1, the pixels PX in the 51st row are connected to the 51st scan line GWL51, and the pixels PX in the 52nd row are connected to the 52nd scan line GWL52. Sensors FX in the j-th row among the sensors FX are connected to the a-th scan line GWLa. For example, when the detection time ts described inFIG.9is 4 horizontal periods (4H), the sensors FX in the 51st row may be connected to the first scan line GWL1, the sensors FX in the 52nd row may be connected to the fifth scan line GWL5, and the sensors FX in the 60th row may be connected to the 37th scan line GWL37. Although it is illustrated and described that the sensors FX are arranged in the 51st to 60th rows, this is only an example, and embodiments of the present disclosure are not limited thereto. For example, rows in which each of the sensors FX are arranged may be changed in various manners. Further, according to embodiments, the sensors FX arranged in the 51st row are not connected to the first scan line GWL1, but may be connected to another scan line. For example, when the sensors FX arranged in the 51st row are connected to the scan line GWL35, the sensors FX arranged in the 52nd row may be connected to the 39th scan line GWL39. For example, to sufficiently secure the detection time ts, the sensors FX in the j-th line are connected to the a-th scan line GWLa, and the sensors FX in the (j+1)-th line are connected to the b-th scan line GWLb. Here, 'a' is a positive integer different from 'j', and 'b' is a positive integer different from 'a' and 'j+1'. Furthermore, to sufficiently secure the detection time ts, 'b' may be greater than 'a' by 2 or more. FIG.11Bis a block diagram of a display panel DPa, according to an embodiment. Referring toFIG.11B, the pixels PX in a j-th row among the pixels PX are connected to the j-th scan line GWLj. For example, the pixels PX in the first row are connected to the first scan line GWL1, the pixels PX in the 51st row are connected to the 51st scan line GWL51, and the pixels PX in the 52nd row are connected to the 52nd scan line GWL52. Sensors FX in the j-th row among the sensors FX are connected to the a-th scan line GWLa. For example, when the detection time ts described inFIG.9is 2 horizontal periods (2H), the sensors FX in the 51st row may be connected to the first scan line GWL1, the sensors FX in the 52nd row may be connected to the third scan line GWL3, and the sensors FX in the 60th row may be connected to the nineteenth scan line GWL19.
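The examples above follow a simple arithmetic rule: with a detection time of ts horizontal periods, consecutive sensor rows use scan lines spaced ts apart. The following Python sketch reproduces the figures' examples; the starting row and starting scan line are taken from those examples, and, as the text notes, other assignments are possible.

```python
# Sketch of the sensor-row-to-scan-line assignment illustrated above.
# With a detection time of ts horizontal periods, consecutive sensor rows
# use scan lines spaced ts apart. A spacing of ts >= 2 also satisfies the
# stated condition that 'b' exceed 'a' by 2 or more.
def sensor_scan_line(row: int, ts: int, first_row: int = 51,
                     first_line: int = 1) -> str:
    a = first_line + ts * (row - first_row)
    return f"GWL{a}"

# Reproduces the 4H example: rows 51/52/60 use GWL1/GWL5/GWL37.
assert sensor_scan_line(51, ts=4) == "GWL1"
assert sensor_scan_line(52, ts=4) == "GWL5"
assert sensor_scan_line(60, ts=4) == "GWL37"
# Reproduces the 2H example: rows 52/60 use GWL3/GWL19.
assert sensor_scan_line(52, ts=2) == "GWL3"
assert sensor_scan_line(60, ts=2) == "GWL19"
```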
FIG.11Cis a block diagram of a display panel DPb, according to an embodiment. Referring toFIG.11C, the sensors FX in the j-th row among the sensors FX are connected to the a-th scan line GWLa. For example, the sensors FX in the 51st row are connected to the first scan line GWL1, the sensors FX in the 52nd row are connected to the fifth scan line GWL5, and the sensors FX in the 60th row are connected to the 37th scan line GWL37. A connection wire CL1connecting the first scan line GWL1and the sensors FX in the 51st row, a connection wire CL2connecting the fifth scan line GWL5and the sensors FX in the 52nd row, a connection wire CL3connecting the ninth scan line GWL9and the sensors FX in the 53rd row, and a connection wire CL10connecting the 37th scan line GWL37and the sensors FX in the 60th row may be arranged in the display area DA. In an embodiment, as shown inFIGS.11A and11B, the connection wires CL1to CL10may be arranged outside of the display area DA, that is, in the non-display area NDA shown inFIG.3. FIG.12is a block diagram of a display panel DPc, according to an embodiment. In the example shown inFIG.12, a scan driving circuit300ais arranged on one side of the display area DA. In an embodiment, the scan driving circuit300amay be arranged on a right side of the display area DA. Connections between the pixels PX and the sensors FX and the scan lines GWL1to GWLn+1 may be the same as those described with reference toFIGS.11A to11C. A connection wire CL11connecting the first scan line GWL1and the sensors FX in the 51st row, a connection wire CL12connecting the fifth scan line GWL5and the sensors FX in the 52nd row, a connection wire CL13connecting the ninth scan line GWL9and the sensors FX in the 53rd row, and a connection wire CL20connecting the 37th scan line GWL37and the sensors FX in the 60th row may be arranged outside of the display area DA, that is, in the non-display area NDA shown inFIG.3. Alternatively, in an embodiment, the connection wires CL11to CL20may be arranged in the display area DA. FIGS.13A to13Dare diagrams illustrating that pixels and sensors are connected to a scan driving circuit, according to embodiments of the present disclosure. FIG.13Ais a block diagram of the display panel DPd, according to an embodiment. Referring toFIG.13A, a first scan driving circuit300-1and a second scan driving circuit300-2may be arranged on a display panel DPd. The first scan driving circuit300-1and the second scan driving circuit300-2may face each other with the display area DA interposed therebetween. Each of the first scan driving circuit300-1and the second scan driving circuit300-2may be connected to the scan lines GWL1to GWLn+1. That is, the pixels PX may be connected in common to the scan lines GWL1to GWLn+1 extending from the first scan driving circuit300-1and the scan lines GWL1to GWLn+1 extending from the second scan driving circuit300-2. The pixels PX in a j-th row among the pixels PX are connected to the j-th scan line GWLj. For example, the pixels PX in the first row are connected to the first scan line GWL1, the pixels PX in the 51st row are connected to the 51st scan line GWL51, and the pixels PX in the 52nd row are connected to the 52nd scan line GWL52. Sensors FX in the j-th row among the sensors FX are connected to the a-th scan line GWLa. For example, when the detection time ts described inFIG.9is 4 horizontal periods (4H), the sensors FX in the 51st row may be connected to the first scan line GWL1, the sensors FX in the 52nd row may be connected to the fifth scan line GWL5, and the sensors FX in the 60th row may be connected to the 37th scan line GWL37. To sufficiently secure the detection time ts described inFIG.9, according to embodiments, the sensors FX in the j-th line are connected to the a-th scan line GWLa, and the sensors FX in the (j+1)-th line are connected to the b-th scan line GWLb. Here, 'a' is a positive integer different from 'j', and 'b' is a positive integer different from 'a' and 'j+1'.
Furthermore, to sufficiently secure the detection time ts, 'b' may be greater than 'a' by 2 or more. FIG.13Bis a block diagram of a display panel DPe, according to an embodiment. Referring toFIG.13B, the sensors FX in the j-th row among the sensors FX are connected to the a-th scan line GWLa. For example, the sensors FX in the 51st row are connected to the first scan line GWL1, the sensors FX in the 52nd row are connected to the fifth scan line GWL5, and the sensors FX in the 60th row are connected to the 37th scan line GWL37. A connection wire CL21connecting the first scan line GWL1and the sensors FX in the 51st row, a connection wire CL22connecting the fifth scan line GWL5and the sensors FX in the 52nd row, a connection wire CL23connecting the ninth scan line GWL9and the sensors FX in the 53rd row, and a connection wire CL30connecting the 37th scan line GWL37and the sensors FX in the 60th row may be arranged in the display area DA. In an embodiment, as shown inFIG.13A, the connection wires CL21to CL30may be arranged outside the display area DA, that is, in the non-display area NDA shown inFIG.3. FIG.13Cis a block diagram of a display panel DPf, according to an embodiment. Referring toFIG.13C, the first scan driving circuit300-1and the second scan driving circuit300-2may be arranged on the display panel DPf. The first scan driving circuit300-1and the second scan driving circuit300-2may face each other with the display area DA interposed therebetween. Some of the pixels PX are connected to scan lines GWL1to GWLn+1 extending from the first scan driving circuit300-1. Some of the pixels PX are connected to scan lines GWL1to GWLn+1 extending from the second scan driving circuit300-2. Some of the sensors FX are connected to corresponding scan lines among the scan lines GWL1to GWLn+1 extending from the first scan driving circuit300-1. Some of the sensors FX are connected to corresponding scan lines among the scan lines GWL1to GWLn+1 extending from the second scan driving circuit300-2. FIG.13Dis a block diagram of a display panel DPg, according to an embodiment. Referring toFIG.13D, the first scan driving circuit300-1and the second scan driving circuit300-2may be arranged on the display panel DPg. The first scan driving circuit300-1and the second scan driving circuit300-2may face each other with the display area DA interposed therebetween. The first scan driving circuit300-1may drive odd-numbered scan lines GWL1, GWL3, GWL5, . . . , GWLn−1, GWLn+1 among the scan lines GWL1to GWLn+1. The second scan driving circuit300-2may drive even-numbered scan lines GWL2, GWL4, GWL6, . . . , GWLn among the scan lines GWL1to GWLn+1. The pixels PX arranged in the odd-numbered row among the pixels PX are connected to the odd-numbered scan lines GWL1, GWL3, GWL5, . . . , GWLn−1, GWLn+1 extending from the first scan driving circuit300-1. The pixels PX arranged in the even-numbered row among the pixels PX are connected to the even-numbered scan lines GWL2, GWL4, GWL6, . . . , GWLn extending from the second scan driving circuit300-2. Sensors FX in the j-th row among the sensors FX are connected to the a-th scan line GWLa. For example, when the detection time ts described inFIG.9is 4 horizontal periods (4H), the sensors FX in the 51st row may be connected to the first scan line GWL1, the sensors FX in the 52nd row may be connected to the fifth scan line GWL5, and the sensors FX in the 60th row may be connected to the 37th scan line GWL37.
To sufficiently secure the detection time ts described inFIG.9, according to embodiments, the sensors FX in the j-th line are connected to the a-th scan line GWLa, and the sensors FX in the (j+1)-th line are connected to the b-th scan line GWLb. Here, 'a' is a positive integer different from 'j', and 'b' is a positive integer different from 'a' and 'j+1'. Furthermore, to sufficiently secure the detection time ts, 'b' may be greater than 'a' by 2 or more. FIG.14is a cross-sectional view illustrating a pixel of a display panel, according to an embodiment of the present disclosure.FIGS.15A and15Bare cross-sectional views illustrating a light emitting element and a light sensing element of a display panel, according to an embodiment of the present disclosure. Referring toFIGS.14and15A, the display panel DP may include the base layer BL, the circuit layer DP_CL disposed on the base layer BL, the element layer DP_ED, and the encapsulation layer TFE. The base layer BL may include a synthetic resin layer. The synthetic resin layer may include a thermosetting resin. For example, the synthetic resin layer may be a polyimide-based resin layer. However, a material thereof is not particularly limited. The synthetic resin layer may include at least one of, for example, acrylate-based resin, methacrylate-based resin, polyisoprene-based resin, vinyl-based resin, epoxy-based resin, urethane-based resin, cellulose-based resin, siloxane-based resin, polyamide-based resin, and perylene-based resin. Further, the base layer BL may include, for example, a glass substrate, a metal substrate, an organic/inorganic composite substrate, etc. At least one inorganic layer is formed on an upper surface of the base layer BL. The inorganic layer may include at least one of, for example, aluminum oxide, titanium oxide, silicon oxide, silicon oxynitride, zirconium oxide, and hafnium oxide. The inorganic layer may be formed in multiple layers. The multi-layered inorganic layers may constitute a barrier layer BRL and/or a buffer layer BFL, which will be described in further detail below. The barrier layer BRL and the buffer layer BFL may be selectively disposed. The barrier layer BRL may prevent foreign objects from outside of the display device DD from entering the display device DD. The barrier layer BRL may include, for example, a silicon oxide layer and a silicon nitride layer. In an embodiment, a plurality of silicon oxide layers and a plurality of silicon nitride layers are present, and the silicon oxide layers and the silicon nitride layers may be alternately stacked. The buffer layer BFL may be disposed on the barrier layer BRL. The buffer layer BFL may increase a bonding force between the base layer BL and the semiconductor pattern and/or the conductive pattern. The buffer layer BFL may include, for example, a silicon oxide layer and a silicon nitride layer. The silicon oxide layer and the silicon nitride layer may be alternately stacked. The semiconductor pattern is disposed on the buffer layer BFL. Hereinafter, the semiconductor pattern directly disposed on the buffer layer BFL is defined as a first semiconductor pattern. The first semiconductor pattern may include a silicon semiconductor. The first semiconductor pattern may include polysilicon. However, embodiments of the present disclosure are not limited thereto, and the first semiconductor pattern may include, for example, amorphous silicon. FIG.14only illustrates a part of the first semiconductor pattern.
The first semiconductor pattern may be further disposed in another area of the pixel PXij (seeFIG.6). An electrical property of the first semiconductor pattern varies depending on whether the pattern is doped. The first semiconductor pattern may include a doped area and an undoped area. The doped area may be doped with an N-type dopant or a P-type dopant. A P-type transistor includes a doped area doped with the P-type dopant, and an N-type transistor includes a doped area doped with the N-type dopant. The doped area has higher conductivity than the undoped area, and substantially operates as an electrode or signal line. The undoped area substantially corresponds to the active area (or channel) of a transistor. For example, a part of the first semiconductor pattern may be the active area of the transistor. Another part thereof may be a source or drain of the transistor. Another part thereof may be a connection signal line (or a connection electrode). As illustrated inFIG.14, a first electrode S1, a channel part A1, and a second electrode D1of the first transistor T1are formed from the first semiconductor pattern. The first electrode S1and the second electrode D1of the first transistor T1extend in opposite directions from the channel part A1. A portion of a connection signal line CSL formed from the semiconductor pattern is illustrated inFIG.14. In an embodiment, the connection signal line CSL may be electrically connected to the second electrode of the sixth transistor T6(seeFIG.6) in a plane. A first insulating layer10is disposed on the buffer layer BFL. The first insulating layer10overlaps the plurality of pixels PX (seeFIG.3) in common so as to cover the first semiconductor pattern. The first insulating layer10may be an inorganic layer and/or an organic layer, and may have a single-layer structure or a multi-layer structure. The first insulating layer10may include at least one of, for example, an aluminum oxide, a titanium oxide, a silicon oxide, a silicon oxynitride, a zirconium oxide, and a hafnium oxide. In an embodiment, the first insulating layer10may be a silicon oxide layer having a single layer structure. An insulating layer of the circuit layer DP_CL, which is to be described in further detail below, as well as the first insulating layer10, may be an inorganic layer and/or an organic layer, and may have a single-layer structure or a multi-layer structure. The inorganic layer may include at least one of the above-described materials. A gate electrode G1of the first transistor T1is disposed on the first insulating layer10. The gate electrode G1may be a part of a metal pattern. The gate electrode G1of the first transistor T1overlaps the channel part A1of the first transistor T1. In a process of doping the first semiconductor pattern, the gate electrode G1of the first transistor T1may serve as a mask. A second insulating layer20covering the gate electrode G1is disposed on the first insulating layer10. The second insulating layer20overlaps the plurality of pixels PX in common. The second insulating layer20may be an inorganic layer and/or an organic layer, and may have a single layer structure or a multi-layer structure. In an embodiment, the second insulating layer20may be a silicon oxide layer having a single layer structure. An upper electrode UE may be disposed on the second insulating layer20. The upper electrode UE may overlap the gate electrode G1. The upper electrode UE may be a part of a metal pattern or a part of a doped semiconductor pattern. 
A portion of the gate electrode G1and the portion of the upper electrode UE overlapping it may define the capacitor Cst (seeFIG.6). In an embodiment of the present disclosure, the upper electrode UE may be omitted. In an embodiment of the present disclosure, the second insulating layer20may be replaced with an insulating pattern. The upper electrode UE is arranged on the insulating pattern. The upper electrode UE may serve as a mask for forming an insulating pattern from the second insulating layer20. A third insulating layer30covering the upper electrode UE is disposed on the second insulating layer20. In an embodiment, the third insulating layer30may be a silicon oxide layer having a single layer structure. A semiconductor pattern is arranged on the third insulating layer30. Hereinafter, the semiconductor pattern directly disposed on the third insulating layer30is referred to as a second semiconductor pattern. The second semiconductor pattern may include a metal oxide semiconductor. The oxide semiconductor may be a crystalline or amorphous oxide semiconductor. For example, the oxide semiconductor may include a metal such as zinc (Zn), indium (In), gallium (Ga), tin (Sn), or titanium (Ti), and a mixture of oxides thereof. The oxide semiconductor may include, for example, indium-tin oxide (ITO), indium-gallium-zinc oxide (IGZO), zinc oxide (ZnO), indium-zinc oxide (IZO), zinc-indium oxide (ZIO), indium oxide (InO), titanium oxide (TiO), indium-zinc-tin oxide (IZTO), zinc-tin oxide (ZTO), etc. FIG.14only illustrates a part of the second semiconductor pattern. The second semiconductor pattern may be further disposed in another area of the pixel PXij (seeFIG.6). The second semiconductor pattern may include a plurality of areas identified depending on whether the metal oxide is reduced. An area in which the metal oxide is reduced (hereinafter, a reduction area) has higher conductivity than an area in which the metal oxide is not reduced (hereinafter, a non-reduction area). The reduction area substantially has the role of an electrode or signal line. The non-reduction area substantially corresponds to a channel part of a transistor. For example, a part of the second semiconductor pattern may be a channel part of a transistor, and another part may be a first electrode or a second electrode of the transistor. As illustrated inFIG.14, a first electrode S3, a channel part A3, and a second electrode D3of the third transistor T3are formed from the second semiconductor pattern. The first electrode S3and the second electrode D3include a metal reduced from a metal oxide semiconductor. The first electrode S3and the second electrode D3may have a predetermined thickness from an upper surface of the second semiconductor pattern, and may include a metal layer including the reduced metal. A fourth insulating layer40covering the second semiconductor pattern is disposed on the third insulating layer30. In an embodiment, the fourth insulating layer40may be a silicon oxide layer having a single layer structure. A gate electrode G3of the third transistor T3is disposed on the fourth insulating layer40. The gate electrode G3may be a part of a metal pattern. The gate electrode G3of the third transistor T3overlaps the channel part A3of the third transistor T3. In an embodiment of the present disclosure, the fourth insulating layer40may be replaced with an insulating pattern. The gate electrode G3of the third transistor T3is disposed on the insulating pattern.
In an embodiment, the gate electrode G3may have the same shape as the insulating pattern in a plane. Although, for convenience of description, only one gate electrode G3is illustrated, embodiments are not limited thereto. For example, according to embodiments, the third transistor T3may include two gate electrodes. A fifth insulating layer50covering the gate electrode G3is disposed on the fourth insulating layer40. In an embodiment, the fifth insulating layer50may include a silicon oxide layer and a silicon nitride layer. The fifth insulating layer50may include a plurality of silicon oxide layers and a plurality of silicon nitride layers, which are alternately stacked. In an embodiment, the first electrode and the second electrode of the fourth transistor T4(seeFIG.6) may be formed through the same process as the first electrode S3and the second electrode D3of the third transistor T3. Moreover, the first and second electrodes of the reset transistor ST1of the sensor FXij shown inFIG.6may be formed through the same process as the first electrode S3and the second electrode D3of the third transistor T3. At least one insulating layer is further disposed on the fifth insulating layer50. In an embodiment, a sixth insulating layer60and a seventh insulating layer70may be disposed on the fifth insulating layer50. The sixth insulating layer60and the seventh insulating layer70may be organic layers, and may have a single-layer or multi-layer structure. Each of the sixth insulating layer60and the seventh insulating layer70may be a polyimide-based resin layer having a single layer structure. However, embodiments of the present disclosure are not limited thereto. For example, the sixth insulating layer60and the seventh insulating layer70may include at least one of acrylate-based resin, methacrylate-based resin, polyisoprene-based resin, vinyl-based resin, epoxy-based resin, urethane-based resin, cellulose-based resin, siloxane-based resin, polyamide-based resin, and perylene-based resin. A first connection electrode CNE10may be disposed on the fifth insulating layer50. The first connection electrode CNE10may be connected to the connection signal line CSL through a first contact hole CH1penetrating the first to fifth insulating layers10to50. A second connection electrode CNE20may be disposed on the sixth insulating layer60, and may be connected to the first connection electrode CNE10through a contact hole CH-60penetrating the sixth insulating layer60. In an embodiment of the present disclosure, at least one of the fifth insulating layer50and the sixth insulating layer60may be omitted. The element layer DP_ED includes the light emitting element ED and a pixel defining layer PDL. An anode AE of the light emitting element ED is disposed on the seventh insulating layer70. The anode AE of the light emitting element ED may be connected to the second connection electrode CNE20through a contact hole CH-70penetrating the seventh insulating layer70. An opening OP of the pixel defining layer PDL exposes at least part of the anode AE of the light emitting element ED. The opening OP of the pixel defining layer PDL may define an emission area PXA. For example, the plurality of pixels PX (seeFIG.3) may be arranged in a plane of the display panel DP (seeFIG.3) according to a specific rule. An area in which the plurality of pixels PX are arranged may be defined as a pixel area. One pixel area may include the emission area PXA and a non-emission area NPXA adjacent to the emission area PXA. The non-emission area NPXA may surround the emission area PXA.
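Before the common layers of the element layer are described, the vertical order of the cross-section just walked through (FIG. 14) can be restated as plain data. The following Python sketch is purely our own reading aid; the tuple layout and the comments are ours, not part of the specification.

```python
# Vertical order of the stack described above, from the buffer layer upward.
# Purely illustrative bookkeeping; reference numerals follow the text.
CIRCUIT_AND_ELEMENT_STACK = [
    ("BFL",   "buffer layer"),
    ("SP1",   "first semiconductor pattern (S1/A1/D1 of T1, CSL)"),
    ("10",    "first insulating layer (e.g., silicon oxide)"),
    ("G1",    "gate electrode of T1 (also a doping mask)"),
    ("20",    "second insulating layer"),
    ("UE",    "upper electrode (one plate of Cst; may be omitted)"),
    ("30",    "third insulating layer"),
    ("SP2",   "second semiconductor pattern (oxide; S3/A3/D3 of T3)"),
    ("40",    "fourth insulating layer"),
    ("G3",    "gate electrode of T3"),
    ("50",    "fifth insulating layer (oxide/nitride stack)"),
    ("CNE10", "first connection electrode (CH1 reaches CSL through 10-50)"),
    ("60",    "sixth insulating layer (organic)"),
    ("CNE20", "second connection electrode (CH-60 reaches CNE10)"),
    ("70",    "seventh insulating layer (organic)"),
    ("AE",    "anode of ED (CH-70 reaches CNE20); PDL opening OP above"),
]

for numeral, role in CIRCUIT_AND_ELEMENT_STACK:
    print(f"{numeral:>6}: {role}")
```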
A hole control layer HCL may be disposed in common in the emission area PXA and the non-emission area NPXA. A common layer such as the hole control layer HCL may be formed in common in the plurality of pixels PX. The hole control layer HCL may include a hole transport layer and a hole injection layer. A light emitting layer EML is disposed on the hole control layer HCL. The light emitting layer EML may be disposed in only an area corresponding to the opening OP. The light emitting layer EML may be separately formed in each of the plurality of pixels PX. In an embodiment, the patterned light emitting layer EML is illustrated. However, the light emitting layer EML may be disposed in the plurality of pixels PX in common. The light emitting layer EML may generate white light or blue light. The light emitting layer EML may have a multi-layer structure. An electron control layer ECL is disposed on the light emitting layer EML. The electron control layer ECL may include an electron transport layer and an electron injection layer. A cathode CE of the light emitting element ED is disposed on the electron control layer ECL. The electron control layer ECL and the cathode CE are disposed in common in the plurality of pixels PX. The encapsulation layer TFE is disposed on the cathode CE. The encapsulation layer TFE may cover the plurality of pixels PX. In an embodiment, the encapsulation layer TFE directly covers the cathode CE. In an embodiment of the present disclosure, the display panel DP may further include a capping layer directly covering the cathode CE. In an embodiment of the present disclosure, the stacked structure of the light emitting element ED may have a vertically inverted structure in the structure shown inFIG.12. Referring toFIGS.15A and15B, a first electrode layer is disposed on the circuit layer DP_CL. The pixel defining layer PDL is formed on the first electrode layer. The first electrode layer may include first to third anodes AE1, AE2, AE3. First to third openings OP1, OP2, OP3of the pixel defining layer PDL expose at least part of the first to third anodes AE1, AE2, AE3, respectively. In an embodiment of the present disclosure, the pixel defining layer PDL may further include a black material. The pixel defining layer PDL may further include a black organic dye/pigment such as, for example, carbon black, aniline black, etc. The pixel defining layer PDL may be formed by mixing a blue organic material and a black organic material. The pixel defining layer PDL may further include a liquid-repellent organic material. As shown inFIG.15A, the display panel DP may include first to third emission areas PXA-R, PXA-G, PXA-B and first to third non-emission areas NPXA-R, NPXA-G, NPXA-B that are adjacent to the first to third emission areas PXA-R, PXA-G, PXA-B. The non-emission area NPXA-R, NPXA-G, NPXA-B may surround the corresponding emission area PXA-R, PXA-G, PXA-B, respectively. In an embodiment, the first emission area PXA-R is defined to correspond to a partial area of the first anode AE1exposed by the first opening OP1, the second emission area PXA-G is defined to correspond to a partial area of the second anode AE2exposed by the second opening OP2, and the third emission area PXA-B is defined to correspond to a partial area of the third anode AE3exposed by the third opening OP3. A non-pixel area NPA may be defined between the first to third non-emission areas NPXA-R, NPXA-G, NPXA-B. A light emitting layer may be disposed on a first electrode layer. 
The light emitting layer may include first to third light emitting layers EML1to EML3. The first to third light emitting layers EML1to EML3may be disposed in areas corresponding to the first to third openings OP1, OP2, OP3, respectively. The first to third light emitting layers EML1to EML3may be separately formed in first to third pixels PXR, PXG, and PXB (seeFIGS.5A to5C). Each of the first to third light emitting layers EML1to EML3may include an organic material and/or an inorganic material. The first to third light emitting layers EML1to EML3may generate light of a predetermined color. For example, the first light emitting layer EML1may generate red light, the second light emitting layer EML2may generate green light, and the third light emitting layer EML3may generate blue light. In an embodiment, the patterned first to third light emitting layers EML1to EML3are illustrated. However, embodiments are not limited thereto. For example, in an embodiment, one light emitting layer may be disposed in the first to third emission areas PXA-R, PXA-G, and PXA-B in common. The light emitting layer may generate white light or blue light. The light emitting layer may have a multi-layered structure that is referred to as “tandem”. Each of the first to third light emitting layers EML1to EML3may include a low molecular weight organic material or a high molecular weight organic material as a light emitting material. Alternatively, each of the first to third light emitting layers EML1to EML3may include a quantum dot material as a light emitting material. The core of a quantum dot may be selected from, for example, a group II-VI compound, a group III-V compound, a group IV-VI compound, a group IV compound, and a combination thereof. A second electrode layer is disposed on the light emitting layer. The second electrode layer may include first to third cathodes CE1, CE2, and CE3. The first to third cathodes CE1, CE2, and CE3may be electrically connected to one another. In an embodiment of the present disclosure, the first to third cathodes CE1, CE2, and CE3may be integrated with each other. In this case, the first to third cathodes CE1, CE2, CE3may be disposed in the first to third emission areas PXA-R, PXA-G, PXA-B, the first to third non-emission areas NPXA-R, NPXA-G, NPXA-B, and the non-pixel area NPA in common. The element layer DP_ED may further include the sensors OPD. Each of the sensors OPD may be a photodiode. The pixel defining layer PDL may further include a fourth opening OP4provided to correspond to the sensors OPD. Each of the sensors OPD may include a fourth anode AE4, a photoelectric conversion layer ORL, and a fourth cathode CE4. The fourth anode AE4may be disposed on the same layer as the first electrode layer. That is, the fourth anode AE4may be disposed on the circuit layer DP_CL, and may be simultaneously formed through the same process as the first to third anodes AE1to AE3. The fourth opening OP4of the pixel defining layer PDL exposes at least part of the fourth anode AE4. The photoelectric conversion layer ORL is disposed on the fourth anode AE4exposed by the fourth opening OP4. The photoelectric conversion layer ORL may include an organic photo-sensing material. The fourth cathode CE4may be disposed on the photoelectric conversion layer ORL. The fourth cathode CE4may be simultaneously formed through the same process as the first to third cathodes CE1to CE3.
In an embodiment of the present disclosure, the fourth cathode CE4may be integrated with the first to third cathodes CE1to CE3. Each of the fourth anode AE4and the fourth cathode CE4may receive an electrical signal. The fourth cathode CE4may receive a signal different from that of the fourth anode AE4. Accordingly, a predetermined electric field may be formed between the fourth anode AE4and the fourth cathode CE4. The photoelectric conversion layer ORL generates an electrical signal corresponding to the light incident on a sensor. The photoelectric conversion layer ORL may generate an electric charge by absorbing the energy of the incident light. For example, the photoelectric conversion layer ORL may include a light-sensitive semiconductor material. The electric charge generated in the photoelectric conversion layer ORL changes the electric field between the fourth anode AE4and the fourth cathode CE4. The amount of charge generated in the photoelectric conversion layer ORL may vary depending on whether light is incident on the sensors OPD, or the amount and intensity of light incident on the sensors OPD. Accordingly, the electric field formed between the fourth anode AE4and the fourth cathode CE4may vary. The sensors OPD according to an embodiment of the present disclosure may obtain fingerprint information of a user through a change in the electric field between the fourth anode AE4and the fourth cathode CE4. However, embodiments of the present disclosure are not limited thereto. For example, according to embodiments, each of the sensors OPD may include a phototransistor that uses the photoelectric conversion layer ORL as an active layer. In this case, each of the sensors OPD may obtain fingerprint information by sensing the amount of current flowing through the phototransistor. Each of the sensors OPD according to an embodiment of the present disclosure may include various photoelectric conversion elements capable of generating an electrical signal in response to a change in the amount of light. However, the sensors OPD are not limited thereto. The encapsulation layer TFE is disposed on the element layer DP_ED. The encapsulation layer TFE includes at least one inorganic layer or at least one organic layer. In an embodiment of the present disclosure, the encapsulation layer TFE may include two inorganic layers and an organic layer disposed therebetween. In an embodiment of the present disclosure, a thin-film encapsulation layer may include a plurality of inorganic layers and a plurality of organic layers, which are alternately stacked. An encapsulation inorganic layer may protect the light emitting element ED from, for example, moisture or oxygen. An encapsulation organic layer may protect the light emitting element ED from foreign objects such as, for example, dust particles. The encapsulation inorganic layer may include, for example, a silicon nitride layer, a silicon oxynitride layer, a silicon oxide layer, a titanium oxide layer, an aluminum oxide layer, etc., but is not limited thereto. The encapsulation organic layer may include an acryl-based organic layer, but is not limited thereto. The display device DD includes the input sensing layer ISL disposed on the display panel DP and the color filter layer CFL disposed on the input sensing layer ISL. The input sensing layer ISL may be disposed directly on the encapsulation layer TFE. The input sensing layer ISL includes a first conductive layer ICL1, an insulating layer IL, a second conductive layer ICL2, and a protective layer PL. 
The first conductive layer ICL1may be disposed on the encapsulation layer TFE.FIGS.15A and15Billustrate a structure in which the first conductive layer ICL1is directly disposed on the encapsulation layer TFE, but the present disclosure is not limited thereto. The input sensing layer ISL may further include a base insulating layer interposed between the first conductive layer ICL1and the encapsulation layer TFE. In this case, the encapsulation layer TFE may be covered by the base insulating layer, and the first conductive layer ICL1may be disposed on the base insulating layer. In an embodiment of the present disclosure, the base insulating layer may include an inorganic insulating material. The insulating layer IL may cover the first conductive layer ICL1. The second conductive layer ICL2is disposed on the insulating layer IL. Although a structure in which the input sensing layer ISL includes the first and second conductive layers ICL1and ICL2is illustrated, the present disclosure is not limited thereto. For example, according to embodiments, the input sensing layer ISL may include only one of the first and second conductive layers ICL1and ICL2. The protective layer PL may be disposed on the second conductive layer ICL2. The protective layer PL may include an organic insulating material. The protective layer PL may protect the first and second conductive layers ICL1and ICL2from moisture/oxygen, and may protect the first and second conductive layers ICL1and ICL2from foreign objects. The color filter layer CFL may be disposed on the input sensing layer ISL. The color filter layer CFL may be disposed directly on the protective layer PL. The color filter layer CFL may include a first color filter CF_R, a second color filter CF_G, and a third color filter CF_B. The first color filter CF_R has a first color, the second color filter CF_G has a second color, and the third color filter CF_B has a third color. In an embodiment of the present disclosure, the first color may be red, the second color may be green, and the third color may be blue. The color filter layer CFL may further include a dummy color filter DCF. In an embodiment of the present disclosure, when an area where the photoelectric conversion layer ORL is disposed is defined as a sensing area SA, and a periphery of the sensing area SA is defined as a non-sensing area NSA, the dummy color filter DCF may be disposed to correspond to the sensing area SA. The dummy color filter DCF may overlap the sensing area SA and the non-sensing area NSA. In an embodiment of the present disclosure, the dummy color filter DCF may have the same color as one of the first to third color filters CF_R, CF_G, and CF_B. In an embodiment of the present disclosure, the dummy color filter DCF may have the same green color as the second color filter CF_G. The color filter layer CFL may further include a black matrix BM. The black matrix BM may be disposed to correspond to the non-pixel area NPA. The black matrix BM may be disposed to overlap the first and second conductive layers ICL1and ICL2in the non-pixel area NPA. In an embodiment of the present disclosure, the black matrix BM may overlap the non-pixel area NPA and the first to third non-emission areas NPXA-R, NPXA-G, NPXA-B. The black matrix BM may not overlap the first to third emission areas PXA-R, PXA-G, PXA-B. The color filter layer CFL may further include an overcoat layer OCL. The overcoat layer OCL may include an organic insulating material.
The overcoat layer OCL may be provided with a thickness sufficient to remove a level difference between the first to third color filters CF_R, CF_G, CF_B. The material of the overcoat layer OCL is not particularly limited, as long as the material is capable of having a predetermined thickness and planarizing an upper surface of the color filter layer CFL. For example, the overcoat layer OCL may include an acrylate-based organic material. Referring toFIG.15B, when the display device DD (seeFIG.1) operates, each of first to third light emitting elements ED_R, ED_G, and ED_B may output or emit light. The first light emitting elements ED_R emit first light Lr1, the second light emitting elements ED_G emit second light Lg1, and the third light emitting elements ED_B emit third light. Herein, the first light Lr1may be light in a red wavelength band, the second light Lg1may be light in a green wavelength band, and the third light may be light in a blue wavelength band. In an embodiment of the present disclosure, each of the sensors OPD may receive light from specific light emitting elements (e.g., second light emitting elements ED_G) among first to third light emitting elements ED_R, ED_G, and ED_B. That is, each of the sensors OPD may receive second reflected light Lg2, which is reflected by a user's fingerprint from the second light Lg1output from the second light emitting elements ED_G. The second light Lg1and the second reflected light Lg2may be light in a green wavelength band. The dummy color filter DCF is disposed over the sensors OPD. The dummy color filter DCF may have a green color. Accordingly, the second reflected light Lg2may pass through the dummy color filter DCF and may be incident on the sensors OPD. Meanwhile, first and third lights output from the first and third light emitting elements ED_R and ED_B may also be reflected by the user's hand US_F. For example, when the light produced when the first light Lr1, which is output from the first light emitting elements ED_R, is reflected from the user's hand US_F is defined as first reflected light Lr2, the first reflected light Lr2may be absorbed by the dummy color filter DCF rather than passing through it. Accordingly, the first reflected light Lr2may not pass through the dummy color filter DCF, and thus may not be incident on the sensors OPD. Likewise, even though the third light is reflected by the user's hand US_F, the third light may be absorbed by the dummy color filter DCF. Accordingly, only the second reflected light Lg2may be provided to the sensors OPD. A short sketch following this description illustrates this wavelength selection. A display device having a configuration according to embodiments of the present disclosure may detect biometric information of a user by including a sensor formed through the same process as a pixel. According to embodiments, since the sensor is driven using a scan signal for driving the pixel, a separate signal wire that drives the sensor may be omitted. According to embodiments, the reliability of the detected biometric information may be increased by securing a sufficient amount of time to detect a signal received from the sensor. While the present disclosure has been described with reference to embodiments thereof, it will be apparent to those of ordinary skill in the art that various changes and modifications may be made thereto without departing from the spirit and scope of the present disclosure as set forth in the following claims. | 98,039
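To tie the sensing path of FIGS. 15A and 15B together: only reflected light inside the pass band of the dummy color filter DCF reaches the sensors OPD. Below is a minimal Python sketch of that wavelength selection; the band edges are rough illustrative values of our choosing, not numbers from the specification.

```python
# Sketch: the dummy color filter DCF passes its own color band (green, like
# CF_G) and absorbs the rest, so of the reflected lights only Lg2 reaches
# the sensors OPD. Band edges below are illustrative only.
BANDS_NM = {"blue": (450.0, 495.0), "green": (495.0, 570.0), "red": (620.0, 750.0)}

def reaches_sensor(wavelength_nm: float, filter_color: str = "green") -> bool:
    """True if reflected light at this wavelength passes the dummy filter."""
    lo, hi = BANDS_NM[filter_color]
    return lo <= wavelength_nm <= hi

print(reaches_sensor(530.0))  # second reflected light Lg2 (green) -> True
print(reaches_sensor(650.0))  # first reflected light Lr2 (red)   -> False
```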
11862101 | DETAILED DESCRIPTION It will be understood that when an element is referred to as being “connected to” another element, it can be directly connected to the other element or intervening elements may be present therebetween. In contrast, when an element is referred to as being “directly connected to” another element, there are no intervening elements present. It will be understood that, although the terms “first,” “second,” “third” etc. may be used herein to describe various elements, components, regions, layers and/or sections, these elements, components, regions, layers and/or sections should not be limited by these terms. These terms are only used to distinguish one element, component, region, layer or section from another element, component, region, layer or section. Thus, “a first element,” “component,” “region,” “layer” or “section” discussed below could be termed a second element, component, region, layer or section without departing from the teachings herein. The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting. As used herein, “a”, “an,” “the,” and “at least one” do not denote a limitation of quantity, and are intended to include both the singular and plural, unless the context clearly indicates otherwise. For example, “an element” has the same meaning as “at least one element,” unless the context clearly indicates otherwise. “At least one” is not to be construed as limiting “a” or “an.” “Or” means “and/or.” As used herein, the term “and/or” includes any and all combinations of one or more of the associated listed items. It will be further understood that the terms “comprises” and/or “comprising,” or “includes” and/or “including” when used in this specification, specify the presence of stated features, regions, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, regions, integers, steps, operations, elements, components, and/or groups thereof. “About”, “approximately” or “substantially equal” as used herein is inclusive of the stated value and means within an acceptable range of deviation for the particular value as determined by one of ordinary skill in the art, considering the measurement in question and the error associated with measurement of the particular quantity (i.e., the limitations of the measurement system). For example, “about” can mean within one or more standard deviations, or within ±30%, 20%, 10% or 5% of the stated value. Hereinafter, preferred embodiments of the present invention will be described in more detail with reference to the accompanying drawings. The same reference numerals are used to refer to the same elements in the drawings, and redundant descriptions thereof are omitted. FIG.1is a block diagram illustrating a display device according to embodiments of the present invention. Referring toFIG.1, a display device1000may include a display unit100, a scan driver200, an emission driver300, a data driver400, and a timing controller500. The display device1000may display an image at various frame frequencies (refresh rates, driving frequencies, or screen refresh rates) according to driving conditions. The frame frequency is a frequency at which a data voltage is substantially written to a driving transistor (e.g., first transistor M1inFIG.4) of a pixel PX for 1 second. 
For example, the frame frequency is also referred to as a screen scan rate or a screen refresh rate, and represents a frequency at which a display screen is reproduced for 1 second. In an embodiment, the output frequency of the data driver400and/or a fourth scan signal supplied to a fourth scan line S4ifor supplying a data signal may be changed corresponding to the frame frequency. For example, a frame frequency for driving a moving image may be a frequency of about 60 Hertz (Hz) or more (for example, 60 Hz, 120 Hz, 240 Hz, 360 Hz, 480 Hz, and the like). When the frame frequency is 60 Hz, the fourth scan signal may be supplied to each horizontal line (pixel row) 60 times per second. In an embodiment, the display device1000may control the output frequencies of the scan driver200and the emission driver300and the corresponding output frequency of the data driver400according to driving conditions. For example, the display device1000may display images corresponding to various frame frequencies of 1 Hz to 240 Hz. However, this is an example, and the display device1000may display images even at a frame frequency of 240 Hz or more (for example, 300 Hz or 480 Hz) in another embodiment. The display unit100may include scan lines S11to S1n, S21to S2n, S31to S3n, S41to S4n, and S51to S5n, emission control lines E11to E1nand E21to E2n, and data lines D1to Dm, and may include pixels PXs connected thereto (m and n are integers greater than 1). Each of the pixels PX may include a driving transistor (e.g., first transistor M1) and a plurality of switching transistors. The timing controller500may receive input image data IRGB and control signals from a host system such as an application processor (“AP”) through a predetermined interface. The timing controller500may control driving timings of the scan driver200, the emission driver300, and the data driver400. The timing controller500may generate a first control signal SCS, a second control signal ECS, and a third control signal DCS based on the input image data IRGB, the control signals, and a clock signal. The first control signal SCS may be supplied to the scan driver200, the second control signal ECS may be supplied to the emission driver300, and the third control signal DCS may be supplied to the data driver400. The timing controller500may rearrange the input image data IRGB and supply the rearranged input image data (i.e., digital image data RGB) to the data driver400. The scan driver200may receive the first control signal SCS from the timing controller500, and may supply a first scan signal, a second scan signal, a third scan signal, a fourth scan signal, and a fifth scan signal to first scan lines S11to S1n, second scan lines S21to S2n, third scan lines S31to S3n, fourth scan lines S41to S4n, and fifth scan lines S51to S5nbased on the first control signal SCS, respectively. The first to fifth scan signals may be set to a gate-on level corresponding to the types of transistors to which the scan signals are supplied. The transistor receiving the scan signal may be set to a turn-on state when the scan signal is supplied. For example, a gate-on level of a scan signal supplied to a P-channel metal oxide semiconductor (“PMOS”) transistor may be a logic low level, and a gate-on level of a scan signal supplied to an N-channel metal oxide semiconductor (“NMOS”) transistor may be a logic high level. 
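As a reading aid for the polarity convention just stated, the rule can be written out as a tiny lookup. This Python restatement is our own, not anything from the specification:

```python
# Gate polarity convention from the text: a PMOS transistor turns on at a
# logic low gate level, an NMOS transistor at a logic high gate level.
GATE_ON_LEVEL = {"PMOS": "low", "NMOS": "high"}
GATE_OFF_LEVEL = {"PMOS": "high", "NMOS": "low"}

def is_turned_on(transistor_type: str, applied_level: str) -> bool:
    """True if a scan signal at `applied_level` turns the transistor on."""
    return GATE_ON_LEVEL[transistor_type] == applied_level

assert is_turned_on("PMOS", "low")
assert is_turned_on("NMOS", "high")
assert GATE_OFF_LEVEL["NMOS"] == "low"
```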
Hereinafter, the phrase “a scan signal is supplied” may be understood to mean that a scan signal is supplied at a logic level that turns on a transistor controlled thereby. In an embodiment, the scan driver200may supply some of the first to fifth scan signals a plurality of times in a non-emission period. Therefore, the bias state of the driving transistor included in the pixel PX may be controlled. The emission driver300may supply a first emission control signal and a second emission control signal to the first emission control lines E11to E1nand the second emission control lines E21to E2nbased on the second control signal ECS, respectively. The first and second emission control signals may be set to a gate-off voltage (for example, a high voltage). The transistor receiving the first emission control signal or the second emission control signal of the gate-off voltage may be turned off when the emission control signal is supplied, and may be set to a turned-on state in other cases. Hereinafter, the phrase “an emission control signal is supplied” may be understood to mean that an emission control signal is supplied at a logic level (for example, a high level) that turns off a transistor controlled thereby. AlthoughFIG.1illustrates that each of the scan driver200and the emission driver300has a single configuration for convenience of explanation, the present invention is not limited thereto. According to a design in another embodiment, the scan driver200may include a plurality of scan drivers that supply at least one of the first to fifth scan signals, respectively. In addition, at least a part of the scan driver200and the emission driver300may be integrated into a single driving circuit, module, or the like. The data driver400may receive the third control signal DCS and the image data RGB from the timing controller500. The data driver400may convert digital image data RGB into an analog data signal (data voltage). The data driver400may supply a data signal to the data lines D1to Dm in response to the third control signal DCS. In this case, the data signal supplied to the data lines D1to Dm may be supplied in synchronization with the fourth scan signal supplied to the fourth scan lines S41to S4n. In an embodiment, the display device1000may further include a power supply. The power supply may supply, to the display unit100, a first power supply voltage VDD, a second power supply voltage VSS, a third power supply voltage Vint1(for example, a first initialization voltage), a fourth power supply voltage Vint2(for example, a second initialization voltage), and a fifth power supply voltage Vbias (for example, a bias voltage) for driving the pixels PX. On the other hand, the display device1000may operate at various frame frequencies. In the case of low-frequency driving, image defects such as flicker may be recognized due to current leakage inside the pixel. In addition, an afterimage such as image drag may be recognized according to a change in the bias state of the driving transistor due to driving at various frame frequencies or a change in a response time due to a threshold voltage shift due to a change in hysteresis characteristics. In order to improve image quality, one frame period of the pixel PX may include non-emission periods and emission periods according to the frame frequency. 
For example, the first non-emission period and emission period of one frame may be defined as a first driving period, and a subsequent non-emission period and emission period may be defined as a second driving period. For example, a data signal for displaying an image may be substantially written to the pixel PX in the first driving period, and an on-bias state (a state capable of being turned on) may be applied to the driving transistor of the pixel PX in the second driving period. On the other hand, in the case of high-speed driving at a frame frequency of 120 Hz or more, a threshold voltage compensation time of the driving transistor has to be sufficiently secured in order to meet the minimum criterion of image quality. The pixel PX and the display device1000according to embodiments of the present invention may display high-quality images at various frame frequencies while securing a sufficient threshold voltage compensation time. FIG.2is a diagram illustrating an example of the scan driver and the emission driver included in the display device ofFIG.1. Referring toFIGS.1and2, the scan driver200may include a first scan driver210, a second scan driver220, a third scan driver230, a fourth scan driver240, and a fifth scan driver250. In an embodiment, each of the first to fifth scan drivers210,220,230,240, and250may include stage circuits connected separately and dependently. The first control signal SCS may include first to fifth scan start signals FLM1to FLM5. The first to fifth scan start signals FLM1to FLM5may be supplied to the first to fifth scan drivers210,220,230,240, and250, respectively. The pulse widths and supply timings of the first to fifth scan start signals FLM1to FLM5may be determined according to the frame frequency and the driving condition of the pixel PX. The first to fifth scan signals may be output based on the first to fifth scan start signals FLM1to FLM5, respectively. For example, a signal width of at least one of the first to fifth scan signals may be different from a signal width of the others thereof. In addition, at least one of the first to fifth scan signals may be output a plurality of times during the non-emission period. Furthermore, the gate-on levels of the first to fifth scan signals may be determined according to the type of the corresponding transistor. For example, gate-on levels of the second scan signal and the third scan signal may be different from a gate-on level of the fourth scan signal. The first scan driver210may supply the first scan signal to the first scan lines S11to S1nin response to the first scan start signal FLM1. The second scan driver220may supply the second scan signal to the second scan lines S21to S2nin response to the second scan start signal FLM2. The third scan driver230may supply the third scan signal to the third scan lines S31to S3nin response to the third scan start signal FLM3. The fourth scan driver240may supply the fourth scan signal to the fourth scan lines S41to S4nin response to the fourth scan start signal FLM4. The fifth scan driver250may supply the fifth scan signal to the fifth scan lines S51to S5nin response to the fifth scan start signal FLM5. In an embodiment, the emission driver300may include a first emission driver310and a second emission driver320. The second control signal ECS may include first and second emission control start signals EFLM1and EFLM2. The first and second emission control start signals EFLM1and EFLM2may be supplied to the first and second emission drivers310and320, respectively.
In an embodiment, each of the first and second emission drivers310and320may include stage circuits connected separately and dependently. In addition, the pulse width and supply timing of the first emission control signal may be different from the pulse width and supply timing of the second emission control signal. The first emission driver310may supply the first emission control signal to the first emission control lines E11to E1nin response to the first emission control start signal EFLM1. The second emission driver320may supply the second emission control signal to the second emission control lines E21to E2nin response to the second emission control start signal EFLM2. FIG.3is a diagram illustrating another example of the scan driver and the emission driver included in the display device ofFIG.1. Since the display device ofFIG.3is substantially the same as or similar to the contents described with reference toFIG.2, except for a scan driver201, the same reference numerals are used to refer to the same or corresponding components and redundant descriptions thereof are omitted. Referring toFIGS.1and3, the scan driver201may include a first scan driver211, a second scan driver221, and a third scan driver231. In an embodiment, the second scan driver221may supply a second scan signal to second scan lines S21to S2nand a third scan signal to third scan lines S31to S3n, based on a second scan start signal FLM2. A pulse width of the third scan signal may be equal to a pulse width of the second scan signal. For example, the third scan signal supplied to the same pixel may be a signal obtained by shifting the second scan signal. For example, the third scan line (for example, S3i) connected to an i-th pixel row (where i is a natural number) may be connected to the second scan line (for example, S2i+k) connected to an (i+k)-th pixel row (where k is a natural number). In an embodiment, the third scan driver231may supply a fourth scan signal to fourth scan lines S41to S4nand a fifth scan signal to fifth scan lines S51to S5n, based on a third scan start signal FLM3. A pulse width of the fifth scan signal may be equal to a pulse width of the fourth scan signal. For example, the fifth scan signal supplied to the same pixel may be a signal obtained by shifting the fourth scan signal. For example, the fifth scan line (for example, S5i) connected to an i-th pixel row (where i is a natural number) may be connected to the fourth scan line (for example, S4i+j) connected to an (i+j)-th pixel row (where j is a natural number). Therefore, the size of the scan driver201included in the display device1000and wiring complexity of the display device1000may be reduced, and manufacturing costs may be reduced (a short sketch following this passage illustrates the line-sharing mapping). However, this is only an example, and the fourth scan signal and the fifth scan signal may be output from different scan drivers in another embodiment. For example, the third scan driver231may supply the fourth scan signal to the fourth scan lines S41to S4n, and an additional fourth scan driver may supply the fifth scan signal to the fifth scan lines S51to S5n. FIG.4is a circuit diagram illustrating an example of a pixel included in the display device ofFIG.1. For convenience of explanation,FIG.4illustrates that a pixel10is positioned on an i-th horizontal line (or an i-th pixel row) and connected to a j-th data line Dj (where i and j are natural numbers). Referring toFIGS.1and4, the pixel10may include a light emitting element LD, first to ninth transistors M1to M9, a storage capacitor Cst, and a first capacitor C1.
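Returning to the scan-line sharing described above: one driver output can serve both the second scan line of row i+k and the third scan line of row i (and likewise the fourth and fifth scan lines with offset j). A minimal Python sketch of that mapping follows; the row count and the offsets k and j are arbitrary example values of our choosing.

```python
# Sketch of the scan-line sharing scheme: row i's third scan line S3i is tied
# to row (i+k)'s second scan line S2(i+k); row i's fifth scan line S5i is tied
# to row (i+j)'s fourth scan line S4(i+j). Offsets are design parameters.
def shared_scan_map(n_rows: int, k: int, j: int) -> dict:
    """Map each shared scan line to the driver output that also feeds it.
    Rows are 1-indexed, matching the S2i/S3i naming in the text. Rows whose
    partner falls past the last row map to None here (a real panel would
    serve them with extra stages; that detail is outside this sketch)."""
    mapping = {}
    for i in range(1, n_rows + 1):
        mapping[f"S3{i}"] = f"S2{i + k}" if i + k <= n_rows else None
        mapping[f"S5{i}"] = f"S4{i + j}" if i + j <= n_rows else None
    return mapping

print(shared_scan_map(n_rows=4, k=2, j=1))
```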
A first electrode (for example, an anode electrode) of the light emitting element LD may be connected to a fifth node N5, and a second electrode (for example, a cathode electrode) of the light emitting element LD may be connected to a second power line PL2through which a second power supply voltage VSS is transmitted. The light emitting element LD may emit light having a predetermined luminance according to the amount of current supplied from the first transistor M1. The second power line PL2may have a line shape, but is not limited thereto. For example, the second power line PL2may be a conductive layer having a conductive plate shape. In an embodiment, the light emitting element LD may be an organic light emitting diode including an organic emission layer. In another embodiment, the light emitting element LD may be an inorganic light emitting element including an inorganic material. In another embodiment, the light emitting element LD may be a light emitting element including an inorganic material and an organic material in combination. Alternatively, the light emitting element LD may have a structure in which a plurality of inorganic light emitting elements are connected in parallel and/or in series between the second power line PL2and the fifth node N5. A first electrode of the first transistor M1(or the driving transistor) may be connected to a first node N1, and a second electrode of the first transistor M1may be connected to a second node N2. A gate electrode of the first transistor M1may be connected to a third node N3. The first transistor M1may control the driving current flowing from the first power line PL1, through which a first power supply voltage VDD is supplied, to the second power line PL2, through which a second power supply voltage VSS is supplied, via the light emitting element LD, in response to the voltage of the third node N3. For example, the first power supply voltage VDD may be set to be higher than the second power supply voltage VSS. The second transistor M2may be connected between a j-th data line Dj (hereinafter, referred to as a “data line”) and the first node N1. A gate electrode of the second transistor M2may be connected to an i-th fourth scan line S4i(hereinafter, referred to as a “fourth scan line”). When the fourth scan signal is supplied to the fourth scan line S4i, the second transistor M2may be turned on to electrically connect the data line Dj to the first node N1. The third transistor M3may be connected between the second electrode of the first transistor M1(that is, the second node N2) and the third node N3(that is, the gate electrode of the first transistor M1). A gate electrode of the third transistor M3may be connected to an i-th second scan line S2i(hereinafter, referred to as a “second scan line”). When the second scan signal is supplied to the second scan line S2i, the third transistor M3may be turned on to electrically connect the second electrode of the first transistor M1to the third node N3. That is, a timing at which the second electrode (for example, a drain electrode) of the first transistor M1is connected to the gate electrode of the first transistor M1may be controlled by the second scan signal. When the third transistor M3is turned on, the first transistor M1may be diode-connected. The fourth transistor M4may be connected between the second node N2and a third power supply line PL3through which a third power supply voltage Vint1(for example, a first initialization voltage) is supplied.
A gate electrode of the fourth transistor M4may be connected to an i-th first scan line S1i(hereinafter, referred to as a “first scan line”). When the first scan signal is supplied to the first scan line S1i, the fourth transistor M4may be turned on to supply the third power supply voltage Vint1to the second node N2. For example, the third power supply voltage Vint1may be set to a voltage lower than the lowest level of the data signal supplied to the data line Dj. The fifth transistor M5may be connected between the first node N1and the fourth node N4. A gate electrode of the fifth transistor M5may be connected to an i-th third scan line S3i(hereinafter, referred to as a “third scan line”). When the third scan signal is supplied to the third scan line S3i, the fifth transistor M5is turned on to supply the first power supply voltage VDD or the voltage of the data signal to the fourth node N4. In an embodiment, the third transistor M3and the fifth transistor M5may be oxide semiconductor transistors. Each of the third transistor M3and the fifth transistor M5may include an oxide semiconductor layer as an active layer (semiconductor layer). For example, the third transistor M3and the fifth transistor M5may be n-type oxide semiconductor transistors. The oxide semiconductor transistor may be processed at a low temperature and has a lower charge mobility than a polysilicon semiconductor transistor; in exchange, the oxide semiconductor transistor has excellent off-current characteristics. Therefore, when the third transistor M3and the fifth transistor M5are provided as oxide semiconductor transistors, it is possible to minimize leakage current through the third transistor M3and the fifth transistor M5during low-frequency driving and variable frequency driving, thereby improving display quality. The sixth transistor M6may be connected between the first power line PL1and the first node N1. A gate electrode of the sixth transistor M6may be connected to an i-th first emission control line E1i(hereinafter, referred to as a “first emission control line”). The sixth transistor M6may be turned off when the first emission control signal is supplied to the first emission control line E1i, and may be turned on in other cases. When the sixth transistor M6is turned on, the first node N1may be electrically connected to the first power line PL1. The seventh transistor M7may be connected between the second node N2and the fifth node N5(for example, the first electrode of the light emitting element LD). A gate electrode of the seventh transistor M7may be connected to an i-th second emission control line E2i(hereinafter, referred to as a “second emission control line”). The seventh transistor M7may be turned off when the second emission control signal is supplied to the second emission control line E2i, and may be turned on in other cases. When the seventh transistor M7is turned on, the second node N2and the fifth node N5may be electrically connected to each other. The eighth transistor M8may be connected between the fifth node N5and the fourth power line PL4through which a fourth power supply voltage Vint2is supplied. A gate electrode of the eighth transistor M8may be connected to an i-th fifth scan line S5i(hereinafter, referred to as a “fifth scan line”). When the fifth scan signal is supplied to the fifth scan line S5i, the eighth transistor M8is turned on to supply the fourth power supply voltage Vint2(for example, the second initialization voltage) to the fifth node N5.
When the fourth power supply voltage Vint2is supplied to the first electrode of the light emitting element LD (that is, the fifth node N5), a parasitic capacitor of the light emitting element LD may be discharged. As the residual voltage charged in the parasitic capacitor is discharged (removed), unintentional fine light emission may be prevented. Therefore, the black expression capability of the pixel10may be improved. On the other hand, the third power supply voltage Vint1and the fourth power supply voltage Vint2may be different from each other. That is, the voltage for initializing the third node N3and the voltage for initializing the fifth node N5may be set differently. When the third power supply voltage Vint1supplied to the third node N3is too low in the low-frequency driving in which the length of one frame period increases, a strong on-bias state is applied to the first transistor M1, and thus a threshold voltage of the first transistor M1in the corresponding frame period is shifted. Such a hysteresis characteristic may cause a flicker phenomenon in the low-frequency driving. Therefore, in the low-frequency-driving display device, a third power supply voltage Vint1higher than the second power supply voltage VSS may be desirable. However, when the fourth power supply voltage Vint2supplied to the fifth node N5is higher than a predetermined reference, the voltage of the parasitic capacitor of the light emitting element LD may be charged rather than discharged. Therefore, the fourth power supply voltage Vint2may desirably be lower than the second power supply voltage VSS. However, this is only an example, and the third power supply voltage Vint1and the fourth power supply voltage Vint2may be substantially equal to each other in another embodiment. The ninth transistor M9may be connected between the first node N1and the fifth power line PL5through which a fifth power supply voltage Vbias (for example, a bias voltage) is supplied. A gate electrode of the ninth transistor M9may be connected to the fifth scan line S5i. When the fifth scan signal is supplied to the fifth scan line S5i, the ninth transistor M9is turned on to supply the fifth power supply voltage Vbias to the first node N1. In an embodiment, the fifth power supply voltage Vbias may be at a level similar to a data voltage of a black gray scale. For example, the fifth power supply voltage Vbias may be about 5 volts (V) to about 7 V. Therefore, when the ninth transistor M9is turned on, a predetermined high voltage may be applied to a source electrode of the first transistor M1. At this time, when the third transistor M3is in a turned-off state, the first transistor M1may have an on-bias state (a state capable of being turned on) (that is, may be on-biased). As the fifth power supply voltage Vbias is periodically supplied to the first node N1, the bias state of the first transistor M1may be periodically changed and the threshold voltage characteristic of the first transistor M1may be changed. Therefore, degradation of the first transistor M1caused by its characteristics being fixed to a specific state in low-frequency driving may be prevented. The storage capacitor Cst may be connected between the third node N3and the fourth node N4. The storage capacitor Cst may store a voltage difference between the third node N3and the fourth node N4. The first capacitor C1may be connected between the first power line PL1and the fourth node N4.
The first power supply voltage VDD, which is a constant voltage, may be continuously supplied to one electrode of the first capacitor C1. Therefore, the voltage of the fourth node N4may not be affected by other parasitic capacitors, and voltage levels directly supplied to the fourth node N4may be maintained. That is, the first capacitor C1may function as a hold capacitor. Some transistors of the pixel10may be polysilicon semiconductor transistors. For example, the first, second, fourth, sixth, seventh, eighth, and ninth transistors M1, M2, M4, M6, M7, M8, and M9may include polysilicon semiconductor layers formed through a low temperature poly-silicon (“LTPS”) process as active layers (channels). Since the polysilicon semiconductor transistor has an advantage of a fast response time, the polysilicon semiconductor transistor may be applied to a switching device for fast switching. However, this is an example, and the types and kinds of transistors according to the invention are not limited to the above-described examples. FIG.5is a timing diagram illustrating an example of signals supplied to the pixel ofFIG.4in a first driving period, andFIG.6is a timing diagram illustrating an example of signals supplied to the pixel ofFIG.4in a second driving period. Referring toFIGS.4,5, and6, the pixel10may operate through a first driving period DP1or a second driving period DP2. In variable frequency driving for controlling the frame frequency, one frame period may include the first driving period DP1. In addition, the second driving period DP2may be omitted or may proceed at least once depending on the frame frequency. The first driving period DP1may include a first non-emission period NEP1and a first emission period EP1. The second driving period DP2may include a second non-emission period NEP2and a second emission period EP2. The first driving period DP1may include a period (for example, a third period P3) in which a data signal actually corresponding to an output image is written. A data signal is not supplied in the second driving period DP2, and a fifth scan signal may be supplied in order to control the first transistor M1of the pixel10to an on-bias state in a fifth period P5of the second driving period DP2. As illustrated inFIG.5, the first non-emission period NEP1may include first to fourth periods P1to P4and first and second compensation periods CP1and CP2. In an embodiment, the pulse width of the third scan signal supplied to the third scan line S3imay be equal to the width of the second scan signal supplied to the second scan line S2i. For example, the third scan signal supplied to the third scan line S3imay be a signal obtained by shifting the second scan signal supplied to the second scan line S2i. Therefore, the third scan line S3imay share a scan signal with the second scan line S2i+k of the (i+k)-th pixel row, where k is a natural number. In an embodiment, each of the pulse widths of the second and third scan signals may be greater than each of the pulse width of the first scan signal, the pulse width of the fourth scan signal, and the pulse width of the fifth scan signal. The second and third scan signals supplied to the n-type oxide semiconductor transistors may be at a high level, and the first scan signal, the fourth scan signal, and the fifth scan signal supplied to the p-type polysilicon semiconductor transistors may be at a low level. 
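Since every device of the pixel10and its control line has now been introduced, the topology of FIG. 4 can be restated compactly as data before walking through the timing. This is a bookkeeping sketch of our own; the tuple format and the query at the end are illustrative, not part of the specification.

```python
# The pixel of FIG. 4 as data: (device, terminal, terminal, control input).
# Node, line, and device names follow the text; the layout itself is ours.
PIXEL_10 = [
    ("M1",  "N1",  "N2",  "N3"),   # driving transistor
    ("M2",  "Dj",  "N1",  "S4i"),  # data write
    ("M3",  "N2",  "N3",  "S2i"),  # diode connection (oxide, n-type)
    ("M4",  "N2",  "PL3", "S1i"),  # first initialization (Vint1)
    ("M5",  "N1",  "N4",  "S3i"),  # couples N4 to N1 (oxide, n-type)
    ("M6",  "PL1", "N1",  "E1i"),  # first emission control
    ("M7",  "N2",  "N5",  "E2i"),  # second emission control
    ("M8",  "N5",  "PL4", "S5i"),  # anode initialization (Vint2)
    ("M9",  "N1",  "PL5", "S5i"),  # on-bias (Vbias)
    ("Cst", "N3",  "N4",  None),   # storage capacitor
    ("C1",  "PL1", "N4",  None),   # hold capacitor
    ("LD",  "N5",  "PL2", None),   # light emitting element
]

# Example query: which devices switch on the fifth scan line S5i?
print([name for name, _a, _b, ctrl in PIXEL_10 if ctrl == "S5i"])  # M8, M9
```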
In an embodiment, the pulse width of the fourth scan signal supplied to the fourth scan line S4imay be equal to the pulse width of the fifth scan signal supplied to the fifth scan line S5i. For example, the fourth scan signal supplied to the fourth scan line S4imay be a signal obtained by shifting the fifth scan signal supplied to the fifth scan line S5i. Therefore, the fourth scan line S4imay share a scan signal with the fifth scan line S5i+j of the (i+j)-th pixel row, where j is a natural number. In an embodiment, the waveform of the first emission control signal may be different from the waveform of the second emission control signal in the first non-emission period NEP1. For example, the first emission control signal may be supplied a plurality of times during the first non-emission period NEP1. In the first and second compensation periods CP1and CP2, the supply of the first emission control signal may be stopped (that is, the first emission control signal may have a low level). The second emission control signal may be supplied during the first non-emission period NEP1and may maintain a high level. When the supply of the first and second emission control signals E1iand E2iis started (that is, transitioned to a high level), the first non-emission period NEP1may be started. Thereafter, in the first period P1, the first scan signal may be supplied to the first scan line S1iand the second scan signal may be supplied to the second scan line S2i. The supply of the second scan signal may be maintained until the third period P3. AlthoughFIG.5illustrates that the first scan signal is supplied after the second scan signal is supplied, the present invention is not limited thereto. For example, at the start of the first period P1, the second scan signal may simultaneously transition together with the first scan signal in another embodiment. In the first period P1, the third transistor M3and the fourth transistor M4may be turned on, and the third power supply voltage Vint1may be supplied to the third node N3. Therefore, the voltage of the third node N3(that is, the gate voltage of the first transistor M1) may be initialized to the third power supply voltage Vint1. In this case, a voltage of a data signal of a previous frame (hereinafter referred to as a “previous data voltage”) may be substantially maintained at the fourth node N4by the voltage holding operation of the first capacitor C1. The first period P1is a period for initializing the voltage of the third node N3and may be understood as a first initialization period. After the first period P1, the fourth transistor M4may be turned off. Thereafter, the third scan signal may be supplied to the third scan line S3i, and the fifth transistor M5may be turned on. The supply of the third scan signal may be maintained until the fourth period P4. After the third scan signal is supplied, in the first compensation period CP1, the supply of the first emission control signal may be stopped, and the sixth transistor M6may be turned on. Therefore, a current path from the first power line PL1to the fourth node N4via the sixth transistor M6and the fifth transistor M5may be formed, and the first power supply voltage VDD may be supplied to the fourth node N4. In addition, since the third transistor M3is in a turned-on state in the first compensation period CP1, the first transistor M1may be diode-connected and the threshold voltage compensation of the first transistor M1may be performed.
That is, the first compensation period CP1may be determined by the length of the period in which the first emission control signal is not supplied. For example, the first compensation period CP1may be set to three or more horizontal periods. Therefore, a sufficient threshold voltage compensation time may be secured. However, this is an example, and the length of the first compensation period CP1according to the invention is not limited thereto, and the design may be freely changed according to driving conditions or the like. On the other hand, in the first compensation period CP1, the voltage of the fourth node N4may be changed from the previous data voltage to the first power supply voltage VDD, and the voltage change amount of the fourth node N4may be reflected in the third node N3due to the coupling of the storage capacitor Cst. Therefore, the voltage of the third node N3does not simply become the difference between the first power supply voltage VDD and the threshold voltage (hereinafter referred to as “Vth”) of the first transistor M1; the voltage change due to the coupling is reflected in it as well. That is, in the first compensation period CP1, complete threshold voltage compensation cannot be performed due to the influence of the previous data voltage. When the first emission control signal is supplied again, the sixth transistor M6may be turned off and the first compensation period CP1may be ended. Thereafter, in the second period P2, the first scan signal may be supplied again to the first scan line S1i, and the fourth transistor M4may be turned on. Therefore, the voltage of the third node N3may be initialized again to the third power supply voltage Vint1. In this case, the first power supply voltage VDD may be maintained at the fourth node N4by the voltage hold operation of the first capacitor C1. The second period P2is a period for initializing the voltage of the third node N3again and may be understood as a second initialization period. After the second period P2, the fourth transistor M4may be turned off again. Thereafter, in the second compensation period CP2, the supply of the first emission control signal may be stopped, and the sixth transistor M6may be turned on again. Therefore, a current path from the first power line PL1to the fourth node N4via the sixth transistor M6and the fifth transistor M5may be formed, and the first power supply voltage VDD may be supplied to the fourth node N4. In addition, since the third transistor M3is in a turned-on state, the first transistor M1may be diode-connected and the threshold voltage compensation of the first transistor M1may be performed again. The second compensation period CP2may be determined by the length of the period in which the first emission control signal is not supplied. For example, the second compensation period CP2may be set to three or more horizontal periods. Since the first power supply voltage VDD is already supplied to the fourth node N4before the second compensation period CP2, the coupling effect of the storage capacitor Cst may be substantially removed. That is, since there is little change in the voltage of the fourth node N4, the voltage of the third node N3may be changed to a difference (hereinafter, “VDD-Vth”) between the first power supply voltage VDD and the threshold voltage Vth of the first transistor M1. Therefore, the threshold voltage Vth of the first transistor M1may be stored in the storage capacitor Cst.
When the first emission control signal is supplied again, the sixth transistor M6 may be turned off and the second compensation period CP2 may be ended.

As such, based on the supply control of the first emission control signal, the initialization periods (for example, the first and second periods P1 and P2) and the compensation periods (for example, the first and second compensation periods CP1 and CP2) are alternately repeated to sufficiently secure the compensation time, and the influence of the previous data voltage may be effectively removed in the threshold voltage compensation. Therefore, the reliability of threshold voltage compensation in high-speed driving at a frame frequency of 120 Hz or more may be greatly improved. On the other hand, although FIG. 5 illustrates that the sequence of the initialization period and the compensation period is repeated twice, the present invention is not limited thereto. For example, the sequence of the initialization period and the compensation period may be alternately repeated three or more times in another embodiment.

Thereafter, the supply of the second scan signal may be stopped and the third transistor M3 may be turned off. However, this is only an example, and the supply of the second scan signal may be stopped simultaneously with the end of the second compensation period CP2 in another embodiment.

In the third period P3, the fourth scan signal may be supplied to the fourth scan line S4i and the second transistor M2 may be turned on. In addition, in the third period P3, the fifth transistor M5 may be in a turned-on state. A voltage of a data signal of a current frame (for example, referred to as a current data voltage "Vdata") may be supplied to the fourth node N4 through the second transistor M2 and the fifth transistor M5. The voltage of the fourth node N4 may be changed from the first power supply voltage VDD to the current data voltage Vdata in the third period P3. Due to the coupling of the storage capacitor Cst, the voltage of the third node N3 may have a value in which the coupling is reflected onto the existing difference between the first power supply voltage VDD and the threshold voltage Vth of the first transistor M1 (for example, VDD-Vth+(Vdata-VDD)). That is, in the voltage of the third node N3, only a value of Vdata-Vth remains, and thereafter, the driving current may have a value corresponding to the data voltage Vdata.

Thereafter, the supply of the third scan signal may be stopped and the fifth transistor M5 may be turned off. Therefore, the voltage of the third node N3 and the voltage of the fourth node N4 may each be maintained. However, this is only an example, and the supply of the third scan signal may be stopped simultaneously with the end of the third period P3 in another embodiment.

In the fourth period P4, the fifth scan signal may be supplied to the fifth scan line S5i, and the eighth transistor M8 and the ninth transistor M9 may be turned on. When the eighth transistor M8 is turned on, the fourth power supply voltage Vint2 may be supplied to the fifth node N5, and the parasitic capacitor of the light emitting element LD may be discharged. When the ninth transistor M9 is turned on, the fifth power supply voltage Vbias may be supplied to the first node N1, and the first transistor M1 may be controlled to an on-bias state before light emission of the light emitting element LD.
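A one-line check of the data-write coupling described for the third period P3, in the document's own notation: V_N3 = (VDD-Vth) + (Vdata-VDD) = Vdata-Vth. Because the stored gate voltage carries the -Vth term, the threshold voltage cancels out of the gate overdrive of the first transistor M1; treating Vth as the magnitude subtracted in the VDD-Vth expression and assuming (as a sketch) that the source of M1 sits near VDD during emission, the square-law overdrive becomes (VDD - (Vdata - Vth)) - Vth = VDD - Vdata, so the driving current depends on the data voltage Vdata alone and not on the pixel-to-pixel threshold variation.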
Thereafter, the supply of the first and second emission control signals may be stopped, so that the first non-emission period NEP1 may be ended and the first emission period EP1 may start. In the first emission period EP1, the sixth and seventh transistors M6 and M7 may be turned on. In the first emission period EP1, a driving current corresponding to the current data voltage Vdata written to the first transistor M1 in the fourth period P4 may be supplied to the light emitting element LD, and the light emitting element LD may emit light based on the driving current.

As illustrated in FIG. 6, the second driving period DP2 may include a second non-emission period NEP2 and a second emission period EP2. In an embodiment, the first and second emission control signals may be supplied without interruption during the second non-emission period NEP2. That is, during the second non-emission period NEP2, the first and second emission control signals may have a high level. In an embodiment, in the second non-emission period NEP2, the first to fourth scan signals may not be supplied and the second to seventh transistors M2 to M7 may be in a turned-off state. In the second non-emission period NEP2, the fifth scan signal may be supplied to the fifth scan line S5i, and the eighth and ninth transistors M8 and M9 may be turned on. Therefore, each time the second driving period DP2 is inserted, the first transistor M1 may be periodically controlled to an on-bias state.

As described above, the pixel 10 and the display device 1000 including the same according to embodiments of the present invention may extend and secure the threshold voltage compensation time while removing the influence of the previous data voltage, through the control of the first emission control signal in the pixel circuit structure as illustrated in FIG. 4. Therefore, the image quality of high-speed driving at a frame frequency of 120 Hz or more may also be effectively improved. In addition, since the pixel 10 is driven using the first and second driving periods DP1 and DP2, image quality for various frame frequencies may be improved.

FIGS. 7A to 7C are diagrams for describing examples of driving of the display device of FIG. 1 according to a frame frequency. Referring to FIGS. 1 and 5 to 7C, the display device 1000 may be driven at various frame frequencies. The frequency of the first driving period DP1 may correspond to the frame frequency.

In an embodiment, as illustrated in FIG. 7A, a first frame FRa may include a first driving period DP1. For example, when the frequency of the first driving period DP1 is 240 Hz, the first frame FRa may be driven at 240 Hz. In other words, the length of each of the first driving period DP1 and the first frame FRa may be about 4.17 milliseconds (ms).

In an embodiment, as illustrated in FIG. 7B, a second frame FRb may include a first driving period DP1 and a second driving period DP2. For example, the first driving period DP1 and the second driving period DP2 may be alternately repeated. In this case, the second frame FRb may be driven at 120 Hz. In other words, the length of each of the first driving period DP1 and the second driving period DP2 may be about 4.17 ms, and the length of the second frame FRb may be about 8.33 ms.

In an embodiment, as illustrated in FIG. 7C, a third frame FRc may include one first driving period DP1 and a plurality of repeated second driving periods DP2. For example, when the third frame FRc is driven at 1 Hz, the length of the third frame FRc is about 1 second, and the second driving period DP2 within the third frame FRc is repeated about 239 times. As such, by controlling the number of repetitions of the second driving period DP2 within one frame, the display device 1000 may be freely driven at various frame frequencies (for example, 1 Hz to 480 Hz).
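The repetition-count arithmetic can be sketched in a few lines of Python. The 240 Hz base rate is taken from the FIG. 7A example; treating it as fixed for all target frame frequencies is an assumption for illustration:

# Sketch of the repetition-count arithmetic above, assuming each
# driving period (DP1 or DP2) lasts 1/240 s (~4.17 ms) and that every
# frame contains exactly one first driving period DP1.
DP_HZ = 240.0   # frequency of one driving period, assumed

def dp2_repetitions(frame_hz: float) -> int:
    """Number of second driving periods DP2 inserted per frame."""
    periods_per_frame = DP_HZ / frame_hz        # total DP1 + DP2 slots
    return int(round(periods_per_frame)) - 1    # all slots but one are DP2

for hz in (240, 120, 60, 1):
    n = dp2_repetitions(hz)
    print(f"{hz:>3} Hz frame: 1 x DP1 + {n} x DP2, "
          f"frame length = {1000.0 / hz:.2f} ms")

This reproduces the cases in the text: 240 Hz needs no DP2, 120 Hz alternates DP1 and DP2, and a 1 Hz frame repeats DP2 about 239 times.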
FIG. 8 is a circuit diagram illustrating another example of a pixel included in the display device of FIG. 1, and FIG. 9 is a timing diagram illustrating an example of signals supplied to the pixel of FIG. 8 in a first driving period. Since a pixel 11 of FIG. 8 has the same configuration and operation as the pixel 10 described with reference to FIG. 4, except for a fifth transistor M5 and a second scan signal, the same reference numerals are used to refer to the same or corresponding components, and redundant descriptions thereof are omitted.

Referring to FIGS. 1, 8, and 9, the pixel 11 may include a light emitting element LD, first to ninth transistors M1 to M9, a storage capacitor Cst, and a first capacitor C1. In an embodiment, a gate electrode of the third transistor M3 and a gate electrode of the fifth transistor M5 may be commonly connected to a second scan line S2i. Therefore, the third transistor M3 and the fifth transistor M5 may be controlled in common.

In an embodiment, the supply of the second scan signal to the second scan line S2i may be started before the first period P1 and may be stopped before the fourth period P4. Therefore, the third transistor M3 and the fifth transistor M5 may be in a turned-on state in the first period P1, the first compensation period CP1, the second period P2, the second compensation period CP2, and the third period P3. For example, unlike the embodiment of FIG. 5, even when the fifth transistor M5 is turned on in the first period P1, the sixth transistor M6 is in a turned-off state, and thus the initialization of the voltage of the third node N3 is not affected. In addition, unlike the embodiment of FIG. 5, even when the third transistor M3 is turned on in the third period P3, the fourth and seventh transistors M4 and M7 are in a turned-off state, and thus data writing is not affected. These harmless-overlap cases are summarized in the sketch below. Therefore, the structure of the pixel 11 and the display device 1000 driving the same may be simplified, and manufacturing costs may be reduced compared to the pixel 10.
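The reasoning for sharing one scan line between M3 and M5 can be tabulated as follows. This is a summary aid in Python, with the on/off states read off the description above and the period labels of FIG. 5, not a circuit simulation:

# Transistor states per sub-period of the first non-emission period for
# the pixel 11 variant, where M3 and M5 share the second scan line S2i.
# M2 is on only during P3 (data write) and M7 stays off throughout the
# non-emission period, per the description above.
ON_STATES = {
    # period:               (M3/M5 shared, M4,    M6)
    "P1  (initialize 1)":   (True,         True,  False),  # M6 off: M5 on is harmless
    "CP1 (compensate 1)":   (True,         False, True),
    "P2  (initialize 2)":   (True,         True,  False),
    "CP2 (compensate 2)":   (True,         False, True),
    "P3  (data write)":     (True,         False, False),  # M4/M7 off: M3 on is harmless
}

for period, (m3_m5, m4, m6) in ON_STATES.items():
    print(f"{period:<20} M3/M5={'on' if m3_m5 else 'off':<4} "
          f"M4={'on' if m4 else 'off':<4} M6={'on' if m6 else 'off'}")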
FIG. 10 is a circuit diagram illustrating still another example of a pixel included in the display device of FIG. 1. Since a pixel 12 of FIG. 10 has the same configuration and operation as the pixel 10 described with reference to FIG. 4, except for a second capacitor C2, the same reference numerals are used to refer to the same or corresponding components, and redundant descriptions thereof are omitted.

Referring to FIGS. 1 and 10, the pixel 12 may include a light emitting element LD, first to ninth transistors M1 to M9, a storage capacitor Cst, a first capacitor C1, and a second capacitor C2. In an embodiment, the second capacitor C2 may be connected between the fourth node N4 and one of the first scan line S1i, the fourth scan line S4i, and the fifth scan line S5i. The second capacitor C2 may function as a boosting capacitor.

For example, the third and fifth scan signals, which control the third and fifth transistors M3 and M5 that are n-type transistors, have a high level. Therefore, when the third transistor M3 and/or the fifth transistor M5 is turned off, the third scan signal and/or the fifth scan signal transitions from a high level to a low level, and the voltage level of the third node N3 may drop due to coupling by a parasitic component, such as a parasitic capacitance between the corresponding scan lines (i.e., the one of the first scan line S1i, the fourth scan line S4i, and the fifth scan line S5i) and the third node N3 and/or the fourth node N4. The second capacitor C2 may be used to compensate for this unintended voltage drop at the third node N3.

For example, one end of the second capacitor C2 may be connected to one of the scan lines controlling the p-type transistors. For example, when one end of the second capacitor C2 is connected to the fourth scan line S4i, the voltage of the fourth node N4 may be increased by stopping the supply of the fourth scan signal to the fourth scan line S4i (that is, the fourth scan signal transitions from a low level to a high level). In addition, as the voltage of the fourth node N4 increases, the voltage of the third node N3 may increase. Therefore, the voltage drop at the third node N3 according to the control of the n-type transistor (e.g., the third transistor M3) may be compensated for. A timing at which the voltage of the third node N3 is increased by boosting due to the coupling of the second capacitor C2 may be any timing during the first non-emission period (for example, NEP1 in FIG. 5). As described above, by adding the second capacitor C2 to the pixel 12, the voltage drop at the third node N3 according to the control of the n-type transistor may be compensated for, and image quality may be effectively improved.

FIG. 11 is a circuit diagram illustrating yet another example of a pixel included in the display device of FIG. 1. Since a pixel 13 of FIG. 11 has the same configuration and operation as the pixel 12 described with reference to FIG. 10, except for a fifth transistor M5 and a second scan signal, the same reference numerals are used to refer to the same or corresponding components, and redundant descriptions thereof are omitted. Referring to FIGS. 1 and 11, the pixel 13 may include a light emitting element LD, first to ninth transistors M1 to M9, a storage capacitor Cst, a first capacitor C1, and a second capacitor C2. In an embodiment, a gate electrode of the third transistor M3 and a gate electrode of the fifth transistor M5 may be commonly connected to a second scan line S2i. Therefore, the third transistor M3 and the fifth transistor M5 may be controlled in common. Therefore, the structure of the pixel 13 and the display device 1000 driving the same may be simplified, and manufacturing costs may be reduced.

FIG. 12 is a circuit diagram illustrating another example of a pixel included in the display device of FIG. 1. Since a pixel 14 of FIG. 12 has the same configuration and operation as the pixel 12 described with reference to FIG. 10, except for a second capacitor C2, the same reference numerals are used to refer to the same or corresponding components, and redundant descriptions thereof are omitted. Referring to FIGS. 1 and 12, the pixel 14 may include a light emitting element LD, first to ninth transistors M1 to M9, a storage capacitor Cst, a first capacitor C1, and a second capacitor C2. In an embodiment, the second capacitor C2 may be connected between the third node N3 and one of the first scan line S1i, the fourth scan line S4i, and the fifth scan line S5i. The second capacitor C2 may function as a boosting capacitor. For example, when one end of the second capacitor C2 is connected to the fourth scan line S4i, the voltage of the third node N3 may be increased by stopping the supply of the fourth scan signal to the fourth scan line S4i (that is, the fourth scan signal transitions from a low level to a high level). Therefore, the voltage drop at the third node N3 according to the control of the n-type transistor (e.g., the third transistor M3) may be compensated for.
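The size of such a boost can be estimated with ordinary capacitive-divider arithmetic. The capacitor values and scan swing below are illustrative assumptions, not values from the disclosure:

# Idealized coupling estimate for the boosting capacitor C2: when the
# fourth scan signal on S4i returns to a high level, the rising edge
# couples into the target node through C2 in proportion to the divider
# formed with the rest of the node capacitance.
C2     = 2e-15    # boosting capacitor (F), assumed
C_NODE = 18e-15   # total other capacitance seen at the node (F), assumed
SWING  = 7.0      # low-to-high swing of the fourth scan signal (V), assumed

boost = SWING * C2 / (C2 + C_NODE)   # capacitive divider at the node
print(f"boost at the node ~= {boost * 1000:.0f} mV")   # ~700 mV here

In a real design the values would be chosen so that this boost roughly matches the droop caused by the n-type scan signals' falling edges.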
As described above, since the pixel and the display device including the same according to the embodiments of the present invention include the n-type oxide semiconductor transistors, it is possible to prevent image quality deterioration due to current leakage in the pixel during low-frequency driving. In addition, it is possible to extend and secure the threshold voltage compensation time while removing the influence of the previous data voltage (the voltage of the data signal of the previous frame) through the control of the first emission control signal. Therefore, the image quality of high-speed driving at a frame frequency of 120 Hz or more may also be improved. Furthermore, since the pixel is driven using the first and second driving periods, image quality for various frame frequencies may be effectively improved.

However, the effects of the present invention are not limited to the above-described effects, and may be variously expanded without departing from the spirit and scope of the present invention. Although the present invention has been described with reference to the embodiments, it will be understood by those skilled in the art that various modifications and changes can be made thereto without departing from the spirit and scope of the present invention as set forth in the appended claims. | 53,686 |
11862102 | DETAILED DESCRIPTION The invention now will be described more fully hereinafter with reference to the accompanying drawings, in which various embodiments are shown. This invention may, however, be embodied in many different forms, and should not be construed as limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the invention to those skilled in the art. It will be understood that when an element or layer (or region, portion, and the like) is referred to as being “on”, “connected to”, or “coupled to” another element or layer, it can be directly on, connected to, or coupled to the other element or layer, or intervening elements or layers may be present. In contrast, when an element is referred to as being “directly on” another element, there are no intervening elements present. Like reference numerals refer to like elements throughout. In the figures, the thicknesses, ratios, and dimensions of elements are exaggerated for effective description of the technical contents. “Or” means “and/or.” As used herein, the term “and/or” includes any and all combinations of one or more of the associated listed items. It will be understood that, although the terms first, second, etc. may be used herein to describe various elements, components, regions, layers, and/or sections, these elements, components, regions, layers, and/or sections should not be limited by these terms. These terms are only used to distinguish one element, component, region, layer, or section from another element, component, region, layer, or section. Thus, a first element, component, region, layer, or section discussed below could be termed a second element, component, region, layer, or section without departing from the teachings of the invention. As used herein, the singular forms, “a”, “an”, and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. For example, “an element” has the same meaning as “at least one element,” unless the context clearly indicates otherwise. “At least one” is not to be construed as limiting “a” or “an.” Spatially relative terms, such as “beneath”, “below”, “lower”, “above”, and “upper”, may be used herein for ease of description to describe one element or feature's relationship to another element(s) or feature(s) as illustrated in the figures. It will be understood that the spatially relative terms are intended to encompass different orientations of the device in use or operation in addition to the orientation depicted in the figures. It will be further understood that the terms “comprises” and/or “comprising,” or “includes” and/or “including”, when used in this specification, specify the presence of stated features, integers, steps, operations, elements, components, and/or groups thereof, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. Unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. 
It will be further understood that terms, such as those defined in commonly used dictionaries, should be interpreted as having meaning that is consistent with their meaning in the context of the relevant art and should not be interpreted in an overly idealized or overly formal sense unless expressly so defined herein.

Embodiments described herein should not be construed as limited to the particular shapes of regions as illustrated herein but are to include deviations in shapes that result, for example, from manufacturing. For example, a region illustrated or described as flat may, typically, have rough and/or nonlinear features. Moreover, sharp angles that are illustrated may be rounded. Thus, the regions illustrated in the figures are schematic in nature and their shapes are not intended to illustrate the precise shape of a region and are not intended to limit the scope of the present claims.

Hereinafter, embodiments of the invention will be described in detail with reference to the accompanying drawings.

FIG. 1 is a block diagram of a display device according to an embodiment of the invention, and FIG. 2 is a block diagram illustrating the controller and the source driver illustrated in FIG. 1. Referring to FIG. 1 and FIG. 2, an embodiment of a display device DD according to the invention may be a device that is activated based on an electrical signal to display an image. The display device DD may be applied to an electronic device such as a smart watch, a tablet, a laptop computer, a computer, or a smart television.

The display device DD may include a display panel DP, a controller 100, a source driver 200, and a scan driver 300. In an embodiment of the invention, the source driver 200 may include a data driver 210 and a sensing circuit (or a sensing driver) 220.

The display panel DP includes a plurality of driving scan lines DSL1 to DSLn, a plurality of sensing scan lines SSL1 to SSLn, a plurality of data lines DL1 to DLm, a plurality of readout lines RL1 to RLm, and a plurality of pixels PX. The driving scan lines DSL1 to DSLn may each extend in a first direction DR1 and may be arranged in a second direction DR2. The sensing scan lines SSL1 to SSLn may each extend in the first direction DR1 and may be arranged in the second direction DR2. The second direction DR2 may be a direction crossing the first direction DR1. The data lines DL1 to DLm may each extend in the second direction DR2 and may be arranged in the first direction DR1, and the readout lines RL1 to RLm may each extend in the second direction DR2 and may be arranged in the first direction DR1.

Each of the plurality of pixels PX is electrically connected to a corresponding one of the driving scan lines DSL1 to DSLn, a corresponding one of the sensing scan lines SSL1 to SSLn, a corresponding one of the data lines DL1 to DLm, and a corresponding one of the readout lines RL1 to RLm. Each of the plurality of pixels PX may be electrically connected to two scan lines. In an embodiment, for example, as illustrated in FIG. 2, a first pixel PX11 of the plurality of pixels PX may be connected to a first driving scan line DSL1, a first sensing scan line SSL1, a first data line DL1, and a first readout line RL1.

Each of the plurality of pixels PX may include a light emitting element ED (see FIG. 6A) and a pixel driving circuit (or a pixel circuit) PXC (see FIG. 6A) for controlling light emission of the light emitting element ED. The pixel driving circuit PXC may include a plurality of transistors and a capacitor.
The controller 100 receives an image signal RGB and a control signal CTRL. The controller 100 generates an image data signal DATA obtained by converting the data format of the image signal RGB based on (or to correspond to) the interface specification between the controller 100 and the source driver 200. The controller 100 outputs a scan control signal GCS and a source control signal DCS. The source control signal DCS may include a data control signal DCS1 for controlling the driving of the data driver 210 and a sensing control signal DCS2 for controlling the driving of the sensing circuit 220.

The data driver 210 receives the data control signal DCS1 and the image data signal DATA from the controller 100. The data driver 210 converts the image data signal DATA into data signals and outputs the data signals to the plurality of data lines DL1 to DLm. The data signals may be analog voltages corresponding to the gradation (or grayscale) values of the image data signal DATA.

The sensing circuit 220 receives the sensing control signal DCS2 from the controller 100. The sensing circuit 220 may sense the display panel DP in response to the sensing control signal DCS2. The sensing circuit 220 may sense characteristics of elements included in each of the pixels PX of the display panel DP from the plurality of readout lines RL1 to RLm.

In an embodiment of the invention, the source driver 200 may be formed as, or defined by, at least one chip. In an embodiment, for example, where the source driver 200 is formed as a single chip, the data driver 210 and the sensing circuit 220 may be embedded in the chip. Each of the data driver 210 and the sensing circuit 220 may be provided in plural. In an embodiment where the source driver 200 is formed of a plurality of chips, each of the data drivers 210 and each of the sensing circuits 220 may be embedded in a corresponding one of the plurality of chips. Although an embodiment may have a structure in which the data driver 210 and the sensing circuit 220 are embedded in the source driver 200, an embodiment of the invention is not limited thereto. In an alternative embodiment, for example, the data driver 210 and the sensing circuit 220 may be formed in the form of separate chips.

In an embodiment, as shown in FIG. 2, the controller 100 includes a compensation memory 120 that stores sensing data SD for data compensation and a compensation unit 110 that compensates the image data signal DATA based on the sensing data SD. The compensation memory 120 may receive and store the sensing data SD sensed through the sensing circuit 220. The compensation unit 110 may read the sensing data SD stored in the compensation memory 120 and may compensate the image data signal DATA based on the read sensing data SD.

The controller 100 may drive the sensing circuit 220 in a period (e.g., a power-on period) in which power is applied to the display device DD, or in a certain period (e.g., a blank period) of each of the frames in which the display device DD displays an image. The elements such as the light emitting element ED and the transistors included in each of the pixels PX may deteriorate in proportion to the driving time, and characteristics (e.g., a threshold voltage) thereof may be degraded. To compensate therefor, the sensing circuit 220 may sense characteristics of elements included in one or more of the pixels PX and may feed the sensed sensing data SD back to the controller 100. The controller 100 may correct the image data signal DATA to be written in the pixels PX, based on the sensing data SD fed back from the sensing circuit 220.
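A minimal sketch of this external-compensation loop, assuming a simple linear gain/offset ageing model. The model, names, and numbers here are illustrative assumptions; the disclosure does not specify the compensation algorithm used by the compensation unit 110:

# Sensed per-pixel characteristics (the sensing data SD) are used to
# correct a grayscale code before it is written back to the panel.
from dataclasses import dataclass

@dataclass
class SensingData:
    vth_shift: float    # sensed threshold-voltage drift (V), assumed model
    gain_drift: float   # sensed mobility/ageing gain factor, assumed model

def compensate(gray: int, sd: SensingData, volts_per_gray: float = 0.01) -> int:
    """Return a corrected grayscale code for one pixel."""
    v_target = gray * volts_per_gray
    v_corrected = v_target / sd.gain_drift + sd.vth_shift
    return max(0, min(255, round(v_corrected / volts_per_gray)))

sd = SensingData(vth_shift=0.05, gain_drift=0.92)   # example sensed values
print(compensate(128, sd))   # code is bumped up to offset the ageing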
The scan driver 300 receives the scan control signal GCS from the controller 100. The scan driver 300 may output scan signals in response to the scan control signal GCS. The scan driver 300 may be formed in the form of a chip and mounted on the display panel DP. Alternatively, the scan driver 300 may be embedded in the display panel DP. In an embodiment where the scan driver 300 is embedded in the display panel DP, the scan driver 300 may include transistors formed through a same process as the pixel driving circuit PXC.

The scan driver 300 may generate a plurality of driving scan signals SC1 to SCn (see FIG. 7) and a plurality of sensing scan signals SS1 to SSn (see FIG. 7) in response to the scan control signal GCS. The plurality of driving scan signals SC1 to SCn are respectively applied to the driving scan lines DSL1 to DSLn, and the plurality of sensing scan signals SS1 to SSn are respectively applied to the sensing scan lines SSL1 to SSLn.

FIG. 3A and FIG. 3B are conceptual diagrams illustrating connection relationships between pixels and readout lines according to embodiments of the invention. Referring to FIGS. 1, 2, and 3A, in an embodiment, the plurality of pixels PX may include a plurality of red pixels, a plurality of green pixels, and a plurality of blue pixels. A first red pixel PX_R of the plurality of red pixels is connected to the first data line DL1 and the first readout line RL1. A first green pixel PX_G of the plurality of green pixels is connected to a second data line DL2 and a second readout line RL2. A first blue pixel PX_B of the plurality of blue pixels is connected to a third data line DL3 and a third readout line RL3.

In an embodiment of the invention, the first to third readout lines RL1 to RL3 may be electrically connected to a common readout line CRL1. In an embodiment where the first to third readout lines RL1 to RL3 are electrically connected to each other through the common readout line CRL1, the sensing circuit 220 may simultaneously sense the characteristics of elements respectively included in the first red pixel PX_R, the first green pixel PX_G, and the first blue pixel PX_B. The first pixel PX11 illustrated in FIG. 2 may be one of the first red pixel PX_R, the first green pixel PX_G, and the first blue pixel PX_B.

Although FIG. 3A exemplarily illustrates an embodiment where the first to third readout lines RL1 to RL3 are electrically connected to each other, an embodiment of the invention is not limited thereto. Alternatively, two adjacent readout lines among the plurality of readout lines RL1 to RLm may be electrically connected to each other, or four adjacent readout lines among the plurality of readout lines RL1 to RLm may be electrically connected to each other.

The first red pixel PX_R, the first green pixel PX_G, and the first blue pixel PX_B may be connected to the first driving scan line DSL1 among the plurality of driving scan lines DSL1 to DSLn and the first sensing scan line SSL1 among the plurality of sensing scan lines SSL1 to SSLn. The first red pixel PX_R, the first green pixel PX_G, and the first blue pixel PX_B receive a first driving scan signal SC1 through the first driving scan line DSL1 and receive a first sensing scan signal SS1 through the first sensing scan line SSL1. An operation of each of the pixels PX will be described in detail later with reference to FIGS. 6A to 12B.

Referring to FIGS. 1, 2, and 3B, in an embodiment, a plurality of pixels PX may include a plurality of red pixels, a plurality of green pixels, a plurality of blue pixels, and a plurality of white pixels.
A first red pixel PX_R among the plurality of red pixels is connected to a first data line DL1 and a first readout line RL1. A first green pixel PX_G among the plurality of green pixels is connected to a second data line DL2 and a second readout line RL2. A first blue pixel PX_B among the plurality of blue pixels is connected to a third data line DL3 and a third readout line RL3. A first white pixel PX_W among the plurality of white pixels is connected to a fourth data line DL4 and a fourth readout line RL4.

In an embodiment of the invention, the first to fourth readout lines RL1 to RL4 may be electrically connected to a common readout line CRLa. In an embodiment where the first to fourth readout lines RL1 to RL4 are electrically connected to each other through the common readout line CRLa, a sensing circuit 220 may simultaneously sense the characteristics of elements respectively included in the first red pixel PX_R, the first green pixel PX_G, the first blue pixel PX_B, and the first white pixel PX_W. The first pixel PX11 illustrated in FIG. 2 may be one of the first red pixel PX_R, the first green pixel PX_G, the first blue pixel PX_B, and the first white pixel PX_W.

FIG. 4 is a block diagram of the sensing circuit illustrated in FIG. 2. Referring to FIG. 4, an embodiment of the sensing circuit 220 according to the invention may include an initialization circuit unit 221, a sampling circuit unit 222, and an analog-to-digital converter ("ADC") 223. The initialization circuit unit 221 may be electrically connected to the readout lines RL1 to RLm and may initialize the readout lines RL1 to RLm in response to an initialization control signal ICS (see FIG. 6A). The sampling circuit unit 222 may be electrically connected to the readout lines RL1 to RLm and may sample sensing signals respectively outputted from the readout lines RL1 to RLm in response to a sampling control signal SCS (see FIG. 6A). The sampling circuit unit 222 may sample the sensing signals respectively outputted from the readout lines RL1 to RLm during a sampling period and may output the sampled sensing signals as sampled signals SM1 to SMm.

The ADC 223 converts the sampled signals SM1 to SMm outputted from the sampling circuit unit 222 into sensing data SD1 to SDm in a digital form and outputs the sensing data SD1 to SDm. Alternatively, the sensing circuit 220 may further include a scaler disposed between the sampling circuit unit 222 and the ADC 223. The scaler may scale the voltage range of the sampled signals SM1 to SMm outputted from the sampling circuit unit 222 according to the input voltage range of the ADC 223.
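The scaler's job is a linear range mapping; a minimal sketch, with both voltage ranges assumed for illustration:

# Map a sampled voltage range onto the ADC input range; the specific
# ranges below are illustrative assumptions, not values from the patent.
SAMPLE_RANGE = (-4.0, 6.0)   # sampled-signal swing (V), assumed
ADC_RANGE    = (0.0, 1.0)    # ADC input range (V), assumed

def scale(v: float) -> float:
    """Linearly map a sampled voltage into the ADC input range."""
    s_lo, s_hi = SAMPLE_RANGE
    a_lo, a_hi = ADC_RANGE
    return a_lo + (v - s_lo) * (a_hi - a_lo) / (s_hi - s_lo)

print(f"{scale(-4.0):.2f} {scale(1.0):.2f} {scale(6.0):.2f}")   # 0.00 0.50 1.00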
FIG. 5 is a plan view of a display device according to an embodiment of the invention. Referring to FIG. 1 and FIG. 5, an embodiment of the display panel DP includes a display area DA which displays an image and a non-display area NDA adjacent to the display area DA. The display area DA is an area in which an image is substantially displayed, and the non-display area NDA is a bezel area in which an image is not displayed. FIG. 5 illustrates an embodiment having a structure in which the non-display area NDA is disposed to surround the display area DA, but an embodiment of the invention is not limited thereto. In an embodiment, the non-display area NDA may be disposed on at least one side of the display area DA. The plurality of driving scan lines DSL1 to DSLn, the plurality of sensing scan lines SSL1 to SSLn, the plurality of data lines DL1 to DLm, the plurality of readout lines RL1 to RLm, and the plurality of pixels PX illustrated in FIG. 1 are disposed in the display area DA.

For convenience of illustration, FIG. 5 illustrates only the plurality of driving scan lines DSL1 to DSLn and the plurality of sensing scan lines SSL1 to SSLn.

In an embodiment, the source driver 200 illustrated in FIG. 2 may be formed in the form of a plurality of chips. The source driver 200 may be provided in plural. In such an embodiment, the display device DD may include a plurality of source driving chips 201, 202, 203, and 204 in which the source drivers 200 are respectively embedded. The data driver 210 (see FIG. 2) and the sensing circuit 220 (see FIG. 2) may be disposed in each of the source driving chips 201, 202, 203, and 204.

The display device DD may further include a plurality of flexible films FCB1, FCB2, FCB3, and FCB4 connected to the display panel DP. The source driving chips 201, 202, 203, and 204 may be respectively mounted on the flexible films FCB1, FCB2, FCB3, and FCB4. The flexible films FCB1, FCB2, FCB3, and FCB4 may be attached to a first side of the display panel DP. The display device DD may further include at least one circuit board PCB coupled to the plurality of flexible films FCB1, FCB2, FCB3, and FCB4. In an embodiment, a single circuit board PCB is provided in the display device DD, but the number of circuit boards PCB is not limited thereto. In an embodiment, the controller 100 (see FIG. 1 and FIG. 2), a voltage generator, and the like may be disposed on the circuit board PCB.

In an embodiment of the invention, the first side of the display panel DP may be a side adjacent to the first driving scan line DSL1 among the plurality of driving scan lines DSL1 to DSLn. A second side of the display panel DP opposite to the first side may be a side adjacent to an n-th driving scan line DSLn among the plurality of driving scan lines DSL1 to DSLn. In an embodiment where the flexible films FCB1, FCB2, FCB3, and FCB4 are disposed adjacent to the first side of the display panel DP, distances between the source driving chips 201, 202, 203, and 204 and the driving scan lines DSL1 to DSLn may be different from each other. In an embodiment, for example, while the first driving scan line DSL1 is spaced apart from the source driving chips 201, 202, 203, and 204 by a first distance d1, the n-th driving scan line DSLn may be spaced apart from the source driving chips 201, 202, 203, and 204 by a second distance d2. Here, the second distance d2 may be longer than the first distance d1.

The plurality of sensing scan lines SSL1 to SSLn may be arranged in parallel with the plurality of driving scan lines DSL1 to DSLn. Accordingly, distances between the source driving chips 201, 202, 203, and 204 and the sensing scan lines SSL1 to SSLn may also be different from each other. In an embodiment, for example, while the first sensing scan line SSL1 is spaced apart from the source driving chips 201, 202, 203, and 204 by a third distance d3, an n-th sensing scan line SSLn may be spaced apart from the source driving chips 201, 202, 203, and 204 by a fourth distance d4. Here, the fourth distance d4 may be longer than the third distance d3.

Referring to FIGS. 2, 4, and 5, the sensing circuit 220 may be embedded in each of the source driving chips 201, 202, 203, and 204. The sensing circuits 220 may be connected to the plurality of readout lines RL1 to RLm. In an embodiment, for example, the first readout line RL1 may transmit sensed sensing data to the sensing circuit 220 when the first driving scan line DSL1 and the first sensing scan line SSL1 operate.
In such an embodiment, the first readout line RL1 may also transmit sensed sensing data to the sensing circuit 220 when the n-th driving scan line DSLn and the n-th sensing scan line SSLn operate. Here, a sensing period in which the first driving scan line DSL1 and the first sensing scan line SSL1 operate may be different from a sensing period in which the n-th driving scan line DSLn and the n-th sensing scan line SSLn operate. In an embodiment of the invention, the sensing period in which the first driving scan line DSL1 and the first sensing scan line SSL1 operate may be included in a first frame, and the sensing period in which the n-th driving scan line DSLn and the n-th sensing scan line SSLn operate may be included in a second frame.

FIG. 6A and FIG. 6B are circuit diagrams illustrating pixels and sensing circuits according to embodiments of the invention. FIG. 6A illustrates an equivalent circuit diagram of an embodiment of the first pixel PX11 of the plurality of pixels PX illustrated in FIG. 1. In such an embodiment, the plurality of pixels PX have a same circuit configuration as each other. Accordingly, for convenience of description, the circuit configuration of the first pixel PX11 will hereinafter be described in detail, and any repetitive detailed description of the remaining pixels will be omitted. In addition, FIG. 6A illustrates some components of the initialization circuit unit 221 and the sampling circuit unit 222 of an embodiment of the sensing circuit 220 illustrated in FIG. 4.

Referring to FIG. 6A, the first pixel PX11 is connected to the first data line DL1, the first driving scan line DSL1, the first sensing scan line SSL1, and the first readout line RL1. The first pixel PX11 includes the light emitting element ED and the pixel driving circuit PXC. The light emitting element ED may be a light emitting diode. In an embodiment of the invention, the light emitting element ED may be an organic light emitting diode including an organic light emitting layer.

The pixel driving circuit PXC includes first to third transistors T1, T2, and T3 and a capacitor Cst. At least one of (i.e., at least one selected from) the first to third transistors T1, T2, and T3 may be a transistor having a low-temperature polycrystalline silicon ("LTPS") semiconductor layer. Each of the first to third transistors T1, T2, and T3 may be an N-type transistor. However, an embodiment of the invention is not limited thereto. Alternatively, each of the first to third transistors T1, T2, and T3 may be a P-type transistor. Alternatively, some of the first to third transistors T1, T2, and T3 may be N-type transistors, and the others may be P-type transistors. In an embodiment, at least one of the first to third transistors T1, T2, and T3 may be a transistor having an oxide semiconductor layer.

The configuration of an embodiment of the pixel driving circuit PXC according to the invention is not limited to the embodiment illustrated in FIG. 6A. The pixel driving circuit PXC illustrated in FIG. 6A is only one embodiment, and the configuration of the pixel driving circuit PXC may be variously modified.

The first transistor T1 is connected between a first driving voltage line VL1 that receives a first driving voltage ELVDD and the light emitting element ED. The first transistor T1 includes a first electrode connected to the first driving voltage line VL1, a second electrode electrically connected to an anode of the light emitting element ED, and a third electrode connected to one end of the capacitor Cst.
Here, a contact point where the anode of the light emitting element ED and the second electrode of the first transistor T1 are connected may be referred to as a first node N1. In this specification, "a transistor is connected to a signal line" means "one electrode of first to third electrodes of the transistor has an integral shape with (or is integrally formed as a single unitary unit with) the signal line or is connected to the signal line through a connection electrode". In addition, "a transistor is electrically connected to another transistor" means "one electrode of first to third electrodes of the transistor has an integral shape with (or is integrally formed as a single unitary unit with) one electrode of first to third electrodes of the other transistor or is connected to the one electrode of the first to third electrodes of the other transistor through a connection electrode".

The first transistor T1 may receive a data signal V_DATA transmitted from the first data line DL1 based on a switching operation of the second transistor T2 and may supply a driving current Id to the light emitting element ED.

The second transistor T2 is connected between the first data line DL1 and the third electrode of the first transistor T1. The second transistor T2 includes a first electrode connected to the first data line DL1, a second electrode connected to the third electrode of the first transistor T1, and a third electrode connected to the first driving scan line DSL1. The second transistor T2 may be turned on in response to the first driving scan signal SC1 transmitted through the first driving scan line DSL1 to transmit, to the third electrode of the first transistor T1, the data signal V_DATA transmitted from the first data line DL1.

The third transistor T3 is connected between the second electrode of the first transistor T1 and the first readout line RL1. The third transistor T3 includes a first electrode connected to the first node N1, a second electrode connected to the first readout line RL1, and a third electrode connected to the first sensing scan line SSL1. The third transistor T3 may be turned on in response to the first sensing scan signal SS1 received through the first sensing scan line SSL1 to electrically connect the first readout line RL1 and the first node N1.

The one end of the capacitor Cst is connected to the third electrode of the first transistor T1, and the other end thereof is connected to the first node N1. A cathode of the light emitting element ED may be connected to a second driving voltage line VL2 that transmits a second driving voltage ELVSS. The second driving voltage ELVSS may have a lower voltage level than the first driving voltage ELVDD.

The sensing circuit 220 (see FIG. 2) may be connected to the plurality of readout lines RL1 to RLm. The sensing circuit 220 may receive sensing data from the plurality of readout lines RL1 to RLm. The initialization circuit unit 221 illustrated in FIG. 4 may include a plurality of initialization transistors respectively connected to the plurality of readout lines RL1 to RLm. Although FIG. 6A illustrates only an initialization transistor IT1 connected to the first readout line RL1, the initialization circuit unit 221 may further include the initialization transistors respectively connected to the remaining readout lines RL2 to RLm among the readout lines RL1 to RLm illustrated in FIG. 1. The sampling circuit unit 222 illustrated in FIG. 4 may include a plurality of sampling transistors respectively connected to the plurality of readout lines RL1 to RLm.
Although FIG. 6A illustrates only a sampling transistor ST1 connected to the first readout line RL1, the sampling circuit unit 222 may further include the sampling transistors respectively connected to the remaining readout lines RL2 to RLm among the readout lines RL1 to RLm illustrated in FIG. 1.

As illustrated in FIG. 6B, in an alternative embodiment of a sensing circuit 220-1 according to the invention, a sampling circuit unit 222a may further include a sampling capacitor Cp connected to the first readout line RL1 through a sampling transistor ST1. The sampling capacitor Cp may store a signal sampled through the sampling transistor ST1. Although FIG. 6B illustrates only the sampling capacitor Cp connected to the first readout line RL1, the sampling circuit unit 222a may further include sampling capacitors respectively connected to the remaining readout lines RL2 to RLm among the readout lines RL1 to RLm illustrated in FIG. 1. Referring to FIG. 6B, in such an embodiment, a line capacitor C1 may be connected to the first readout line RL1. The line capacitor C1 may be a parasitic capacitor formed in the display panel DP (see FIG. 1) by the first readout line RL1.

In an embodiment, as shown in FIGS. 6A and 6B, the initialization transistor IT1 may include a first electrode that receives an initialization voltage VINIT, a second electrode connected to the first readout line RL1, and a third electrode that receives the initialization control signal ICS. Here, a contact point to which the first readout line RL1 and the initialization transistor IT1 are connected may be referred to as a second node N2. The initialization transistor IT1 may initialize the potential of the first readout line RL1 to the initialization voltage VINIT in response to the initialization control signal ICS. In an embodiment of the invention, the initialization voltage VINIT may have a lower voltage level than the second driving voltage ELVSS.

The sampling transistor ST1 includes a first electrode connected to the second node N2, a second electrode connected to the ADC 223 (see FIG. 4), and a third electrode that receives the sampling control signal SCS. Here, the sampling transistor ST1 may receive the sensing signal outputted from the first readout line RL1 in response to the sampling control signal SCS. The sampling circuit units 222 or 222a may further include various circuit elements (e.g., the sampling capacitor Cp) for sampling the sensing signals, in addition to the sampling transistor ST1. The sampled signals sampled through the sampling circuit units 222 and 222a may be transmitted to the ADC 223.

FIG. 7 is a waveform diagram for describing an operation of the pixel illustrated in FIG. 6A. FIG. 8A is a waveform diagram for describing operations of the pixel and a sensing circuit in the first blank period illustrated in FIG. 7, and FIG. 8B is a waveform diagram for describing operations of the pixel and a sensing circuit in the second blank period illustrated in FIG. 7.

Referring to FIGS. 1, 6A, and 7, the display device DD displays an image through the display panel DP. A time unit (period or duration) in which the display panel DP displays a frame image may be referred to as a frame. When an operating frequency of the display panel DP is about 60 hertz (Hz), about 60 frames may occur in about one second, and the time corresponding to each of the frames may be about 16.67 milliseconds (ms). When the operating frequency of the display panel DP is about 120 Hz, about 120 frames may occur in about one second, and the time corresponding to each of the frames may be about 8.3 ms.
The period of each of the frames may be determined by a vertical synchronization signal Vsync. FIG. 7 illustrates two frames (hereinafter, referred to as first and second frames F1 and F2) among the frames for convenience of illustration and description. Each of the frames F1 and F2 may include a corresponding one of display periods DT1 and DT2 and a corresponding one of blank periods BT1 and BT2. The display periods DT1 and DT2 may be periods in which an image is substantially displayed, and the blank periods BT1 and BT2 may be periods which are disposed between two adjacent display periods (e.g., the display periods DT1 and DT2) and in which no image is substantially displayed. In an embodiment of the invention, the blank periods BT1 and BT2 may be used as sensing periods for sensing the characteristic of each of the pixels PX through the sensing circuit 220.

In an embodiment of the invention, a first frame F1 includes a first display period DT1 and a first blank period BT1, and a second frame F2 includes a second display period DT2 and a second blank period BT2. A data enable signal DE is activated during the first and second display periods DT1 and DT2 and is deactivated during the first and second blank periods BT1 and BT2.

The driving scan signals SC1 to SCn are respectively applied to the driving scan lines DSL1 to DSLn during each of the display periods DT1 and DT2 of the frames F1 and F2. The driving scan signals SC1 to SCn are sequentially activated within each of the display periods DT1 and DT2. In an embodiment, activation periods of the driving scan signals SC1 to SCn may sequentially occur within each of the display periods DT1 and DT2. Each of the driving scan signals SC1 to SCn may have a high level during a corresponding one of the activation periods and have a low level during a deactivation period. However, an embodiment of the invention is not limited thereto. In an embodiment where the second transistor T2 illustrated in FIG. 6A is formed as the P-type transistor, each of the driving scan signals SC1 to SCn may have a low level during the activation period and have a high level during the deactivation period. For convenience of description, the activation periods of the driving scan signals SC1 to SCn in each of the display periods DT1 and DT2 may be defined as driving scan periods DSP1 to DSPn.

The sensing scan signals SS1 to SSn are respectively applied to the sensing scan lines SSL1 to SSLn during each of the display periods DT1 and DT2 of the frames F1 and F2. The sensing scan signals SS1 to SSn are sequentially activated within each of the display periods DT1 and DT2. In an embodiment, activation periods of the sensing scan signals SS1 to SSn may sequentially occur within each of the display periods DT1 and DT2. Each of the sensing scan signals SS1 to SSn may have a high level during a corresponding one of the activation periods and have a low level during a deactivation period. However, an embodiment of the invention is not limited thereto. In an embodiment where the third transistor T3 illustrated in FIG. 6A is formed as the P-type transistor, each of the sensing scan signals SS1 to SSn may have a low level during the activation period and have a high level during the deactivation period. For convenience of description, the activation periods of the sensing scan signals SS1 to SSn in each of the display periods DT1 and DT2 may be defined as sensing scan periods SSP1 to SSPn.
When a first driving scan signal SC1 of the high level is provided through the first driving scan line DSL1 during a first driving scan period DSP1, the second transistor T2 is turned on in response to the first driving scan signal SC1. The data signal V_DATA provided to the first data line DL1 is provided to the first transistor T1 through the turned-on second transistor T2. When the data signal V_DATA is applied to the third electrode of the first transistor T1, the first transistor T1 may be turned on.

In an embodiment of the invention, during the display periods DT1 and DT2, the first readout line RL1 may have a state of being initialized to the initialization voltage VINIT. When a first sensing scan signal SS1 of the high level is provided through the first sensing scan line SSL1 during a first sensing scan period SSP1, the third transistor T3 is turned on in response to the first sensing scan signal SS1. The initialization voltage VINIT supplied to the first readout line RL1 is supplied to the first node N1 through the turned-on third transistor T3.

The first sensing scan period SSP1 of the first sensing scan signal SS1 may overlap the first driving scan period DSP1 of the first driving scan signal SC1. In this case, the data signal V_DATA and the initialization voltage VINIT may be respectively applied to both ends of the capacitor Cst in the overlapping period, and an electric charge corresponding to a voltage difference (V_DATA-VINIT) between the both ends may be stored in the capacitor Cst.

The second driving voltage ELVSS is applied to the cathode of the light emitting element ED. Accordingly, when the initialization voltage VINIT having a voltage level lower than that of the second driving voltage ELVSS is applied to the first node N1, no current flows in the light emitting element ED.

During the deactivation period of the first driving scan signal SC1, the second transistor T2 is turned off, and during the deactivation period of the first sensing scan signal SS1, the third transistor T3 is turned off. Even when the second transistor T2 is turned off during the deactivation period of the first driving scan signal SC1, the first transistor T1 may remain turned on by the electric charge stored in the capacitor Cst. Accordingly, the driving current Id flows through the first transistor T1, and when the voltage level of the anode of the light emitting element ED becomes higher than the voltage level of the cathode by the driving current Id, the driving current Id may flow to the light emitting element ED, and thus the light emitting element ED may emit light.
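A sketch of the resulting drive current under the common square-law transistor model. The disclosure only states that Cst holds V_DATA-VINIT and that the driving current Id follows; the model and all numeric values below are illustrative assumptions:

# Saturation current of T1 after the scan period, for an N-type T1
# whose gate-source voltage is held at (V_DATA - VINIT) by Cst.
K   = 5e-6    # transconductance factor of T1 (A/V^2), assumed
VTH = 0.8     # threshold voltage of T1 (V), assumed

def driving_current(v_data: float, v_init: float = -4.0) -> float:
    vgs = v_data - v_init              # stored across Cst during the overlap
    overdrive = max(0.0, vgs - VTH)
    return 0.5 * K * overdrive ** 2    # square-law saturation current

for v in (0.0, 2.0, 4.0):
    print(f"V_DATA={v:+.1f} V -> Id = {driving_current(v) * 1e6:.1f} uA")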
At least one driving scan signal of the plurality of driving scan signals SC1 to SCn may be activated during each of the blank periods BT1 and BT2 of the frames F1 and F2. In an embodiment of the invention, the first driving scan signal SC1 among the plurality of driving scan signals SC1 to SCn may be activated during the first blank period BT1, and an n-th driving scan signal SCn among the plurality of driving scan signals SC1 to SCn may be activated during the second blank period BT2. However, an embodiment of the invention is not limited thereto. At least one of the remaining driving scan signals SC2 to SCn among the plurality of driving scan signals SC1 to SCn may be activated during the second blank period BT2. At least one of the plurality of driving scan signals SC1 to SCn may be randomly selected for each of the frames and may be activated during a corresponding one of the blank periods BT1 and BT2.

In an embodiment, a driving scan signal activated in each of the blank periods BT1 and BT2 among the driving scan signals SC1 to SCn may include a reference scan period and a rewriting period. In an embodiment of the invention, the first driving scan signal SC1 activated in the first blank period BT1 may include a first reference scan period RSP1 and a first rewriting period RWP1, and the n-th driving scan signal SCn activated in the second blank period BT2 may include a second reference scan period RSP2 and a second rewriting period RWP2. In an embodiment, the first reference scan period RSP1 may have a same duration as the second reference scan period RSP2. In such an embodiment, the first reference scan period RSP1 may have a same duration as the first driving scan period DSP1. However, an embodiment of the invention is not limited thereto. Alternatively, the first reference scan period RSP1 and the first driving scan period DSP1 may have different durations from each other. In an embodiment, for example, the first reference scan period RSP1 may have a duration shorter than that of the first driving scan period DSP1. The first rewriting period RWP1 may have a duration longer than that of the first reference scan period RSP1.

The first rewriting period RWP1 and the second rewriting period RWP2 may have different durations from each other. In an embodiment, as illustrated in FIG. 5, the first driving scan line DSL1 may be spaced apart from the sensing circuit 220 by the first distance d1, and the n-th driving scan line DSLn may be spaced apart from the sensing circuit 220 by the second distance d2. Here, the second distance d2 may be longer than the first distance d1. In an embodiment, the duration of the rewriting period of each of the driving scan signals may be adjusted based on a distance between a corresponding one of the driving scan lines and the sensing circuit 220. In such an embodiment, as the distance between the driving scan line and the sensing circuit 220 increases, the duration of the rewriting period of the driving scan signal applied to the driving scan line may increase.

At least one of the plurality of sensing scan signals SS1 to SSn may be activated during each of the blank periods BT1 and BT2 of the frames F1 and F2. In an embodiment of the invention, the first sensing scan signal SS1 among the plurality of sensing scan signals SS1 to SSn may be activated during the first blank period BT1, and an n-th sensing scan signal SSn among the plurality of sensing scan signals SS1 to SSn may be activated during the second blank period BT2. However, an embodiment of the invention is not limited thereto. At least one of the remaining sensing scan signals SS2 to SSn among the plurality of sensing scan signals SS1 to SSn may be activated during the second blank period BT2. At least one of the plurality of sensing scan signals SS1 to SSn may be randomly selected for each of the frames and may be activated during a corresponding one of the blank periods BT1 and BT2.

In an embodiment, a sensing scan signal activated in each of the blank periods BT1 and BT2 among the sensing scan signals SS1 to SSn may include a readout period. In an embodiment of the invention, the first sensing scan signal SS1 activated in the first blank period BT1 may include a first readout period ROP1, and the n-th sensing scan signal SSn activated in the second blank period BT2 may include a second readout period ROP2. The first readout period ROP1 and the second readout period ROP2 may have different durations from each other. In an embodiment, as illustrated in FIG. 5, the first sensing scan line SSL1 may be spaced apart from the sensing circuit 220 by the third distance d3, and the n-th sensing scan line SSLn may be spaced apart from the sensing circuit 220 by the fourth distance d4. Here, the fourth distance d4 may be longer than the third distance d3. In an embodiment, the duration of the readout period of each of the sensing scan signals may be adjusted based on a distance between a corresponding one of the sensing scan lines and the sensing circuit 220. In such an embodiment, as the distance between the sensing scan line and the sensing circuit 220 increases, the duration of the readout period of the sensing scan signal applied to the sensing scan line may increase.
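The disclosure states only that the rewriting and readout durations grow with a scan line's distance from the sensing circuit 220, which is consistent with allowing more settling time on longer, higher-resistance routes; the sketch below encodes that rule with an assumed linear model (both constants and the example distances are illustrative assumptions):

# Distance-dependent period sizing: a base duration plus a per-distance
# settling allowance. The patent gives no formula; this is one way to
# realize "farther lines get longer periods".
BASE_US   = 8.0    # minimum period duration (microseconds), assumed
PER_MM_US = 0.05   # extra settling allowance per millimetre, assumed

def period_us(distance_mm: float) -> float:
    """Duration allotted to a scan line at a given routing distance."""
    return BASE_US + PER_MM_US * distance_mm

d_near, d_far = 20.0, 600.0   # e.g. first and n-th scan lines (mm), assumed
print(f"near line: {period_us(d_near):.1f} us, far line: {period_us(d_far):.1f} us")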
Referring to FIG. 6A and FIG. 8A, the first driving scan signal SC1 may be activated to the high level during the first reference scan period RSP1 of the first blank period BT1. When the first driving scan signal SC1 of the high level is provided through the first driving scan line DSL1 during the first reference scan period RSP1, the second transistor T2 is turned on in response to the first driving scan signal SC1. As shown in FIG. 8A, a reference data signal Vref is provided to the first data line DL1 during the first reference scan period RSP1 of the first blank period BT1. The reference data signal Vref may be provided to the first transistor T1 through the turned-on second transistor T2. In an embodiment of the invention, the level of the reference data signal Vref may be about 5 volts (V) but is not particularly limited. When the reference data signal Vref is applied to the third electrode of the first transistor T1, the first transistor T1 may be turned on.

The reference data signal Vref is defined as a signal applied to the first data line DL1 for sensing in the first blank period BT1, and the data signal V_DATA is defined as a signal applied to the first data line DL1 for light emission in the first display period DT1. In an embodiment of the invention, while the reference data signal Vref does not affect the light emission of the light emitting element ED, the driving current Id of the light emitting element ED may be determined by the data signal V_DATA in the first display period DT1.

In an embodiment of the invention, during the first reference scan period RSP1 of the first blank period BT1, the first readout line RL1 may have a state of being initialized to the initialization voltage VINIT. In an embodiment, when the initialization transistor IT1 is turned on in response to the initialization control signal ICS, the initialization voltage VINIT may be applied to the first readout line RL1. In an activation period of the initialization control signal ICS (i.e., an initialization period IP), the first readout line RL1 may be initialized to the initialization voltage VINIT, and in a deactivation period of the initialization control signal ICS (i.e., a non-initialization period NIP), the initialization voltage VINIT may not be applied to the first readout line RL1.

The first sensing scan signal SS1 may be activated to the high level during the first readout period ROP1 of the first blank period BT1. When the first sensing scan signal SS1 of the high level is provided through the first sensing scan line SSL1 during the first readout period ROP1, the third transistor T3 is turned on in response to the first sensing scan signal SS1. The initialization voltage VINIT supplied to the first readout line RL1 is supplied to the first node N1.
In an embodiment of the invention, the first readout period ROP1 and the first reference scan period RSP1 may partially overlap each other. In such an embodiment, the reference data signal Vref and the initialization voltage VINIT may be respectively applied to both ends of the capacitor Cst in the overlapping period, and an electric charge corresponding to a voltage difference (Vref−VINIT) between the both ends may be stored in the capacitor Cst. The second driving voltage ELVSS is applied to the cathode of the light emitting element ED. Accordingly, when the initialization voltage VINIT having a voltage level lower than that of the second driving voltage ELVSS is applied to the first node N1, no current flows in the light emitting element ED.

After the first reference scan period RSP1 ends, the sampling control signal SCS may be activated, and the initialization control signal ICS may be deactivated. An activation period of the sampling control signal SCS may be defined as a sampling period SMP. During the sampling period SMP, the sampling circuit unit 222 may receive the sensing signal through the first readout line RL1. At least during the sampling period SMP, the first sensing scan signal SS1 may be activated. That is, the sampling period SMP and the first readout period ROP1 may overlap each other. When the initialization control signal ICS is deactivated after the first reference scan period RSP1 ends, the initialization voltage VINIT may not be applied to the second node N2. Then, potentials VN1 and VN2 of the first and second nodes N1 and N2 may gradually increase.

After the sampling period SMP ends, the first rewriting period RWP1 may start. That is, the first rewriting period RWP1 may start at a first time point t1 at which the sampling period SMP ends. When the first rewriting period RWP1 starts, the data signal V_DATA instead of the reference data signal Vref may be applied again to the first data line DL1. Accordingly, the rise of the potentials VN1 and VN2 of the first and second nodes N1 and N2 may slow or stop at the first time point t1. Thereafter, when the initialization control signal ICS is activated at a second time point t2, the potentials VN1 and VN2 of the first and second nodes N1 and N2 may be discharged by the initialization voltage VINIT. In an embodiment of the invention, the first time point t1 at which the first rewriting period RWP1 starts may precede the second time point t2 at which the initialization control signal ICS is activated.

The first time point t1 at which the sampling period SMP ends and the second time point t2 at which the initialization period IP starts may be apart from each other by a predetermined time interval. Here, a period between the first time point t1 at which the sampling period SMP ends and the second time point t2 at which the initialization period IP starts may be defined as a waiting period ADP. The waiting period ADP may be a period set to secure time for the ADC 223 to effectively process the sampled signals. In an embodiment, the length of the waiting period ADP may be set in consideration of variations in the processing speed of the ADCs 223 among the plurality of source driving chips 201 to 204 (see FIG. 4), and the like. In such an embodiment, as the waiting period ADP is secured as described above, noise may be effectively prevented from being introduced into the ADC 223 while the ADC 223 processes the sampled signals.
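The ordering just described (sampling ends and rewriting starts at t1, initialization resumes at t2, the scan signals drop at t3) can be checked mechanically. A minimal sketch with hypothetical time values; only the ordering and the definition ADP = t2 − t1 come from the description.

```python
def waiting_period(t1: float, t2: float, t3: float) -> float:
    """Return the waiting period ADP for the first blank period BT1.

    t1: sampling period SMP ends and rewriting period RWP1 starts
    t2: initialization control signal ICS is activated (IP starts)
    t3: SC1 and SS1 are deactivated (sensing of RL1 ends)
    """
    assert t1 < t2 < t3, "RWP1 must start before IP, and both end by t3"
    return t2 - t1  # time reserved for the ADC 223 to process samples

print(waiting_period(t1=10.0, t2=12.5, t3=20.0))  # ADP = 2.5 (arbitrary units)
```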
In such an embodiment, because the first time point t1 at which the first rewriting period RWP1 starts precedes the second time point t2 at which the initialization period IP starts, the rise of the potentials VN1 and VN2 of the first and second nodes N1 and N2 may be preemptively blocked before the initialization period IP is entered. Accordingly, after the initialization period IP is entered, the potentials VN1 and VN2 of the first and second nodes N1 and N2 may be rapidly discharged to the initialization voltage VINIT. Thereafter, the first driving scan signal SC1 and the first sensing scan signal SS1 may be simultaneously deactivated at a third time point t3, and thus the sensing period of the first readout line RL1 may end.

Referring to FIG. 6A and FIG. 8B, the n-th driving scan signal SCn may be activated to the high level during the second reference scan period RSP2 of the second blank period BT2. When the n-th driving scan signal SCn of the high level is provided through the n-th driving scan line DSLn during the second reference scan period RSP2, the second transistor T2 is turned on in response to the n-th driving scan signal SCn. In such an embodiment, the reference data signal Vref is provided to the first data line DL1 during the second reference scan period RSP2 of the second blank period BT2. The reference data signal Vref may be provided to the first transistor T1 through the turned-on second transistor T2.

The reference data signal Vref is defined as a signal applied to the first data line DL1 for sensing in the second blank period BT2, and the data signal V_DATA is defined as a signal applied to the first data line DL1 for light emission in the second display period DT2. In an embodiment of the invention, while the reference data signal Vref does not affect the light emission of the light emitting element ED, the driving current Id of the light emitting element ED may be determined by the data signal V_DATA in the second display period DT2. In an embodiment of the invention, during the second reference scan period RSP2 of the second blank period BT2, the first readout line RL1 may have a state of being initialized to the initialization voltage VINIT.

The n-th sensing scan signal SSn may be activated to the high level during the second readout period ROP2 of the second blank period BT2. When the n-th sensing scan signal SSn of the high level is provided through the n-th sensing scan line SSLn during the second readout period ROP2, the third transistor T3 is turned on in response to the n-th sensing scan signal SSn. The initialization voltage VINIT supplied to the first readout line RL1 is supplied to the first node N1. In an embodiment of the invention, the second readout period ROP2 and the second reference scan period RSP2 may partially overlap each other. In such an embodiment, the reference data signal Vref and the initialization voltage VINIT may be respectively applied to both ends of the capacitor Cst in the overlapping period, and an electric charge corresponding to the voltage difference Vref−VINIT between the both ends may be stored in the capacitor Cst. The second driving voltage ELVSS is applied to the cathode of the light emitting element ED. Accordingly, when the initialization voltage VINIT having a voltage level lower than that of the second driving voltage ELVSS is applied to the first node N1, no current flows in the light emitting element ED.
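The no-current condition used repeatedly above follows from the diode orientation: the element conducts only while its anode sits above the cathode voltage ELVSS. A minimal sketch that ignores the diode's forward threshold (a simplifying assumption); the voltage values are hypothetical.

```python
def led_conducts(v_anode: float, v_elvss: float) -> bool:
    """The light emitting element ED conducts only while its anode is
    above the cathode voltage ELVSS (forward threshold ignored for
    simplicity -- an assumption, not part of the description)."""
    return v_anode > v_elvss

VINIT, ELVSS = -3.0, -1.0            # hypothetical levels, VINIT < ELVSS
print(led_conducts(VINIT, ELVSS))    # False: no emission during sensing
print(led_conducts(2.0, ELVSS))      # True: emission in the display period
```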
Thereafter, after the second reference scan period RSP2 ends, the sampling control signal SCS may be activated, and the initialization control signal ICS may be deactivated. An activation period of the sampling control signal SCS may be defined as the sampling period SMP. During the sampling period SMP, the sampling circuit unit 222 may receive the sensing signal through the first readout line RL1. The n-th sensing scan signal SSn may be activated at least during the sampling period SMP. That is, the sampling period SMP and the second readout period ROP2 may overlap each other. When the initialization control signal ICS is deactivated after the second reference scan period RSP2 ends, the initialization voltage VINIT may not be applied to the second node N2. Then, the potentials VN1 and VN2 of the first and second nodes N1 and N2 may gradually increase.

After the sampling period SMP ends, the second rewriting period RWP2 may start. In such an embodiment, the second rewriting period RWP2 may start at the first time point t1 at which the sampling period SMP ends. When the second rewriting period RWP2 starts, the data signal V_DATA instead of the reference data signal Vref may be applied again to the first data line DL1. Accordingly, the rise of the potentials VN1 and VN2 of the first and second nodes N1 and N2 may slow or stop at the first time point t1. Thereafter, when the initialization control signal ICS is activated at the second time point t2, the potentials VN1 and VN2 of the first and second nodes N1 and N2 may be decreased by the initialization voltage VINIT. The n-th driving scan signal SCn and the n-th sensing scan signal SSn may be simultaneously deactivated at a fourth time point t4, and thus the sensing period of the first readout line RL1 may end.

The waiting period ADP may be defined between the first time point t1 at which the sampling period SMP ends and the second time point t2 at which the initialization period IP starts. The waiting period ADP may be a period set to secure time for the ADC 223 to effectively process the sampled signals. In such an embodiment, as the waiting period ADP is secured as described above, noise may be effectively prevented from being introduced into the ADC 223 while the ADC 223 processes the sampled signals.

In such an embodiment, because the first time point t1 at which the second rewriting period RWP2 starts precedes the second time point t2 at which the initialization period IP starts, the rise of the potentials VN1 and VN2 of the first and second nodes N1 and N2 may be preemptively blocked before the initialization period IP is entered. Accordingly, after the initialization period IP is entered, the potentials VN1 and VN2 of the first and second nodes N1 and N2 may be rapidly discharged to the initialization voltage VINIT.

When the first time point t1 at which the second rewriting period RWP2 starts is later than the second time point t2 at which the initialization period IP starts, the potentials VN1 and VN2 of the first and second nodes N1 and N2 may continue to rise, even when the initialization period IP has started, until the second rewriting period RWP2 starts. As the period during which the potentials VN1 and VN2 of the first and second nodes N1 and N2 increase becomes longer, the display device may enter a next display period while in a state in which the potentials VN1 and VN2 of the first and second nodes N1 and N2 are not sufficiently initialized, which may result in the light emitting element ED generating light having a higher or lower luminance than desired.
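How completely the node is initialized depends on how long it is held at VINIT before the next display period. A first-order RC discharge is one plausible model of that settling; the exponential law, the time constant, and the voltages below are illustrative assumptions, since the description states only that a longer discharge window initializes the node more completely.

```python
import math

def node_potential(v_start: float, v_init: float, t: float, tau: float) -> float:
    """First-order RC relaxation of node N1 toward VINIT; an illustrative
    model, not taken from the description."""
    return v_init + (v_start - v_init) * math.exp(-t / tau)

v_start, v_init, tau = 3.0, -3.0, 2.0             # hypothetical values
print(node_potential(v_start, v_init, 1.0, tau))  # ~0.64: far from VINIT
print(node_potential(v_start, v_init, 5.0, tau))  # ~-2.51: close to VINIT
# A longer window leaves N1 much closer to VINIT, which is why rows far
# from the sensing circuit benefit from a longer discharge interval.
```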
In addition, the duration of the second rewriting period RWP2 may be longer than the duration of the first rewriting period RWP1. In particular, an interval from the second time point t2 at which the initialization control signal ICS is activated to the fourth time point t4 at which the second rewriting period RWP2 is deactivated may be longer than an interval from the second time point t2 at which the initialization control signal ICS is activated to the third time point t3 at which the first rewriting period RWP1 is deactivated. Accordingly, as the duration of the second rewriting period RWP2 is extended, a period in which the potential VN1 of the first node N1 is lowered by the initialization voltage VINIT may be further secured. Accordingly, in such an embodiment, dark lines, bright lines, and the like, which occur when the potential VN1 of the first node N1 of each of the pixels connected to the n-th driving scan line DSLn, which is relatively far from the sensing circuit 220, is not sufficiently initialized, may be effectively prevented from being viewed. In such an embodiment, a luminance difference between the pixels connected to the first driving scan line DSL1 and the pixels connected to the n-th driving scan line DSLn may be reduced.

FIG. 9 is a block diagram of a sensing circuit according to an embodiment of the invention, and FIG. 10 is a circuit diagram illustrating one of pixels and a sensing circuit according to an embodiment of the invention. The same or like elements shown in FIGS. 9 and 10 as those in FIGS. 3 and 6A have been labeled with the same reference characters as used above, and any repetitive detailed description thereof will hereinafter be omitted or simplified.

Referring to FIG. 9, an embodiment of a sensing circuit 220a may include a first initialization circuit unit 221a, a second initialization circuit unit 221b, a sampling circuit unit 222, and an ADC 223. The first initialization circuit unit 221a may be electrically connected to the readout lines RL1 to RLm and may initialize the readout lines RL1 to RLm in response to a first initialization control signal ICS1. The second initialization circuit unit 221b may be electrically connected to the readout lines RL1 to RLm and may initialize the readout lines RL1 to RLm in response to a second initialization control signal ICS2. The first initialization circuit unit 221a and the second initialization circuit unit 221b may selectively operate. In an embodiment of the invention, in the blank period, the second initialization circuit unit 221b may operate before the first initialization circuit unit 221a does.

The sampling circuit unit 222 may be electrically connected to the readout lines RL1 to RLm and may sample the sensing signals respectively outputted from the readout lines RL1 to RLm in response to a sampling control signal SCS. The sensing signals respectively outputted from the readout lines RL1 to RLm may be sampled during a sampling period and outputted as sampled signals SM1 to SMm. The ADC 223 converts the sampled signals SM1 to SMm outputted from the sampling circuit unit 222 into sensing data SD1 to SDm in a digital form and outputs the sensing data SD1 to SDm.

Referring to FIG. 10, the first pixel PX11 is connected to the first data line DL1, the first driving scan line DSL1, the first sensing scan line SSL1, and the first readout line RL1. The first pixel PX11 includes the light emitting element ED and the pixel driving circuit PXC. The light emitting element ED may be a light emitting diode.
In an embodiment of the invention, the light emitting element ED may be an organic light emitting diode including an organic light emitting layer. The sensing circuit 220a may be connected to the plurality of readout lines RL1 to RLm. The sensing circuit 220a may receive the sensing signals from the plurality of readout lines RL1 to RLm.

The first initialization circuit unit 221a of the sensing circuit 220a may include a plurality of first initialization transistors ITa respectively connected to the plurality of readout lines RL1 to RLm. The second initialization circuit unit 221b of the sensing circuit 220a may include a plurality of second initialization transistors ITb respectively connected to the plurality of readout lines RL1 to RLm. Although FIG. 10 illustrates first and second initialization transistors ITa and ITb connected to the first readout line RL1, the initialization circuit units 221a and 221b may further include first and second initialization transistors respectively connected to the remaining readout lines RL2 to RLm among the readout lines RL1 to RLm illustrated in FIG. 1. The sampling circuit unit 222 illustrated in FIG. 9 may include a plurality of sampling transistors respectively connected to the plurality of readout lines RL1 to RLm. Although FIG. 10 illustrates a first sampling transistor ST1 connected to the first readout line RL1, the sampling circuit unit 222 may further include sampling transistors respectively connected to the remaining readout lines RL2 to RLm among the readout lines RL1 to RLm illustrated in FIG. 1.

The first initialization transistor ITa may include a first electrode that receives a first initialization voltage VINIT1, a second electrode connected to the first readout line RL1, and a third electrode that receives the first initialization control signal ICS1. Here, a contact point to which the first readout line RL1 and the first initialization transistor ITa are connected may be referred to as a second node N2. The first initialization transistor ITa may initialize the potential of the first readout line RL1 to the first initialization voltage VINIT1 in response to the first initialization control signal ICS1. In an embodiment of the invention, the first initialization voltage VINIT1 may have a lower voltage level than the second driving voltage ELVSS.

The second initialization transistor ITb may include a first electrode that receives a second initialization voltage VINIT2, a second electrode connected to the first readout line RL1, and a third electrode that receives the second initialization control signal ICS2. The first readout line RL1 and the second initialization transistor ITb may be connected at the second node N2. The second initialization transistor ITb may initialize the potential of the first readout line RL1 to the second initialization voltage VINIT2 in response to the second initialization control signal ICS2. In an embodiment of the invention, the second initialization voltage VINIT2 may have a lower voltage level than the second driving voltage ELVSS. In addition, the second initialization voltage VINIT2 may have a lower voltage level than the first initialization voltage VINIT1.

FIG. 11 is a waveform diagram for describing an operation of the pixel illustrated in FIG. 10, FIG. 12A is a waveform diagram for describing operations of the pixel and a sensing circuit in the first blank period illustrated in FIG. 11, and FIG. 12B is a waveform diagram for describing operations of the pixel and a sensing circuit in the second blank period illustrated in FIG. 11.
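Collecting the voltage constraints stated just above for the two initialization voltages gives the ordering VINIT2 < VINIT1 < ELVSS. A one-line sanity check with hypothetical levels:

```python
def check_init_levels(v_init1: float, v_init2: float, v_elvss: float) -> None:
    """Both initialization voltages sit below ELVSS, and VINIT2 sits
    below VINIT1, per the constraints stated in the description."""
    assert v_init2 < v_init1 < v_elvss, "expected VINIT2 < VINIT1 < ELVSS"

check_init_levels(v_init1=-2.0, v_init2=-4.0, v_elvss=-1.0)  # hypothetical
```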
Referring to FIG. 11, at least one of a plurality of driving scan signals SC1 to SCn may be activated during each of the blank periods BT1 and BT2 of the frames F1 and F2. In an embodiment of the invention, a first driving scan signal SC1 among the plurality of driving scan signals SC1 to SCn may be activated during the first blank period BT1, and an n-th driving scan signal SCn among the plurality of driving scan signals SC1 to SCn may be activated during the second blank period BT2. However, an embodiment of the invention is not limited thereto. One of the remaining driving scan signals SC2 to SCn other than the first driving scan signal SC1 among the plurality of driving scan signals SC1 to SCn may be activated during the second blank period BT2.

In an embodiment, a driving scan signal activated in each of the blank periods BT1 and BT2 among the driving scan signals SC1 to SCn may include a reference scan period and a rewriting period. In an embodiment of the invention, the first driving scan signal SC1 activated in the first blank period BT1 may include a first reference scan period RSPa and a first rewriting period RWPa, and the n-th driving scan signal SCn activated in the second blank period BT2 may include a second reference scan period RSPb and a second rewriting period RWPb. The first reference scan period RSPa may have a same duration as the second reference scan period RSPb. In addition, the first reference scan period RSPa may have a same duration as the first driving scan period DSP1. However, an embodiment of the invention is not limited thereto. Alternatively, the first reference scan period RSPa and the first driving scan period DSP1 may have different durations from each other. In an embodiment, for example, the first reference scan period RSPa may have a shorter duration than the first driving scan period DSP1. The first rewriting period RWPa may have a shorter duration than the first reference scan period RSPa. The first rewriting period RWPa and the second rewriting period RWPb may have a same duration as each other.

At least one of a plurality of sensing scan signals SS1 to SSn may be activated during each of the blank periods BT1 and BT2 of the frames F1 and F2. In an embodiment of the invention, a first sensing scan signal SS1 among the plurality of sensing scan signals SS1 to SSn may be activated during the first blank period BT1, and an n-th sensing scan signal SSn among the plurality of sensing scan signals SS1 to SSn may be activated during the second blank period BT2. However, an embodiment of the invention is not limited thereto. One of the remaining sensing scan signals SS2 to SSn other than the first sensing scan signal SS1 among the plurality of sensing scan signals SS1 to SSn may be activated during the second blank period BT2.

In an embodiment, a sensing scan signal activated in each of the blank periods BT1 and BT2 among the sensing scan signals SS1 to SSn may include a readout period. In an embodiment of the invention, the first sensing scan signal SS1 activated in the first blank period BT1 may include a first readout period ROPa, and the n-th sensing scan signal SSn activated in the second blank period BT2 may include a second readout period ROPb. The first readout period ROPa may have a same duration as the second readout period ROPb.

Referring to FIG. 10 and FIG. 12A, the first driving scan signal SC1 may be activated to a high level during the first reference scan period RSPa of the first blank period BT1.
When the first driving scan signal SC1 of the high level is provided through the first driving scan line DSL1 during the first reference scan period RSPa, the second transistor T2 is turned on in response to the first driving scan signal SC1. In such an embodiment, the reference data signal Vref is provided to the first data line DL1 during the first reference scan period RSPa of the first blank period BT1. The reference data signal Vref may be provided to the first transistor T1 through the turned-on second transistor T2. In an embodiment of the invention, the level of the reference data signal Vref may be about 5 V but is not particularly limited.

The reference data signal Vref is defined as a signal applied to the first data line DL1 for sensing in the first blank period BT1, and the data signal V_DATA is defined as a signal applied to the first data line DL1 for light emission in the first display period DT1. In an embodiment of the invention, while the reference data signal Vref does not affect the light emission of the light emitting element ED, the driving current Id of the light emitting element ED may be determined by the data signal V_DATA in the first display period DT1.

In an embodiment of the invention, during the first reference scan period RSPa of the first blank period BT1, the first readout line RL1 may have a state of being initialized to the first initialization voltage VINIT1. In such an embodiment, when the first initialization transistor ITa is turned on in response to the first initialization control signal ICS1, the first initialization voltage VINIT1 may be applied to the first readout line RL1. In an activation period of the first initialization control signal ICS1 (i.e., a first initialization period IP), the first readout line RL1 may be initialized to the first initialization voltage VINIT1, and in a deactivation period of the first initialization control signal ICS1 (i.e., a first non-initialization period NIP), the first initialization voltage VINIT1 may not be applied to the first readout line RL1.

The first sensing scan signal SS1 may be activated to a high level during the first readout period ROPa of the first blank period BT1. When the first sensing scan signal SS1 of the high level is provided through the first sensing scan line SSL1 during the first readout period ROPa, the third transistor T3 is turned on in response to the first sensing scan signal SS1. The first initialization voltage VINIT1 supplied to the first readout line RL1 is supplied to the first node N1.

In an embodiment of the invention, the first readout period ROPa and the first reference scan period RSPa may partially overlap each other. In such an embodiment, the reference data signal Vref and the first initialization voltage VINIT1 may be respectively applied to both ends of the capacitor Cst in the overlapping period, and an electric charge corresponding to a voltage difference Vref−VINIT1 between the both ends may be stored in the capacitor Cst. The second driving voltage ELVSS is applied to the cathode of the light emitting element ED. Accordingly, when the first initialization voltage VINIT1 having a voltage level lower than that of the second driving voltage ELVSS is applied to the first node N1, no current flows in the light emitting element ED.

After the first reference scan period RSPa ends, the sampling control signal SCS may be activated, and the first initialization control signal ICS1 may be deactivated.
An activation period of the sampling control signal SCS may be defined as a sampling period SMP. During the sampling period SMP, the sampling circuit unit 222 may receive the sensing signal through the first readout line RL1. The first sensing scan signal SS1 may be activated at least during the sampling period SMP. That is, the sampling period SMP and the first readout period ROPa may overlap each other. When the first initialization control signal ICS1 is deactivated after the first reference scan period RSPa ends, the first initialization voltage VINIT1 may not be applied to the second node N2. Then, the potentials VN1 and VN2 of the first and second nodes N1 and N2 may gradually increase.

The first rewriting period RWPa may start after the sampling period SMP ends. That is, the first rewriting period RWPa may start at a time point (i.e., a first time point ta) that is delayed by a predetermined time from a time point at which the sampling period SMP ends. When the first rewriting period RWPa starts, the data signal V_DATA instead of the reference data signal Vref may be applied again to the first data line DL1. Accordingly, the rise of the potentials VN1 and VN2 of the first and second nodes N1 and N2 may slow or stop at the first time point ta.

In an embodiment of the invention, the second initialization control signal ICS2 may be activated at the first time point ta. That is, an activation period of the second initialization control signal ICS2 (i.e., a second initialization period IAP1) may overlap the first rewriting period RWPa. When the second initialization transistor ITb is turned on in response to the second initialization control signal ICS2, the second initialization voltage VINIT2 may be applied to the first readout line RL1. Because the second initialization voltage VINIT2 is lower than the first initialization voltage VINIT1, the potentials VN1 and VN2 of the first and second nodes N1 and N2 may be rapidly discharged in the second initialization period IAP1. Thereafter, at a second time point tb, the first initialization control signal ICS1 may be activated, and the second initialization control signal ICS2 may be deactivated. Then, the potentials VN1 and VN2 of the first and second nodes N1 and N2 may be lowered by the first initialization voltage VINIT1.

A waiting period ADP may be defined between the time point at which the sampling period SMP ends and the first time point ta at which the second initialization control signal ICS2 is activated. The waiting period ADP may be a period set to secure time for the ADC 223 to effectively process the sampled signals. As the waiting period ADP is secured as described above, noise may be effectively prevented from being introduced into the ADC 223 while the ADC 223 processes the sampled signals.

In an embodiment, as described above, after performing a first initialization process of preemptively lowering the potential VN1 of the first node N1 to the second initialization voltage VINIT2 through the second initialization circuit unit 221b, a second initialization process of lowering the potential VN1 of the first node N1 to the first initialization voltage VINIT1 may be performed. Accordingly, dark lines, bright lines, and the like, which occur when the potential VN1 of the first node N1 is not sufficiently initialized, may be effectively prevented from being viewed.
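The two-step sequence can be pictured with the same first-order relaxation model used earlier: a fast pull toward the deeper VINIT2 during IAP1, then settling at VINIT1 during IP. All numeric values, and the RC law itself, are illustrative assumptions rather than circuit data.

```python
import math

def relax(v: float, target: float, t: float, tau: float) -> float:
    """First-order relaxation of a node potential toward a target voltage."""
    return target + (v - target) * math.exp(-t / tau)

v0, v_init1, v_init2, tau = 2.0, -2.0, -4.0, 1.0   # hypothetical values
v_after_iap1 = relax(v0, v_init2, t=2.0, tau=tau)  # ~-3.19: deep, fast pull
v_after_ip = relax(v_after_iap1, v_init1, t=2.0, tau=tau)  # ~-2.16: settles
print(v_after_iap1, v_after_ip)  # node ends near VINIT1 before display
```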
Referring to FIG. 10 and FIG. 12B, the n-th driving scan signal SCn may be activated to the high level during the second reference scan period RSPb of the second blank period BT2. When the n-th driving scan signal SCn of the high level is provided through the n-th driving scan line DSLn during the second reference scan period RSPb, the second transistor T2 is turned on in response to the n-th driving scan signal SCn. In such an embodiment, the reference data signal Vref is provided to the first data line DL1 during the second reference scan period RSPb of the second blank period BT2. The reference data signal Vref may be provided to the first transistor T1 through the turned-on second transistor T2.

In an embodiment of the invention, during the second reference scan period RSPb of the second blank period BT2, the first readout line RL1 may have a state of being initialized to the first initialization voltage VINIT1. The n-th sensing scan signal SSn may be activated to the high level during the second readout period ROPb of the second blank period BT2. When the n-th sensing scan signal SSn of the high level is provided through the n-th sensing scan line SSLn during the second readout period ROPb, the third transistor T3 is turned on in response to the n-th sensing scan signal SSn. The first initialization voltage VINIT1 supplied to the first readout line RL1 is supplied to the first node N1.

In an embodiment of the invention, the second readout period ROPb and the second reference scan period RSPb may partially overlap each other. In such an embodiment, the reference data signal Vref and the first initialization voltage VINIT1 may be respectively applied to both ends of the capacitor Cst in the overlapping period, and an electric charge corresponding to the voltage difference Vref−VINIT1 between the both ends may be stored in the capacitor Cst. The reference data signal Vref is defined as a signal applied to the first data line DL1 for sensing in the second blank period BT2, and the data signal V_DATA is defined as a signal applied to the first data line DL1 for light emission in the second display period DT2. In an embodiment of the invention, while the reference data signal Vref does not affect the light emission of the light emitting element ED, the driving current Id of the light emitting element ED may be determined by the data signal V_DATA in the second display period DT2. The second driving voltage ELVSS is applied to the cathode of the light emitting element ED. Accordingly, when the first initialization voltage VINIT1 having a voltage level lower than that of the second driving voltage ELVSS is applied to the first node N1, no current flows in the light emitting element ED.

After the second reference scan period RSPb ends, the sampling control signal SCS may be activated, and the first initialization control signal ICS1 may be deactivated. The activation period of the sampling control signal SCS may be defined as the sampling period SMP. During the sampling period SMP, the sampling circuit unit 222 may receive the sensing signal through the first readout line RL1. The n-th sensing scan signal SSn may be activated at least during the sampling period SMP. That is, the sampling period SMP and the second readout period ROPb may overlap each other. When the first initialization control signal ICS1 is deactivated after the second reference scan period RSPb ends, the first initialization voltage VINIT1 may not be applied to the second node N2. Then, the potentials VN1 and VN2 of the first and second nodes N1 and N2 may gradually increase. After the sampling period SMP ends, the second rewriting period RWPb may start.
That is, the second rewriting period RWPb may start at a time point (i.e., the first time point ta) that is delayed by a predetermined time from a time point at which the sampling period SMP ends. When the second rewriting period RWPb starts, the data signal V_DATA instead of the reference data signal Vref may be applied again to the first data line DL1. Accordingly, the rise of the potentials VN1 and VN2 of the first and second nodes N1 and N2 may slow or stop at the first time point ta.

In an embodiment of the invention, the second initialization control signal ICS2 may be activated at a third time point td. That is, the activation period of the second initialization control signal ICS2 (i.e., a third initialization period IAP2) may overlap the second rewriting period RWPb. A waiting period ADP may be defined between the time point at which the sampling period SMP ends and the third time point td at which the second initialization control signal ICS2 is activated. The waiting period ADP may be a period set to secure time for the ADC 223 to effectively process the sampled signals. As the waiting period ADP is secured as described above, noise may be effectively prevented from being introduced into the ADC 223 while the ADC 223 processes the sampled signals.

When the second initialization transistor ITb is turned on in response to the second initialization control signal ICS2, the second initialization voltage VINIT2 may be applied to the first readout line RL1. Because the second initialization voltage VINIT2 is lower than the first initialization voltage VINIT1, the potentials VN1 and VN2 of the first and second nodes N1 and N2 may be rapidly discharged in the third initialization period IAP2. Here, the duration of the third initialization period IAP2 may be longer than the duration of the second initialization period IAP1. Accordingly, dark lines, bright lines, and the like, which occur when the potential VN1 of the first node N1 of each of the pixels connected to the n-th driving scan line DSLn, which is relatively far from the sensing circuit 220a, is not sufficiently initialized, may be effectively prevented from being viewed. In such an embodiment, a luminance difference between the pixels connected to the first driving scan line DSL1 and the pixels connected to the n-th driving scan line DSLn may be reduced.

Thereafter, at the second time point tb, the first initialization control signal ICS1 may be activated, and the second initialization control signal ICS2 may be deactivated. Then, the potentials VN1 and VN2 of the first and second nodes N1 and N2 may be lowered by the first initialization voltage VINIT1.

According to an embodiment of the invention, when the characteristics of the pixel are sensed through the sensing circuit, dark lines or bright lines may be effectively prevented from being viewed on the display panel by securing sufficient time for discharging the potential of the first node of the pixel. In such an embodiment, dark lines and bright lines, which may occur due to the difference in the amount of discharge in the first node caused by the distance between the sensing circuit and the pixels, may be effectively prevented from being viewed on the display panel.

The invention should not be construed as being limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete and will fully convey the concept of the invention to those skilled in the art.
While the invention has been particularly shown and described with reference to embodiments thereof, it will be understood by those of ordinary skill in the art that various changes in form and details may be made therein without departing from the spirit or scope of the invention as defined by the following claims.
DESCRIPTION OF EXEMPLARY EMBODIMENTS

An electro-optical device according to exemplary embodiments of the present disclosure will be described below with reference to the accompanying figures. Note that, in each figure, the size and scale of each unit differ from the actual size and scale as appropriate. Moreover, the exemplary embodiments described below are suitable specific examples, and various technically preferable limitations are applied, but the scope of the disclosure is not limited to these modes unless the following description specifically states that the disclosure is so limited.

First Exemplary Embodiment

FIG. 1 is a diagram illustrating a configuration of a system including an electro-optical device according to a first exemplary embodiment. As illustrated in the figure, a system 1 includes a host device 250 and an electro-optical device 10. The host device 250 generates video data Vid in which images caused to be displayed by the electro-optical device 10 are continuous. The host device 250 supplies the generated video data Vid to the electro-optical device 10 together with a control signal Ctrl such as a synchronization signal via an FPC substrate 194. Note that FPC is an abbreviation for Flexible Printed Circuits. Note that the control signal Ctrl includes a row address described below.

FIG. 2 is a perspective view illustrating a configuration of the electro-optical device 10. The electro-optical device 10 is a micro display panel configured to display a color image, for example, in a head-mounted display, and a plurality of pixel circuits, a driving circuit for driving the pixel circuits, and the like are formed at a semiconductor substrate. The semiconductor substrate is typically a silicon substrate, but other semiconductor substrates may be used. The electro-optical device 10 is housed in a frame-shaped case 192 that opens in a display region 100, and one end of the FPC substrate 194 is coupled to the electro-optical device 10. Another end of the FPC substrate 194 is provided with a plurality of terminals 196 for coupling to the host device 250. In the figure, an X direction indicates an extension direction of a scanning line in the electro-optical device 10, and a Y direction indicates an extension direction of a data line. A two-dimensional plane defined by the X direction and the Y direction is a substrate surface of the semiconductor substrate. A Z direction is perpendicular to the X direction and the Y direction, and indicates an emission direction of light emitted from a display element.

FIG. 3 is a block diagram illustrating a configuration of a main part of the electro-optical device 10. As illustrated in this figure, the electro-optical device 10 includes a control circuit 20, a data signal output circuit 30, a switch group 40, a capacitance element group 50, an initialization circuit 60, an auxiliary circuit 70, the display region 100, and a scanning line drive circuit 120. In the display region 100, for example, scanning lines 12 in 1080 rows are provided along the X direction, and data lines 14 in 5760 (=1920×3) columns are provided along the Y direction so as to be mutually electrically insulated from the respective scanning lines 12. Pixel circuits 110 described later are provided corresponding to intersections of the scanning lines 12 in the 1080 rows and the data lines 14 in the 5760 columns. The data lines 14 form one group every three columns as illustrated in FIG. 5.
The three pixel circuits 110 corresponding to intersections of the scanning line 12 in one certain row and the data lines 14 in three columns belonging to the same group respectively correspond to R (red), G (green), and B (blue) pixels, and these three pixels represent one dot of a color image to be displayed. That is, in the exemplary embodiment, a color of one dot is represented with an additive color mixture by the three pixel circuits 110 corresponding to RGB.

Referring again to FIG. 3, the control circuit 20 controls each unit based on the video data Vid and the control signal Ctrl supplied from the host device 250. The video data Vid supplied in synchronization with a synchronization signal included in the control signal Ctrl specifies a gray scale level of a pixel in an image to be displayed by the electro-optical device 10, for example, with eight bits per RGB. Furthermore, the synchronization signal includes a vertical synchronization signal instructing a start of vertical scanning of the video data Vid, a horizontal synchronization signal instructing a start of horizontal scanning, and a dot clock signal indicating timing for one pixel of the video data Vid.

The control circuit 20 generates, as logical signals, control signals Gref, Gcp, /Drst, Gorst, /Gini, L_Ctr, Sel(1) to Sel(1920), and a clock signal Clk, in order to control each unit. Further, the control circuit 20 extracts Adrs, including row addresses Adrs1 and Adrs2, from the control signal Ctrl, and supplies Adrs to the scanning line drive circuit 120. Note that, although omitted in FIG. 3, the control circuit 20 outputs a control signal /Gcp in a logical inversion relationship with the control signal Gcp, a control signal /Gref in a logical inversion relationship with the control signal Gref, and control signals /Sel(1) to /Sel(1920) that are in a logical inversion relationship with Sel(1) to Sel(1920), respectively.

In these logical signals, an L level corresponds to 0 V, which is a reference of voltage zero, and an H level corresponds to, for example, 6.0 V. Furthermore, control signals /Gel(1) to /Gel(1080) for light emission described below each take three levels including an M level in addition to the L level and the H level. The M level is a level of a value between the L level and the H level, and corresponds to 4 to 5 V, for example.

The scanning line drive circuit 120 is a circuit for driving the pixel circuits 110 arrayed in the 1080 rows and the 5760 columns with one row as a unit, and outputs, in addition to a scanning signal, although omitted in FIG. 3, various control signals in synchronization with the scanning signal.

The data signal output circuit 30 outputs a data signal toward the data line 14. Specifically, the data signal output circuit 30 outputs a data signal of a voltage in accordance with a gray scale level of each pixel. Note that, in the present exemplary embodiment, the voltage amplitude of a data signal output from the data signal output circuit 30 is compressed before the signal is supplied to the data line 14. Therefore, a data signal after compression also has a voltage in accordance with a gray scale level of a pixel. Furthermore, the data signal output circuit 30 also has a function of parallel-converting serially supplied video data Vdat to a plurality of phases (in this example, "three" phases corresponding to the number of columns of data lines 14 forming a group) and outputting the plurality of phases. For the sake of brevity, the "three" phases will be used in the following.
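Behaviorally, the serial-to-three-phase conversion just described amounts to regrouping the serial stream Vdat into triples that are driven out in parallel as Vd(1), Vd(2), and Vd(3). A minimal behavioral sketch (not RTL; the function name is made up for illustration):

```python
from typing import Iterable, List, Tuple

def to_three_phases(vdat: Iterable[int]) -> List[Tuple[int, ...]]:
    """Regroup serially supplied video data into 3-tuples, mirroring the
    shift-register/latch pair feeding the three D/A converters."""
    buf = list(vdat)
    assert len(buf) % 3 == 0, "a row is a whole number of RGB pixel triples"
    return [tuple(buf[i:i + 3]) for i in range(0, len(buf), 3)]

# Six serial words become two successive parallel three-phase outputs.
print(to_three_phases([10, 20, 30, 40, 50, 60]))
# [(10, 20, 30), (40, 50, 60)]
```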
The data signal output circuit 30 includes a shift register 31, a latch circuit 32, a D/A conversion circuit group 33, and an amplifier group 34. The shift register 31 sequentially transfers the video data Vdat supplied serially in synchronization with the clock signal Clk, and stores the video data Vdat for a single row, that is, for 5760 pieces from a viewpoint of the number of pixel circuits 110. Note that, in the present exemplary embodiment, in order to convert the video data Vdat to the three phases for outputting, the shift register 31 sequentially stores the video data Vdat for every three phases (three pixels). The latch circuit 32 latches the video data Vdat stored in the shift register 31 every three phases in accordance with a control signal L_Ctr, and parallel-converts the latched video data Vdat into three phases according to the control signal L_Ctr for outputting. The D/A conversion circuit group 33 includes three D/A (Digital to Analog) converters. The video data Vdat in the three phases output from the latch circuit 32 is converted to analog signals by the three D/A converters. The amplifier group 34 includes three amplifiers. The analog signals in the three phases output from the D/A conversion circuit group 33 are amplified by the three amplifiers, and output as data signals Vd(1), Vd(2), and Vd(3). The control circuit 20 outputs the control signals Sel(1) to Sel(1920) that are sequentially and exclusively set to the H level in a compensation period preceding a writing period as described below.

FIG. 4 is a circuit diagram illustrating a configuration of the switch group 40, the capacitance element group 50, the initialization circuit 60, the auxiliary circuit 70, and the display region 100, in the electro-optical device 10. In the display region 100, as described above, the pixel circuits 110 are provided, in a matrix, corresponding to the intersections of the scanning lines 12 and the data lines 14. Specifically, the pixel circuits 110 are provided corresponding to the intersections of the scanning lines 12 in the 1080 rows and the data lines 14 in the 5760 columns. Thus, a color image represented by the electro-optical device 10 has resolution of vertical 1080 dots by horizontal 1920 dots.

In order to distinguish the rows (lines) in the matrix array, the rows may be referred to as 1st, 2nd, 3rd, . . . , 1079-th, and 1080-th rows in order from above in the figure, respectively. Similarly, in order to distinguish the columns in the matrix, the columns may be referred to as 1st, 2nd, 3rd, . . . , 5759-th, and 5760-th columns in order from left, respectively. In the present exemplary embodiment, as described above, the data lines 14 are grouped every three columns. When an integer j from 1 to 1920 is used in order to generalize and describe the group, the data lines 14 in a total of three columns of a (3j−2)-th column, a (3j−1)-th column, and a (3j)-th column belong to a j-th group counted from left. Note that, regardless of the group, in order to generalize and describe the data lines 14, an integer k from 1 to 5760 is used with the notation "the data line 14 in a k-th column counted from left" in some cases.

The scanning line drive circuit 120 supplies scanning signals /Gwr(1), /Gwr(2), . . . , /Gwr(1079), and /Gwr(1080) in this order to the scanning lines 12 in the 1st, 2nd, 3rd, . . . , 1079-th, and 1080-th rows, respectively. Note that details of the scanning line drive circuit 120 will be described below. In the electro-optical device 10, a data transfer line 14a is provided corresponding to the data line 14.
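The (3j−2, 3j−1, 3j) layout described above implies a simple arithmetic mapping between a column index k and its group j and feeding phase. A small sketch of that bookkeeping (the helper function is illustrative, not part of the circuit):

```python
def group_and_phase(k: int) -> tuple:
    """Map a data-line column k (1..5760) to its group j (1..1920) and
    the phase that feeds it: 1 -> Vd(1), 2 -> Vd(2), 3 -> Vd(3)."""
    j = (k + 2) // 3
    phase = (k - 1) % 3 + 1
    return j, phase

assert group_and_phase(1) == (1, 1)        # (3j-2)-th column of group 1
assert group_and_phase(2) == (1, 2)        # (3j-1)-th column of group 1
assert group_and_phase(5760) == (1920, 3)  # last column: group 1920, Vd(3)
```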
The switch group 40 is a collection of transmission gates 45 provided for the respective data transfer lines 14a. Of these, input ends of the respective 1920 transmission gates 45 corresponding to the data transfer lines 14a in the 1st, 4th, 7th, . . . , 5758-th columns are commonly coupled. Note that the data signal Vd(1) is supplied to the input end for each pixel, in time series. Additionally, input ends of the respective 1920 transmission gates 45 corresponding to the data transfer lines 14a in the 2nd, 5th, 8th, . . . , and 5759-th columns are commonly coupled, and the data signal Vd(2) is supplied for each pixel, in time series. Similarly, input ends of the respective 1920 transmission gates 45 corresponding to the data transfer lines 14a in the 3rd, 6th, 9th, . . . , and 5760-th columns are commonly coupled, and the data signal Vd(3) is supplied for each pixel, in time series.

An output end of the transmission gate 45 in one certain column is coupled to an end of the data transfer line 14a in the column. The three transmission gates 45 corresponding to the (3j−2)-th, (3j−1)-th, and (3j)-th columns belonging to the j-th group are each brought into an on-state between an input end and an output end, when a control signal Sel(j) is at the H level (when a control signal /Sel(j) is at the L level). Note that in FIG. 4, due to space limitations, only a first group and a 1920-th group are illustrated, and other groups are omitted. Also, the transmission gate 45 in FIG. 4 is simplified and denoted as a mere switch in FIG. 3.

In the present description, the "on-state" of a switch, a transistor, or a transmission gate refers to a state where both ends of the switch, a source node and a drain node in the transistor, or both ends of the transmission gate are electrically coupled to be brought into a low-impedance state. In addition, an "off-state" of a switch, a transistor, or a transmission gate refers to a state where both ends of the switch, a source node and a drain node, or both ends of the transmission gate are not electrically coupled and are brought into a high-impedance state. Also, "electrically coupled" or simply "coupled" in the present description means direct or indirect coupling or joint between two or more elements.

The capacitance element group 50 is a collection of capacitance elements 51 provided for the respective data transfer lines 14a. Here, one end of a capacitance element 51 corresponding to the data transfer line 14a in one certain column is coupled to one end of the data transfer line 14a, and another end of the capacitance element 51 is grounded to a constant potential, for example, to a potential serving as a reference of voltage zero.

The auxiliary circuit 70 is a collection of transmission gates 72 and 73 provided in the respective columns and capacitance elements 74 and 75 provided in the respective columns. Here, the transmission gate 72 corresponding to a certain column is brought into the on-state between an input end and an output end, when the control signal Gcp is at the H level (when the control signal /Gcp is at the L level). An input end of the transmission gate 72 corresponding to a certain column is coupled to another end of the data transfer line 14a in the column, and an output end of the transmission gate 72 corresponding to the column is coupled to an output end of the transmission gate 73 corresponding to the column, one end of the capacitance element 74 corresponding to the column, and one end of the capacitance element 75 corresponding to the column.
The transmission gate 73 corresponding to one certain column is brought into the on-state between an input end and an output end, when the control signal Gref is at the H level (when the control signal /Gref is at the L level). An input end of the transmission gate 73 corresponding to one certain column is applied with a voltage Vref. Also, another end of the capacitance element 75 corresponding to one certain column is grounded to a constant potential, for example, to a potential serving as a reference of voltage zero. Another end of the capacitance element 74 corresponding to one certain column is coupled to one end of the data line 14 corresponding to the column.

The initialization circuit 60 is a collection of P channel MOS type transistors 66 and 68, and N channel MOS type transistors 67, provided for the respective data lines 14. A gate node of the transistor 66 corresponding to the data line 14 in one certain column is supplied with the control signal /Drst, a source node of the transistor 66 is applied with a voltage Vel, and a drain node of the transistor 66 is coupled to the data line 14 in the column. Further, a gate node of the transistor 67 corresponding to the data line 14 in one certain column is supplied with the control signal Gorst, a source node of the transistor 67 is applied with a voltage Vorst, and a drain node of the transistor 67 is coupled to the data line 14 in the column. A gate node of the transistor 68 corresponding to the data line 14 in one certain column is supplied with the control signal /Gini, a source node of the transistor 68 is applied with a voltage Vini, and a drain node of the transistor 68 is coupled to the data line 14 in the column.

FIG. 6 is a diagram illustrating a configuration of the pixel circuit 110. The pixel circuits 110 arrayed in the 1080 rows by the 5760 columns are electrically identical to each other. Thus, the pixel circuits 110 will be explained by using one pixel circuit 110 corresponding to an i-th row and the k-th column as a representative. As illustrated in the figure, the pixel circuit 110 includes P channel MOS type transistors 121 to 124, an OLED 130, and a capacitance element 140. Further, the pixel circuit 110 in the i-th row is supplied with, in addition to a scanning signal /Gwr(i), control signals /Gcmp(i) and /Gel(i) from the scanning line drive circuit 120.

The OLED 130 is an example of a display element, and a pixel electrode 131 and a common electrode 133 sandwich a light emission function layer 132. The pixel electrode 131 functions as an anode, and the common electrode 133 functions as a cathode. Note that the common electrode 133 has light reflectivity and optical transparency. When a current flows from the anode toward the cathode in the OLED 130, holes injected from the anode and electrons injected from the cathode are recombined in the light emission function layer 132 to generate excitons and generate white light. In a case of color display as in the present exemplary embodiment, the generated white light resonates in an optical resonator configured with, for example, a reflective layer and a semi-reflective semi-transmissive layer (not illustrated), and is emitted with a resonance wavelength that is set corresponding to one of the colors of R (red), G (green), and B (blue). A color filter corresponding to the color is provided on an emission side of the light from the optical resonator. Thus, the emitted light from the OLED 130 is subjected to coloration by the optical resonator and the color filter, and is visually recognized by an observer. Note that the optical resonator is not illustrated.
In addition, when the electro-optical device 10 simply displays a monochromatic image with only brightness and darkness, the above color filter is omitted.

In the transistor 121 of the pixel circuit 110 in the i-th row and the k-th column, a gate node g is coupled to a drain node of the transistor 122, a source node is coupled to a power supplying line 116 having the voltage Vel, and a drain node is coupled to a source node of the transistor 123 and a source node of the transistor 124. Note that, in the capacitance element 140, one end is coupled to the gate node g of the transistor 121, and another end is coupled to a constant voltage, for example, to the power supplying line 116 having the voltage Vel. Thus, the capacitance element 140 holds a potential of the gate node g in the transistor 121. Note that, as the capacitance element 140, for example, a capacitor which is parasitic to the gate node g of the transistor 121 may be used, or a capacitor formed by interposing an insulating layer between mutually different conductive layers in a silicon substrate may be used.

In the transistor 122 of the pixel circuit 110 in the i-th row and the k-th column, a gate node is coupled to the scanning line 12 in the i-th row, and a source node is coupled to the data line 14 in the k-th column. In the transistor 123 of the pixel circuit 110 in the i-th row and the k-th column, the control signal /Gcmp(i) is supplied to a gate node, and a drain node is coupled to the data line 14 in the column. In the transistor 124 of the pixel circuit 110 in the i-th row and the k-th column, the control signal /Gel(i) is supplied to a gate node, and a drain node is coupled to the pixel electrode 131, which is the anode of the OLED 130. Note that the control signal /Gel(i) is supplied via a light emission control line 118 in the i-th row from the scanning line drive circuit 120.

The common electrode 133 that functions as the cathode of the OLED 130 is coupled to a power supplying line having the voltage Vct. In addition, since the electro-optical device 10 is formed at a silicon substrate, a substrate potential of each of the transistors 121 to 124 is a potential corresponding to the potential Vel, for example.

Next, a description will be given of what kind of video data Vid is supplied by the host device 250 and what kind of driving is performed by the electro-optical device 10 to perform displaying based on the video data Vid.

FIG. 7 is a diagram for explaining the video data Vid supplied from the host device 250 to the electro-optical device 10. In the electro-optical device 10, the resolution that can be expressed for a color image is the vertical 1080 dots by the horizontal 1920 dots as described above. Therefore, as illustrated in an upper section of the figure, simply, it is sufficient that video data for three colors of RGB per dot is supplied to the electro-optical device 10, in the vertical 1080 dots by the horizontal 1920 dots, at a frequency of a vertical synchronization signal (vertical synchronization frequency, for example, 60 Hz). However, when such video data is supplied to the electro-optical device 10 for display, and when support for high-speed display at 90 Hz or more for game applications is attempted, for example, the driving frequency increases, and power consumption increases.
Thus, first, in the present exemplary embodiment, as illustrated in a middle section in the figure, in the host device 250, images for two frames that are temporally continuous are separated as a top image including vertical 720 lines and a bottom image including vertical 720 lines, and are caused to be arrayed as one image. A sum of the number of lines in the top image and the number of lines in the bottom image is "1440", and thus a data amount is reduced to ⅔ as compared to two screens each including the number of lines "1080". Thus, a vertical synchronization frequency when the host device 250 supplies the video data Vid to the electro-optical device 10 corresponds to 45 Hz.

In the electro-optical device 10, a period in which the vertical synchronization frequency is 45 Hz is divided into an odd frame period and an even frame period, and the top image is caused to be displayed in the odd frame period, and the bottom image is caused to be displayed in the even frame period. In the present exemplary embodiment, two screens for the odd frame and the even frame are displayed in the period in which the vertical synchronization frequency is 45 Hz, and thus a display in the odd frame and the even frame is visually recognized as substantially being displayed at a vertical synchronization frequency of 90 Hz, which is twice 45 Hz. Note that, in the present description, the period in which the vertical synchronization frequency is 45 Hz is referred to as a frame period. Furthermore, when not particularly distinguished, the odd frame period and the even frame period may be referred to as subframe periods.

Blanking is inserted at each of an upper end and a lower end of the top image and each of an upper end and a lower end of the bottom image, as indicated by hatching. A sum of the number of lines of blanking inserted at the upper end of the top image and the number of lines of blanking inserted at the lower end of the bottom image is set to be approximately equal to a sum of the number of lines of blanking inserted at the lower end of the top image and the number of lines of blanking inserted at the upper end of the bottom image.

The top image and the bottom image both include the vertical 720 lines, whereas the number of vertical rows of the electro-optical device 10 is "1080". Thus, the display region 100 of the electro-optical device 10 is divided into four regions in order from above of regions (a), (b), (c), and (d) each including vertical 270 rows, as illustrated in a bottom section of the figure. Note that "division" here does not mean physical division, and is used in a sense that a region to be supplied with signals is divided for convenience.

Since the region (a) is located at an upper end of the display region 100, and the region (d) is located at a lower end of the display region 100, even when the regions (a) and (d) deteriorate, an observer of a display screen of the electro-optical device 10 is less likely to recognize the deterioration as deterioration. Thus, in the electro-optical device 10, for each of the regions (a) and (d), the vertical 270 rows are caused to be displayed with ½ image quality by, for example, driving two rows simultaneously. In contrast to the regions (a) and (d), the regions (b) and (c) are located at a center of the display region 100, and thus the observer of the display screen of the electro-optical device 10 is more likely to gaze at the regions (b) and (c). Thus, in the electro-optical device 10, for each of the regions (b) and (c), the vertical 270 rows are caused to be displayed with ⅚ image quality, for example, so that deterioration is suppressed or reduced. Specifically, when six rows are considered as one block, driving is performed to display video data of the top image or the bottom image for five rows among the six rows, and for the remaining one row, driving is performed to display the same video data as that of one row adjacent in the Y direction.

FIG. 8 is a diagram for explaining a data reduction in the present exemplary embodiment. Since each of the regions (a) and (d) has ½ image quality, an amount of image information also becomes ½ for each. Since each of the regions (b) and (c) has ⅚ image quality, an amount of image information also becomes ⅚ for each. Since each of the regions (a), (b), (c), and (d) is ¼ of the display region 100, a data amount of the video data Vid supplied to the electro-optical device 10 from the host device 250 becomes ⅔ compared to a configuration in which the video data of the top image and the bottom image is supplied as is.
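For reference, the line counts and the ⅔ reduction stated above can be verified with a short calculation. The following Python sketch is purely illustrative and is not part of the disclosure; the names used in it are hypothetical.

```python
from fractions import Fraction

ROWS_PER_REGION = 270  # regions (a), (b), (c), and (d) of the 1080-row display
IMAGE_QUALITY = {"a": Fraction(1, 2), "b": Fraction(5, 6),
                 "c": Fraction(5, 6), "d": Fraction(1, 2)}

# Lines of video data transferred for one subframe (top or bottom image).
lines_per_subframe = sum(ROWS_PER_REGION * q for q in IMAGE_QUALITY.values())
print(lines_per_subframe)  # 720

# Two subframes are packed into one 45 Hz frame: 1440 lines instead of 2 x 1080.
print(Fraction(2 * int(lines_per_subframe), 2 * 1080))  # 2/3
```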
Next, a specific driving procedure in the present exemplary embodiment will be described. As described above, in the present exemplary embodiment, in the host device 250, as illustrated in FIG. 7, the images for the two frames that are temporally continuous are separated as the top image including the vertical 720 lines and the bottom image including the vertical 720 lines, and are caused to be arrayed as one image. In the electro-optical device 10, the period in which displaying is performed at the vertical synchronization frequency is divided into the odd frame period and the even frame period, and the top image is caused to be displayed in the odd frame period, and the bottom image is caused to be displayed in the even frame period.

In the odd frame period, the host device 250 supplies, for video data corresponding to the regions (a) and (d) of the video data for 720 lines in the top image, the video data Vid for odd-numbered rows to the electro-optical device 10 together with the row addresses Adrs1 and Adrs2 indicating the respective rows. Note that, the row addresses Adrs1 and Adrs2 here are each a row number when the 720 rows in the top image are counted from above. In the electro-optical device 10, the video data Vid for the odd-numbered row corresponding to the region (a) or (d) is caused to be displayed with two rows, including not only the odd-numbered row, but also an even-numbered row adjacent to the odd-numbered row in the Y direction.

Thus, in the scanning line drive circuit 120 in the electro-optical device 10, a concept of a primary and a secondary is introduced in the driving of the scanning lines 12. The secondary means being subordinate to the primary, more particularly, operating in the same manner as the primary, and when a row is set to the secondary, the primary to which the secondary is subordinate is always set as well. Conversely, however, in some cases no secondary is set for a primary. Note that, in the present exemplary embodiment, the scanning line 12 adjacent to the scanning line 12 set to the secondary in one of a positive Y direction (downward direction) or a negative Y direction (upward direction) is set to the primary.

When the scanning line 12 in one certain row is set to the primary, and the scanning line 12 is specified with the row address Adrs1, the primary scanning line 12 is selected for horizontal scanning.
When the scanning line 12 in one certain row is set to the secondary, and the scanning line 12 set to the primary is specified with the row address Adrs1, the two lines of the primary scanning line 12 and the secondary scanning line 12 are selected simultaneously for horizontal scanning.

FIG. 9 is a block diagram illustrating an example of a configuration, of the scanning line drive circuit 120, for supplying scanning signals. Note that, in the figure, for simplicity, a configuration is illustrated for supplying the (i−2)-th to (i+2)-th rows with scanning signals /Gwr(i−2) to /Gwr(i+2), respectively. As illustrated in this figure, a unit circuit Ua is provided for each scanning line 12 to supply the scanning signal. The unit circuit Ua includes an address decoder Add1, a holding unit Me1, and switches Sw1, Sw2, and Sw3. The unit circuit Ua is common to each row, and thus is described by using the i-th row.

The holding unit Me1 in the i-th row holds information specifying whether the i-th row is the primary or the secondary, and information indicating, when the i-th row is the secondary, whether the i-th row is dependent on the scanning line 12 in the (i−1)-th row adjacent in an upward direction or dependent on the scanning line 12 in the (i+1)-th row adjacent in a downward direction. Note that, the information stored in the holding unit Me1 is supplied, for example, from the control circuit 20. When, in one certain horizontal scanning period, the i-th row is specified by the row address Adrs1, the address decoder Add1 outputs the scanning signal /Gwr(i) to select the scanning line 12 in the i-th row in the horizontal scanning period.

The switch Sw1 is provided between an output end of the address decoder Add1 and the scanning line 12, is brought into the on-state when information to set to the primary is held in the holding unit Me1, and is brought into the off-state when information to set to the secondary is held. The switch Sw2 is of a single pole double throw type, and a contact point a is electrically coupled to the scanning line 12 in the (i−1)-th row, and a contact point b is coupled to the scanning line 12 in the (i+1)-th row. The switch Sw2 selects the contact point a when information indicating dependence on the scanning line 12 adjacent in the upward direction is stored in the holding unit Me1, and selects the contact point b when information indicating dependence on the scanning line 12 adjacent in the downward direction is stored. The switch Sw3 is provided between a contact point c in common with the switch Sw2 and the scanning line 12 in the i-th row, is brought into the off-state when information to set to the primary is held in the holding unit Me1, and is brought into the on-state when information to set to the secondary is held. That is, the switches Sw1 and Sw3 are mutually exclusively brought into the on-state or the off-state.

In such a configuration, in a state where the i-th row is set to the primary, when the i-th row is specified with the row address Adrs1, the switch Sw1 is brought into the on-state and the switch Sw3 is brought into the off-state, and thus the scanning signal /Gwr(i) indicating that the i-th row is selected is output to the scanning line 12 in the i-th row. In a state where the i-th row is set to the secondary, when the i-th row is dependent on the (i−1)-th row, the switch Sw1 is brought into the off-state, the switch Sw2 selects the contact point a, and the switch Sw3 is brought into the on-state. Thus, the scanning line 12 in the i-th row is supplied with the scanning signal /Gwr(i−1) in the (i−1)-th row. In a state where the i-th row is set to the secondary, when the i-th row is dependent on the (i+1)-th row, the switch Sw1 is brought into the off-state, the switch Sw2 selects the contact point b, and the switch Sw3 is brought into the on-state. Thus, the scanning line 12 in the i-th row is supplied with the scanning signal /Gwr(i+1) in the (i+1)-th row.
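The routing performed by the unit circuit Ua can be summarized in a short behavioral model. The following Python sketch is illustrative only and assumes nothing beyond the description above; the class and variable names are hypothetical.

```python
class UnitCircuitUa:
    """Behavioral model of the per-row unit circuit Ua (illustrative only)."""

    def __init__(self, row, primary, depends_on=None):
        self.row = row                # row index i of this unit circuit
        self.primary = primary        # contents of the holding unit Me1
        self.depends_on = depends_on  # i - 1 or i + 1 when set to the secondary

    def selected(self, adrs1, primary_rows):
        # Primary: Sw1 on, Sw3 off -> driven by its own address decoder Add1.
        if self.primary:
            return adrs1 == self.row
        # Secondary: Sw1 off, Sw3 on -> follows the adjacent row chosen by Sw2
        # (contact point a: row i - 1, contact point b: row i + 1).
        return self.depends_on in primary_rows

# Row 2 set to the secondary dependent on row 1: both rows are selected
# simultaneously when the row address Adrs1 specifies row 1.
rows = [UnitCircuitUa(1, primary=True),
        UnitCircuitUa(2, primary=False, depends_on=1)]
primaries = {u.row for u in rows if u.primary and u.selected(1, set())}
for u in rows:
    print(u.row, u.selected(1, primaries))  # 1 True / 2 True
```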
FIG. 10 is a figure illustrating, over time, selection of the scanning lines 12 in the odd frame period and the even frame period, and setting of the primary and the secondary for each scanning line 12. Note that, in the display region 100 in the electro-optical device 10, the regions (a), (b), (c), and (d) each include 270 rows; however, in FIG. 10, the regions (a), (b), (c), and (d) are simplified to each include six rows. In this figure, a horizontal axis indicates elapsed time, and a vertical axis indicates row numbers of the scanning lines 12, where the row numbers are counted as 1, 2, 3, . . . , in order from above.

In the odd frame period, an odd-numbered (1, 3, . . . ) row is set to the primary in a block of the six rows in the region (a), and an even-numbered (2, 4, . . . ) row is set to the secondary dependent on the odd-numbered row one row above. In the figure, a selection period for one row (one horizontal scanning period) is indicated by a square frame, a black frame indicates that a row is set to the primary, and a white frame indicates that a row is set to the secondary. Furthermore, the secondary indicates being dependent on the black primary that is in the same selection period. In the odd frame period, in a block of six rows in the region (b), the (1st, 2nd, 3rd, 5th, and 6th) rows counted from above are set to the primary, and a 4th row is set to the secondary dependent on a 3rd row. In the odd frame period, a block of six rows in the region (c) is similar to that in the region (b). In the odd frame period, a block of six rows in the region (d) is similar to that in the region (a).

In an even frame period following the odd frame period, an even-numbered row is set to the primary in the block of six rows in the region (a), and the odd-numbered row is set to the secondary dependent on the even-numbered row one row below. In the even frame period, in the block of six rows in the region (b), the (1st, 2nd, 4th, 5th, and 6th) rows are set to the primary, and the 3rd row is set to the secondary dependent on the 4th row. In the even frame period, the block of six rows in the region (c) is similar to that in the region (b). In the even frame period, the block of six rows in the region (d) is similar to that in the region (a). Note that, in the figure, a period BL, from when a selection period for the last row in the odd frame period or the even frame period ends until a selection period for a leading row starts in the next even frame period or odd frame period, is a period corresponding to blanking inserted into each of an upper end and a lower end of a top image and each of an upper end and a lower end of a bottom image.

FIG. 11 is a diagram illustrating setting of the primary and the secondary, and display contents of the scanning lines 12 in the regions (a) and (d). Note that, in FIG. 11, 12 rows in each of the regions (a) and (d) are extracted for simplification. In an odd frame period, an odd-numbered (1, 3, . . . ) row is set to the primary for the scanning lines 12 in each of the regions (a) and (d), and an even-numbered (2, 4, . . . ) row is set to the secondary dependent on the odd-numbered row one row above, and thus the 1st and 2nd rows, the 3rd and 4th rows, and the 5th and 6th rows have the same display contents.
In the even frame period, the even-numbered (2, 4, . . . ) row is set to the primary for the scanning lines 12 in each of the regions (a) and (d), and the odd-numbered (1, 3, . . . ) row is set to the secondary dependent on the even-numbered row one row below, and thus the 1st and 2nd rows, the 3rd and 4th rows, and the 5th and 6th rows have the same display contents. Note that, in FIG. 11, a left section of a square frame indicates a row number of a row among the 12 rows, and a right section indicates a line number of an image displayed. In addition, in the present exemplary embodiment, compensation for a threshold voltage of the transistor 121 (threshold compensation) is performed in the scanning line 12 set to the primary.

FIG. 12 is a diagram illustrating setting of the primary and the secondary, and display contents of the scanning lines 12 in the regions (b) and (c). Note that, in FIG. 12, 12 rows in each of the regions (b) and (c) are extracted for simplification. In an odd frame period, the 1st, 2nd, 3rd, 5th, and 6th rows are set to the primary for the scanning lines 12 in each of the regions (b) and (c), and the 4th row is set to the secondary dependent on the 3rd row one row above, and thus the 3rd and 4th rows have the same display contents. In an even frame period, the 1st, 2nd, 4th, 5th, and 6th rows are set to the primary for the scanning lines 12 in each of the regions (b) and (c), and the 3rd row is set to the secondary dependent on the 4th row one row below, and thus the 3rd and 4th rows have the same display contents.

FIG. 13 is a block diagram illustrating an example of a configuration, of the scanning line drive circuit 120, for supplying control signals for light emission. Note that, in the figure, for simplicity, a configuration is illustrated for supplying the (i−2)-th to (i+2)-th rows with control signals /Gel(i−2) to /Gel(i+2), respectively. As illustrated in this figure, a unit circuit Ub is provided for each scanning line 12 to supply the control signal for light emission. The unit circuit Ub includes an address decoder Add2, a holding unit Me2, and the switches Sw1, Sw2, and Sw3. The unit circuit Ub is common to each row and is substantially similar to the unit circuit Ua for supplying the scanning signal. Here, differences between the unit circuit Ub and the unit circuit Ua will be described.

In order to supply the control signal for light emission, a concept of a primary and a secondary is introduced as in the case of the scanning signals. Thus, in the unit circuit Ub in the i-th row, the holding unit Me2 holds information specifying whether the i-th row is the primary or the secondary, and information indicating, when the i-th row is the secondary, whether the i-th row is dependent on the (i−1)-th row adjacent in the upward direction or dependent on the (i+1)-th row adjacent in the downward direction. Note that, the information stored in the holding unit Me2 is supplied from the control circuit 20. The address decoder Add2 in the i-th row, when the i-th row is specified by the row address Adrs2, outputs the control signal /Gel(i) illustrated in FIG. 16 in a horizontal scanning period in which the i-th row is selected and after the horizontal scanning period. The control signal for light emission takes one of the three values of the L level, the M level, and the H level as described above.
Of a waveform of the control signal /Gel(i) in the i-th row, a waveform in the horizontal scanning period in which the i-th row is selected will be described later; after the horizontal scanning period, there are two periods (F) in which the control signal is set to the M level until the i-th row is selected in the next subframe, and the control signal is kept at the H level in periods other than those. Note that, for the i-th row, the period (F) in which the control signal /Gel(i) is set to the M level is a light emission period, and a period other than that is a non-light emission period.

FIG. 14 is a figure illustrating, over time, the light emission period (F) in an odd frame period and an even frame period, and setting of the primary and the secondary for each row. Note that, in FIG. 14, similarly to FIG. 10, a horizontal axis indicates elapsed time, a vertical axis indicates row numbers of the scanning lines 12, the row numbers are counted as 1, 2, 3, . . . , in order from above, and the regions (a), (b), (c), and (d) are simplified to each include six rows.

As illustrated in this figure, in the present exemplary embodiment, the number of light emission periods (F) is two in the odd frame period or the even frame period, and is four from a viewpoint of a period V of a vertical synchronization signal of 45 Hz, and the light emission periods (F) are set at approximately regular intervals. When the light emission periods (F) are set at irregular intervals, flicker may be caused, but it is easy to arrange the light emission periods (F) at approximately regular intervals by insertion of the blanking period BL as in the present exemplary embodiment.

Setting of the primary and the secondary for the control signal for light emission is similar to that of the primary and the secondary for the scanning signal. Thus, as illustrated in FIG. 14, in the odd frame period, an odd-numbered (1, 3, . . . ) row is set to the primary in a block of six rows in each of the regions (a) and (d), and an even-numbered (2, 4, . . . ) row is set to the secondary dependent on the odd-numbered row one row above. In the even frame period, the even-numbered (2, 4, . . . ) row is set to the primary in the block of six rows in each of the regions (a) and (d), and the odd-numbered (1, 3, . . . ) row is set to the secondary dependent on the even-numbered row one row below. In addition, in the odd frame period, in a block of six rows in each of the regions (b) and (c), the 1st, 2nd, 3rd, 5th, and 6th rows are set to the primary, and the 4th row is set to the secondary dependent on the 3rd row one row above. In the even frame period, in the block of six rows in each of the regions (b) and (c), the 1st, 2nd, 4th, 5th, and 6th rows are set to the primary, and the 3rd row is set to the secondary dependent on the 4th row one row below. Note that, in the present exemplary embodiment, the switching of the information indicating the primary or the secondary stored in the holding unit Me2 is performed after the primary row has been selected by the row address Adrs2 in the preceding stage.

FIG. 15 is a timing chart for explaining operation of the electro-optical device 10, and FIG. 16 is a diagram illustrating an example of a relationship between a scanning signal and a control signal for light emission.
In the present exemplary embodiment, in the odd frame period and the even frame period, the primary or the secondary is set for each row in the regions (a), (b), (c), and (d), but when attention is paid to one certain row, the operation for selection in the horizontal scanning period (H) is common. The operation is also common to the pixel circuits 110 in the respective first to 5760-th columns of a row scanned in the horizontal scanning period (H). Thus, in the following, a description will be given focusing on the pixel circuit 110 in the i-th row and the k-th column.

Note that, in FIG. 15, of the scanning signals /Gwr(1) to /Gwr(1080), the scanning signals /Gwr(1) and /Gwr(2) in the region (a), the scanning signals /Gwr(i−1) and /Gwr(i) in the region (b) or (c), and the scanning signals /Gwr(1079) and /Gwr(1080) in the region (d) are illustrated. One of the scanning signals /Gwr(1) and /Gwr(2) is set to the primary, and the other is set to the secondary, and thus the two rows are selected at the same time. Also for the scanning signals /Gwr(1079) and /Gwr(1080), one is set to the primary, and the other is set to the secondary, and thus the two rows are selected at the same time. For the scanning signals /Gwr(271) to /Gwr(810) in the regions (b) and (c), one row is selected alone, or two rows are selected at the same time, but the scanning signal /Gwr(i−1) or /Gwr(i) is illustrated as being selected alone. In FIG. 15 and FIG. 16, a vertical scale indicating a voltage is not necessarily even for each signal.

In the electro-optical device 10, the horizontal scanning period (H) is divided into five periods of initialization periods (A), (B), and (C), a compensation period (D), and a writing period (E) in temporal order. Further, as for the operation of the pixel circuit 110, the light emission period (F) is further added to the five periods described above. The light emission period (F) in the i-th row is a period in which the control signal for light emission /Gel(i) is set to the M level, as described above and as illustrated in FIG. 16.

Of the initialization periods (A), (B), and (C), the initialization period (A) is a period for setting the transistor 121 to the off-state, and is a period for preparatory processing for the initialization period (C). The initialization period (B) is a period for a process for resetting a potential at the anode of the OLED 130, and the initialization period (C) is a period for applying, to the gate node g of the transistor 121, a voltage to turn on the transistor 121 at a start of the compensation period (D).
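The control-signal levels that define the periods (A) to (E), described one by one below, can be collected into a single table. The following Python sketch merely transcribes those levels for reference; the dictionary form and the on/off derivation rules are illustrative, not part of the disclosure.

```python
# Levels of the column-side control signals in each period of the horizontal
# scanning period (H), transcribed from the period-by-period description below.
PERIODS = {
    "A": {"/Gini": "H", "Gorst": "L", "/Drst": "L", "Gref": "H", "Gcp": "L"},
    "B": {"/Gini": "H", "Gorst": "H", "/Drst": "H", "Gref": "H", "Gcp": "L"},
    "C": {"/Gini": "L", "Gorst": "L", "/Drst": "H", "Gref": "H", "Gcp": "L"},
    "D": {"/Gini": "H", "Gorst": "L", "/Drst": "H", "Gref": "H", "Gcp": "L"},
    "E": {"/Gini": "H", "Gorst": "L", "/Drst": "H", "Gref": "L", "Gcp": "H"},
}

def column_states(sig):
    # Transistors 68 and 66 are P channel (on when their gates are at the L
    # level), transistor 67 is N channel (on at the H level), and the
    # transmission gates 73 and 72 are on when Gref and Gcp are at the H level.
    return {"tr68": sig["/Gini"] == "L", "tr67": sig["Gorst"] == "H",
            "tr66": sig["/Drst"] == "L", "tg73": sig["Gref"] == "H",
            "tg72": sig["Gcp"] == "H"}

print(column_states(PERIODS["A"]))  # only transistor 66 and gate 73 are on
```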
In each horizontal scanning period (H), in the initialization period (A), the control signal /Gini is at the H level, the control signal Gorst is at the L level, the control signal /Drst is at the L level, the control signal Gref is at the H level, and the control signal Gcp is at the L level. Thus, the transistor 68 is in the off-state, the transistor 67 is in the off-state, the transistor 66 is in the on-state, the transmission gate 73 is in the on-state, and the transmission gate 72 is in the off-state. In addition, in the initialization period (A) of the horizontal scanning period (H) in which the i-th row is selected, the scanning signal /Gwr(i) is at the L level, the control signal /Gcmp(i) is at the H level, and the control signal /Gel(i) is at the H level. Therefore, in the pixel circuit 110, the transistor 122 is in the on-state, and the transistors 123 and 124 are in the off-state.

Thus, in the initialization period (A), as illustrated in FIG. 17, the voltage Vref is applied via the transmission gate 73 to the one end of the capacitance element 74, the one end of the capacitance element 75, and the output end of the transmission gate 72. Additionally, in the pixel circuit 110, the voltage Vel passes through the transistor 66, the data line 14, and the transistor 122 in order, and is applied to one end of the capacitance element 140 and the gate node g of the transistor 121. When the voltage Vel is applied to the gate node g, the voltage between the gate node and the source node is zero, and thus the transistor 121 is forcibly brought into the off-state, and a current flowing through the OLED 130 is blocked. Furthermore, since the voltage Vel is applied to the other end of the capacitance element 74 via the data line 14, the capacitance element 74 is charged to a voltage |Vel−Vref|.

In each horizontal scanning period (H), in the initialization period (B), the control signal /Gini is at the H level, the control signal Gorst is set to the H level, the control signal /Drst is set to the H level, the control signal Gref is at the H level, and the control signal Gcp is at the L level. Thus, the transistor 68 is kept in the off-state, the transistor 67 is changed to be in the on-state, the transistor 66 is changed to be in the off-state, the transmission gate 73 is kept in the on-state, and the transmission gate 72 is kept in the off-state. In addition, in the initialization period (B) of the horizontal scanning period (H) in which the i-th row is selected, the scanning signal /Gwr(i) is set to the H level, the control signal /Gcmp(i) is set to the L level, and the control signal /Gel(i) is set to the L level. Therefore, in the pixel circuit 110, the transistor 122 is brought into the off-state, and the transistors 123 and 124 are brought into the on-state.

Thus, in the initialization period (B), as illustrated in FIG. 18, the one end of the capacitance element 74, the one end of the capacitance element 75, and the output end of the transmission gate 72 are kept at the voltage Vref. Further, in the pixel circuit 110, the voltage Vorst passes through the transistor 67, the data line 14, and the transistors 123 and 124 in order, and is applied to the pixel electrode 131, which is the anode of the OLED 130. In the OLED 130, the light emission function layer 132 is sandwiched between the pixel electrode 131 and the common electrode 133, and thus a parasitic capacitive component exists. In the initialization period (B), a voltage held in the capacitive component, in particular, a voltage in accordance with a current flowing through the OLED 130 in the light emission period (F), is reset by application of the voltage Vorst to the pixel electrode 131. Note that, the voltage Vorst is a voltage that causes the OLED 130 not to emit light, and specifically, is zero volts corresponding to the L level, or a voltage close to zero volts (0 to 1 volt). Furthermore, since the voltage Vorst is applied to the other end of the capacitance element 74 via the data line 14, the capacitance element 74 is charged to a voltage |Vorst−Vref|.

In each horizontal scanning period (H), in the initialization period (C), the control signal /Gini is set to the L level, the control signal Gorst is set to the L level, the control signal /Drst is at the H level, the control signal Gref is at the H level, and the control signal Gcp is at the L level.
Thus, the transistor 68 is changed to be in the on-state, the transistor 67 is changed to be in the off-state, the transistor 66 is kept in the off-state, the transmission gate 73 is kept in the on-state, and the transmission gate 72 is kept in the off-state. In addition, in the initialization period (C) of the horizontal scanning period (H) in which the i-th row is selected, the scanning signal /Gwr(i) is set to the L level, the control signal /Gcmp(i) is set to the H level, and the control signal /Gel(i) is set to the H level. Therefore, in the pixel circuit 110, the transistor 122 is brought into the on-state, and the transistors 123 and 124 are brought into the off-state.

Thus, in the initialization period (C), as illustrated in FIG. 19, the one end of the capacitance element 74, the one end of the capacitance element 75, and the output end of the transmission gate 72 are kept at the voltage Vref. Additionally, in the pixel circuit 110, the voltage Vini passes through the transistor 68, the data line 14, and the transistor 122 in order, and is applied to the one end of the capacitance element 140 and the gate node g of the transistor 121. Furthermore, since the voltage Vini is applied to the other end of the capacitance element 74 via the data line 14, the capacitance element 74 is charged to a voltage |Vini−Vref|.

In each horizontal scanning period (H), in the compensation period (D), the control signal /Gini is set to the H level, the control signal Gorst is at the L level, the control signal /Drst is at the H level, the control signal Gref is at the H level, and the control signal Gcp is at the L level. Thus, the transistor 68 is changed to be in the off-state, the transistor 67 is kept in the off-state, the transistor 66 is kept in the off-state, the transmission gate 73 is kept in the on-state, and the transmission gate 72 is kept in the off-state. In addition, in the compensation period (D) of the horizontal scanning period (H) in which the i-th row is selected, the scanning signal /Gwr(i) is kept at the L level, the control signal /Gcmp(i) is changed to be at the L level, and the control signal /Gel(i) is kept at the H level. Therefore, in the pixel circuit 110, the transistor 122 is kept in the on-state, the transistor 123 is brought into the on-state, and the transistor 124 is kept in the off-state.

Thus, in the compensation period (D), as illustrated in FIG. 20, the one end of the capacitance element 74, the one end of the capacitance element 75, and the output end of the transmission gate 72 are kept at the voltage Vref. Since the one end of the capacitance element 140 is held at the voltage Vini in the immediately preceding initialization period (C), the pixel circuit 110 is brought into a state where a voltage (Vel−Vini) is held as the voltage between the gate node and the source node of the transistor 121. In this state, when the transistor 123 is brought into the on-state, the transistor 121 is brought into a state where the gate node and the drain node are coupled, that is, a diode-coupled state. Therefore, a voltage Vgs between the gate node and the source node in the transistor 121 converges to a threshold voltage of the transistor 121. Here, when the threshold voltage is conveniently denoted as Vth, a voltage of the gate node g of the transistor 121 converges to a voltage (Vel−Vth) corresponding to the threshold voltage Vth. Note that, at a start of the compensation period (D), it is necessary that a current flows from the source node toward the drain node in the diode-coupled transistor 121.
Thus, the voltage Vini applied to the gate node g in the initialization period (C) before the compensation period (D) is in a relationship of Vini<Vel−Vth. Additionally, in the compensation period (D), the gate node g of the transistor 121 is coupled to the data line 14 via the transistor 122, and the drain node of the transistor 121 is coupled to the data line 14 via the transistor 123. Therefore, a voltage of each of the data line 14 and the other end of the capacitance element 74 also converges to the voltage (Vel−Vth). Therefore, the capacitance element 74 is charged to a voltage |Vel−Vth−Vref|.

On the other hand, in the compensation period (D), the control signals Sel(1) to Sel(1920) are sequentially and exclusively set to the H level. Note that, although omitted in FIG. 15, in the compensation period (D), the control signals /Sel(1) to /Sel(1920) are sequentially and exclusively set to the L level in synchronization with the control signals Sel(1) to Sel(1920), respectively. Furthermore, when the control signal Sel(j) is set to the H level, for example, of the control signals Sel(1) to Sel(1920), the data signal output circuit 30 outputs the data signals Vd(1) to Vd(3) of the respective three pixels corresponding to intersections of the scanning line 12 in the i-th row and the data lines 14 belonging to the j-th group. In more detail, in a period where the control signal Sel(j) is set to the H level, the data signal output circuit 30 outputs the data signal Vd(1) corresponding to a pixel in the i-th row and the (3j−2)-th column, outputs the data signal Vd(2) corresponding to a pixel in the i-th row and the (3j−1)-th column, and outputs the data signal Vd(3) corresponding to a pixel in the i-th row and the (3j)-th column. As a specific example, when j is "2", in a period where the control signal Sel(2) is set to the H level, the data signal output circuit 30 outputs the data signal Vd(1) corresponding to a pixel in the i-th row and a 4th column, outputs the data signal Vd(2) corresponding to a pixel in the i-th row and a 5th column, and outputs the data signal Vd(3) corresponding to a pixel in the i-th row and a 6th column.

When the control signals Sel(1) to Sel(1920) are sequentially and exclusively set to the H level, a voltage of a data signal corresponding to a pixel is held in each of the capacitance elements 51 corresponding to the first column to the 5760-th column. Note that, FIG. 20 illustrates a state in which, while the control signal Sel(j) corresponding to the j-th group to which the pixel circuit 110 belongs is set to the H level in the compensation period (D), a voltage Vdata of the data signal Vd(1) is held in the capacitance element 51.
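The correspondence between the group number j and the three columns served by the data signals Vd(1) to Vd(3) is a simple arithmetic mapping. The following Python sketch restates it; the function name is hypothetical and the sketch is not part of the disclosure.

```python
def columns_for_group(j):
    """Columns driven by Vd(1), Vd(2), and Vd(3) while Sel(j) is at the H level."""
    return (3 * j - 2, 3 * j - 1, 3 * j)

print(columns_for_group(2))     # (4, 5, 6), as in the example above
print(columns_for_group(1920))  # (5758, 5759, 5760), the last of the 1920 groups
```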
In each horizontal scanning period (H), in the writing period (E), the control signal /Gini is at the H level, the control signal Gorst is at the L level, the control signal /Drst is at the H level, the control signal Gref is set to the L level, and the control signal Gcp is set to the H level. Thus, the transistors 68, 67, and 66 are kept in the off-state, the transmission gate 73 is changed to be in the off-state, and the transmission gate 72 is changed to be in the on-state. In addition, in the writing period (E) of the horizontal scanning period (H) in which the i-th row is selected, the scanning signal /Gwr(i) is kept at the L level, the control signal /Gcmp(i) is changed to be at the H level, and the control signal /Gel(i) is kept at the H level. Therefore, in the pixel circuit 110, the transistor 122 is in the on-state, and the transistors 123 and 124 are brought into the off-state.

Thus, in the writing period (E) of the horizontal scanning period (H) in which the i-th row is selected, as illustrated in FIG. 21, due to the off-state of the transmission gate 73 and the on-state of the transmission gate 72, a voltage of the one end of the capacitance element 74 is changed from the voltage Vref in accordance with the voltage held by the capacitance element 51. The voltage change propagates, via the capacitance element 74, through the data line 14 and the transistor 122 in this order, to the gate node g. A voltage of the gate node g after the change is held in the capacitance element 140.

Note that, as illustrated in FIG. 21, a capacitance of the capacitance element 51 is denoted as Cref, a capacitance of the capacitance element 74 is denoted as Cblk, a capacitance of the capacitance element 75 is denoted as Cdt, and a capacitance of the capacitance element 140 is denoted as Cpix. Additionally, the voltage of the data signal Vd(1) held in the capacitance element 51 in the compensation period (D) is denoted as Vdata. A voltage change amount ΔV of the gate node g from the compensation period (D) to the writing period (E) is expressed by Equation (1) below.

[Mathematical Equation 1]

$$\Delta V=\frac{\dfrac{C_{blk}(C_{dt}+C_{pix})}{C_{blk}+C_{dt}+C_{pix}}\times V_{ref}+C_{ref}\times V_{data}}{C_{ref}+\dfrac{C_{blk}(C_{dt}+C_{pix})}{C_{blk}+C_{dt}+C_{pix}}}-V_{ref}=\frac{C_{ref}}{C_{ref}+\dfrac{C_{blk}(C_{dt}+C_{pix})}{C_{blk}+C_{dt}+C_{pix}}}\times(V_{data}-V_{ref})=K_a\times(V_{data}-V_{ref})\tag{1}$$

That is, as illustrated in Equation (1), the voltage of the gate node g changes by a value obtained by multiplying the voltage change amount (Vdata−Vref) at the one end of the capacitance element 74 by a coefficient Ka. Note that, the coefficient Ka is a coefficient less than "1", and is determined by the capacitances Cref, Cblk, Cdt, and Cpix. In other words, each of the capacitances Cref, Cblk, Cdt, and Cpix is designed to have an appropriate value to set the coefficient Ka to be less than "1".

When the coefficient Ka is less than "1", the voltage amplitude from a lowest value to a highest value of the voltage Vdata of a data signal is compressed in accordance with the coefficient Ka, and propagates to the gate node g. When the pixel circuit 110 is miniaturized, a current flowing through the OLED 130 may change significantly for a very slight change in the voltage Vgs between the gate node and the source node of the transistor 121. Even in this case, in the present exemplary embodiment, the voltage amplitude of the voltage Vdata of a data signal is compressed in accordance with the coefficient Ka and propagates to the gate node g, and thus a current flowing through the OLED 130 can be controlled accurately.
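For a numerical feel of the compression, the coefficient Ka of Equation (1) can be evaluated directly. In the following Python sketch, the capacitance values are arbitrary placeholders and are not values from the present disclosure.

```python
def coefficient_ka(c_ref, c_blk, c_dt, c_pix):
    """Ka from Equation (1): Cref / (Cref + Cs), where Cs is the series
    combination of Cblk with (Cdt + Cpix); always less than 1."""
    c_s = c_blk * (c_dt + c_pix) / (c_blk + c_dt + c_pix)
    return c_ref / (c_ref + c_s)

# Placeholder capacitances in farads (illustrative only):
ka = coefficient_ka(c_ref=50e-15, c_blk=100e-15, c_dt=200e-15, c_pix=20e-15)
print(ka)  # about 0.42: the data-signal amplitude is compressed by this factor
```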
After the writing period (E), the light emission period (F) follows. In other words, after selection of the scanning line 12 in the i-th row, the control signal /Gel(i) is set to the M level when the light emission period (F) is reached. Thus, as illustrated in FIG. 22, the transistor 121 causes the current Iel in accordance with the voltage Vgs, that is, the current Iel limited by the resistance between the source and the drain of the transistor 124, to flow through the OLED 130. Therefore, the OLED 130 is brought into an optical state of emitting light at brightness in accordance with the current Iel.

As illustrated in FIG. 10, such selection of the scanning line 12 is performed by each row in the regions (a), (b), (c), and (d) being set to the primary or the secondary in the odd frame period and the even frame period. Additionally, as illustrated in FIG. 14, the light emission period (F) is likewise set by each row in the regions (a), (b), (c), and (d) being set to the primary or the secondary in the odd frame period and the even frame period. In the present exemplary embodiment, for example, as illustrated in FIG. 16, and as explained with reference to FIG. 14 described above, the two light emission periods (F) for the i-th row are set at approximately regular intervals in each of the odd frame period and the even frame period, and there are a total of four light emission periods (F) from a viewpoint of the one period V (a period from the top image to the bottom image) of a vertical synchronization signal of 45 Hz. Specifically, a non-light emission period in which the control signal /Gel(i) is set to the H level is appropriately inserted, so that the non-light emission period and the light emission period (F) are alternately repeated.

In the present exemplary embodiment, the configuration is adopted in which the amplitude of the voltage Vdata of a data signal output from the data signal output circuit 30 is compressed by interposing the capacitance element 74, and the compressed amplitude is supplied to the gate node g in the pixel circuit 110. In addition, in the present exemplary embodiment, the configuration is adopted in which, in the compensation period (D), the threshold voltage Vth of the transistor 121 is compensated. Next, usefulness of the compensation period (D) will be described.

Note that, in describing this usefulness, in order to avoid complicated equations, a case is assumed in which a compression ratio of the voltage Vdata of a data signal is "1", that is, a case in which the voltage Vdata of a data signal is supplied to the data line 14 as is in the writing period (E) after the compensation period (D). Further, it is assumed that, in the light emission period (F), when the L level, rather than the M level, is applied to the gate node of the transistor 124 and the transistor 124 is brought into the on-state, the resistance between the source node and the drain node is ideally zero.

First, the current Iel flowing through the OLED 130 in the light emission period (F) can be expressed as in Equation (2) below.

[Mathematical Equation 2]

$$I_{el}=k_1(V_{gs}-V_{th})^2\tag{2}$$

Note that, a coefficient k1 in Equation (2) is expressed by the following Equation (3).

[Mathematical Equation 3]

$$k_1=(W/2L)\cdot\mu C_{ox}\tag{3}$$

In Equation (3), W is a channel width of the transistor 121, L is a channel length of the transistor 121, μ is mobility of a carrier, and Cox is a capacitance per unit area of a (gate) oxide film in the transistor 121.

In a configuration in which the voltage Vdata of a data signal is not compressed and the threshold voltage of the transistor 121 is not compensated, when the voltage Vdata of a data signal is applied directly to the gate node g of the transistor 121, the voltage Vgs between the gate node and the source node of the transistor 121 can be expressed as in Equation (4) below.

[Mathematical Equation 4]

$$V_{gs}=|V_{el}-V_{data}|\tag{4}$$

At this time, the current Iel flowing through the OLED 130 can be expressed as in Equation (5) below.

[Mathematical Equation 5]

$$I_{el}=k_1(V_{gs}-V_{th})^2=k_1(V_{el}-V_{data}-V_{th})^2\tag{5}$$

As expressed in Equation (5), the current Iel is influenced by the threshold voltage Vth. Here, due to the semiconductor process, a variation of the threshold voltage Vth in the transistor 121 is in a range from several mV to several tens of mV.
When the threshold voltage Vth in the transistor 121 varies in a range from several mV to several tens of mV, there is a possibility that a difference of up to 40% in the current Iel may be generated between adjacent pixel circuits 110. Current-brightness characteristics in the OLED 130 are generally linear. Therefore, in a configuration that does not compensate for the threshold voltage Vth, even when a data signal of the same voltage Vdata is supplied to each of two pixel circuits 110 in order to cause the two OLEDs 130 to emit light at the same brightness, the currents actually flowing through the respective OLEDs 130 are different. Therefore, in a configuration that does not compensate for the threshold voltage Vth, the brightness varies, and display quality will be significantly impaired.

Therefore, in the present exemplary embodiment, the configuration is adopted in which the threshold voltage Vth compensation is performed only in a row set to the primary, and by switching the primary and secondary setting between the odd frame and the even frame, the threshold voltage Vth compensation is performed in at least one of the odd frame and the even frame.

When a voltage of the gate node g in the transistor 121 is caused to converge to the voltage (Vel−Vth) in the compensation period (D), and then the gate node g is caused to change to have the voltage Vdata, the voltage Vgs between the gate node and the source node of the transistor 121 can be expressed as in Equation (6) below.

[Mathematical Equation 6]

$$V_{gs}=V_{th}-k_2(V_{data}-V_{ref})\tag{6}$$

Note that, a coefficient k2 in Equation (6) is a coefficient determined by the capacitances Cblk and Cpix in a configuration in which the voltage Vdata of a data signal is not compressed (a configuration without the capacitance element 74). When the voltage Vgs is expressed as in Equation (6), the current Iel flowing through the OLED 130 can be expressed as in Equation (7) below.

[Mathematical Equation 7]

$$I_{el}=k_1\{V_{th}-k_2(V_{data}-V_{ref})-V_{th}\}^2=k_1k_2^{\,2}(V_{ref}-V_{data})^2\tag{7}$$

In Equation (7), the term of the threshold voltage Vth is removed, and the current Iel is determined by the voltage Vdata of a data signal. This makes it possible to suppress a reduction in display quality due to the threshold voltage Vth of the transistor 121. Note that, in the present exemplary embodiment, as actually illustrated in Equation (1), the voltage amplitude from the lowest value to the highest value of the voltage Vdata of a data signal is compressed in accordance with the coefficient Ka, and propagates to the gate node g. Further, in the present exemplary embodiment, the M level is supplied to the gate node of the transistor 124 in the light emission period (F) to limit the current Iel, but the reduction in display quality due to the threshold voltage Vth is still suppressed.
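The effect of the compensation can be confirmed by comparing Equation (5) with Equation (7) for two transistors whose threshold voltages differ slightly. The following Python sketch uses placeholder values for k1, k2, and the supply and signal voltages; none of them are values from the disclosure.

```python
k1, k2 = 1e-4, 0.5                 # placeholder transistor coefficients
vel, vref, vdata = 7.0, 5.0, 4.0   # placeholder voltages in volts

def iel_uncompensated(vth):
    return k1 * (vel - vdata - vth) ** 2    # Equation (5): depends on Vth

def iel_compensated(vth):
    return k1 * (k2 * (vref - vdata)) ** 2  # Equation (7): Vth cancels out

for vth in (0.50, 0.53):                    # 30 mV threshold spread
    print(iel_uncompensated(vth), iel_compensated(vth))
# The first column differs between the two devices; the second is identical.
```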
Next, in the present exemplary embodiment, usefulness of applying the M level to the gate node of the transistor 124 in the light emission period (F) will be described. The reason for applying the M level to the gate node of the transistor 124 is to maintain a constant current property by the transistor 121, regardless of a change in current-voltage characteristics over time in the OLED 130, by causing the transistor 124 to operate in a saturation region. In particular, when the current Iel flows, the OLED 130 emits light at brightness in accordance with the current Iel.

In the present exemplary embodiment, in the pixel circuit 110, the voltage of the gate node g in the transistor 121 is held by the capacitance element 140, so that the constant current property of the current Iel flowing from the power supplying line 116 to the OLED 130 is ensured. However, the OLED 130 has such characteristics that its element characteristics change with a lapse of light emission time, and that a potential of the anode (pixel electrode 131) required to cause a constant current to flow gradually increases. When the potential of the anode in the OLED 130 increases, an equilibrium point of potential in a path from the power supplying line 116 to the common electrode 133 changes, and a potential of the source node of the transistor 124, that is, the drain node of the transistor 121, increases. When the potential of the drain node of the transistor 121 increases, the voltage between the source node and the drain node in the transistor 121 also varies, the current flowing through the drain node of the transistor 121 also varies, and as a result, the constant current property of the OLED 130 is impaired.

Therefore, in the present exemplary embodiment, the transistor 124 is caused to operate in the saturation region as a countermeasure against the constant current property being impaired in association with the change over time in the element characteristics of the OLED 130. When the transistor 124 is caused to operate in the saturation region, even when the potential of the anode in the OLED 130 changes, it is the transistor 124 that is directly affected. The transistor 121 is affected by the potential variation in the drain node of the transistor 124, but a variation in a drain current in the saturation region is small. Thus, the influence of the variation in the drain potential of the transistor 121 coupled to the transistor 124, and thus of a variation in the gate potential due to current leakage, is mitigated.

In the first exemplary embodiment, a data amount in the Y direction of the video data Vid supplied from the host device 250 to the electro-optical device 10 is reduced. Furthermore, a data amount in the X direction can also be reduced by the following technique.

FIG. 23 is a diagram for explaining the reduction in data amount in the X direction. Note that, in FIG. 23, for the sake of simplicity of description, when RGB correspond to one dot, vertical two dots by horizontal four dots are extracted from a matrix array. Note that, a number in a lower side in a square frame indicates a dot number in the X direction of an original image. For example, R3 means an R component belonging to a third dot in the X direction.

When the original image data is illustrated by the vertical two dots by the horizontal four dots (RGB) as described above, the host device 250 reduces R components for two dots among the four dots, does not reduce G components, reduces B components for the two dots among the four dots, and supplies the resulting image data to the electro-optical device 10. The electro-optical device 10, for the image data of the reduced R and B components, reproduces the image data of the reduced color components by duplicating the same color component of an adjacent dot, as illustrated in a lower section of the figure. For example, R2 reduced from the original image data is reproduced by replicating R1, which was not reduced.

In consideration of the contribution (visibility) in brightness of each color in RGB, R:3, G:6, and B:1 are defined. Since the R and B components are each halved by the reduction, "10" (=3+6+1) before the data reduction becomes "8" (=1.5+6+0.5) after the data reduction. In the reduction as described above, RGBRGBRGBRGB becomes RGBGRGBG, and the image quality becomes ⅔; however, in view of the contribution described above, the image quality becomes ⅘. In the present exemplary embodiment, since the image quality in the Y direction becomes ⅔, considering that the image quality in the X direction becomes ⅘, the image quality in the XY directions becomes 8/15 (=⅔×⅘), which is better than half the image quality. With the reduction in the Y direction only, driving can be performed at a vertical synchronization frequency of 45 Hz; when the reduction is further performed in the X direction, one horizontal scanning period is shortened, and thus driving can be performed at 67.5 Hz, which is 3/2 times 45 Hz. The 45 Hz corresponds to the one cycle V throughout an odd frame and an even frame, so that in each subframe, driving is performed at 135 Hz, which is twice 67.5 Hz. Note that, in such driving, when vertical lines and characters are caused to be displayed, a line drawing or the like may be visually recognized as discolored; however, such a display is a still image, and thus it is sufficient that driving is performed by a method that does not reduce data.
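The weighted image-quality figures quoted above follow directly from the contributions R:3, G:6, and B:1. The following Python sketch reproduces the arithmetic; it is illustrative only and not part of the disclosure.

```python
from fractions import Fraction

FULL = Fraction(3 + 6 + 1)                     # "10" before the reduction
REDUCED = Fraction(3, 2) + 6 + Fraction(1, 2)  # "8": R and B are each halved

quality_x = REDUCED / FULL                     # 4/5, weighted by visibility
quality_y = Fraction(2, 3)                     # Y-direction reduction
print(quality_x, quality_x * quality_y)        # 4/5 and 8/15
```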
Second Exemplary Embodiment

Next, the electro-optical device 10 according to a second exemplary embodiment will be described. Note that, in the second exemplary embodiment, the configuration of the electro-optical device 10 is the same as that of the first exemplary embodiment, and the resolution that can be expressed in a color image is 1080 dots by 1920 dots. Additionally, in the second exemplary embodiment, the display region 100 of the electro-optical device 10 need not be divided into the regions (a), (b), (c), and (d).

FIG. 24 is an explanatory diagram of video data supplied to the electro-optical device 10 from the host device 250 in the second exemplary embodiment. As illustrated in this figure, in the second exemplary embodiment, the host device 250 supplies an image of vertical 720 lines to the electro-optical device 10. However, since the electro-optical device 10 includes the vertical 1080 rows, it is necessary to perform a 1.5-times extension in the vertical direction. Thus, in the second exemplary embodiment, as illustrated in FIG. 25, in one certain frame period, single row selection and two-row simultaneous selection are repeated every three rows. In other words, in the single row selection, the selected row is set to the primary, and in the two-row simultaneous selection, one row is set to the primary and the other is set to the secondary. In the next frame period, the row previously selected in the single row selection is set to the primary in the two-row simultaneous selection, the row previously set to the primary in the two-row simultaneous selection is set to the secondary in the two-row simultaneous selection, and the row previously set to the secondary in the two-row simultaneous selection is set to the primary in the single row selection. Note that, as illustrated in FIG. 25, threshold compensation is performed in the row set to the primary, and is not performed in the row set to the secondary.

In such driving, when the image illustrated in FIG. 24 is displayed in two frames in the electro-optical device 10, transfer of the video data Vid can be completed at 30 Hz, and thus power consumption can be suppressed. For example, the display for one frame at 60 Hz can be displayed in one frame at 30 Hz. Since the data transfer amount is halved, it is possible to reduce logic current consumption, and the parallel number in a high-speed I/F can be reduced to ½, for example, from 8 to 4. That is, the power consumption can be reduced.
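The 1.5-times vertical extension can be expressed as a fixed mapping from the 1080 display rows to the 720 input lines: each block of three display rows shows two input lines, one by single row selection and one by two-row simultaneous selection. The following Python sketch shows one frame's assignment (the roles rotate in the next frame, as described above); the function name is hypothetical.

```python
def input_line_for_row(row):
    """Input line (1..720) displayed on a given display row (1..1080)."""
    block, pos = divmod(row - 1, 3)
    # First row of each block: single row selection; the remaining two rows:
    # two-row simultaneous selection of the next input line.
    return 2 * block + (1 if pos == 0 else 2)

print([input_line_for_row(r) for r in range(1, 7)])  # [1, 2, 2, 3, 4, 4]
assert input_line_for_row(1080) == 720
```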
In this manner, in the electro-optical device 10, the driving in the first exemplary embodiment and the driving in the second exemplary embodiment can be performed in accordance with the video data Vid supplied from the host device 250. In addition, as illustrated in FIG. 26, by setting all the 1st to 1080-th lines of the video data Vid illustrated in the upper section of FIG. 7 to the primary, it is possible to perform driving without deterioration. Even when these driving methods are switched, power consumption does not increase; thus, for example, it is easy to selectively use the driving methods between a case where displaying is desirably performed at a high frame rate for applications such as games, and a case where a high frame rate is not needed, such as for a still image.

In addition, in the exemplary embodiments and the like, the OLED 130 has been illustrated as an example of the display element, but other display elements may be used. For example, LEDs, mini LEDs, micro LEDs, or the like may be used as the display element. An optical state in a pixel circuit refers to a state in which these display elements emit light at brightness corresponding to a voltage of a data signal. The channel type of each of the transistors 121, 122, 123, and 124 is not limited to that in the exemplary embodiments and the like. Further, these transistors may also be replaced with transmission gates as appropriate, except for the transistor 121. Additionally, the transmission gates 45, 72, and 73 may also be replaced with one-sided channel transistors.

Electronic Apparatus

Next, an electronic apparatus to which the electro-optical device 10 according to the above-described exemplary embodiments is applied will be described. The electro-optical device 10 is suitable for applications with small pixels and a high-definition display. In this regard, a head-mounted display will be described as an example of the electronic apparatus.

FIG. 27 is a diagram illustrating the appearance of a head-mounted display, and FIG. 28 is a diagram illustrating an optical configuration of the head-mounted display. First, as illustrated in FIG. 27, a head-mounted display 300 includes, in terms of appearance, temples 310, a bridge 320, and lenses 301L and 301R, as with typical eye glasses. In addition, as illustrated in FIG. 28, the head-mounted display 300 is provided with an electro-optical device 10L for a left eye and an electro-optical device 10R for a right eye in the vicinity of the bridge 320 and on the back side (the lower side in the figure) of the lenses 301L and 301R.

An image display surface of the electro-optical device 10L is disposed to be on the left side in FIG. 28. According to this configuration, a display image by the electro-optical device 10L is output via an optical lens 302L in a 9-o'clock direction in the figure. A half mirror 303L reflects the display image by the electro-optical device 10L in a 6-o'clock direction, while the half mirror 303L transmits light entering in a 12-o'clock direction. An image display surface of the electro-optical device 10R is disposed on the right side opposite to the electro-optical device 10L. According to this configuration, a display image by the electro-optical device 10R is emitted via an optical lens 302R in a 3-o'clock direction in the figure.
A half mirror 303R reflects the display image by the electro-optical device 10R in the 6-o'clock direction, while the half mirror 303R transmits light entering in the 12-o'clock direction. In this configuration, a wearer of the head-mounted display 300 can observe the display images by the electro-optical devices 10L and 10R in a see-through state in which the display images overlap with the outside. In addition, in the head-mounted display 300, of images for both eyes with parallax, an image for a left eye is displayed on the electro-optical device 10L, and an image for a right eye is displayed on the electro-optical device 10R, and thus, it is possible to cause the wearer to perceive the displayed images as an image having a depth or a three-dimensional effect.

Note that, in addition to the head-mounted display 300, an electronic apparatus including the electro-optical device 10 can be applied to an electronic viewfinder in a video camera, a lens-exchangeable digital camera, and the like, a personal digital assistant, a watch display, a light valve of a projection type projector, or the like.

APPENDICES

Preferred aspects of the present disclosure will be understood as in the following from the above description, for example. Note that, in order to facilitate understanding of each of the aspects, in the following, the reference signs of the figures will also be denoted in parentheses for convenience, but the present disclosure is not intended to be limited to the illustrated aspects.

Appendix 1

An electro-optical device (10) according to an aspect (Aspect 1) includes a first scanning line (12) disposed in an i-th row in a display region (100), a first pixel circuit (110) provided corresponding to the first scanning line (12) and a first data line (14) provided in a k-th column in the display region (100), and brought into an optical state in accordance with a voltage of the first data line (14) when the first scanning line (12) is selected, a second scanning line (12) disposed in an (i+1)-th row in the display region (100), and a second pixel circuit (110) provided corresponding to the second scanning line (12) and the first data line (14), and brought into an optical state in accordance with a voltage of the first data line (14) when the second scanning line (12) is selected, wherein i and k are integers, in a period of a first subframe period (odd frame period) of a frame period (V) in which the first scanning line (12) and the second scanning line (12) are selected, a data signal of a voltage corresponding to an i-th row and a k-th column of first image data (data of a top image) in the first subframe period (odd frame period) is output, and in a period of a second subframe period (even frame period) of the frame period (V) in which the first scanning line (12) and the second scanning line (12) are selected, a data signal of a voltage corresponding to an (i+1)-th row and the k-th column of second image data (data of a bottom image) in the second subframe period (even frame period) is output.

According to Aspect 1, since one scanning line is sufficient for one row, wiring in the display region in which the pixel circuits are arrayed can be avoided from being complicated. Displaying can be performed at a high frame rate while maintaining resolution.
Note that, the scanning line 12 in the i-th row is an example of the first scanning line, the scanning line 12 in the (i+1)-th row is an example of the second scanning line, and the data line 14 in the k-th column is an example of the first data line. In addition, the pixel circuit 110 in the i-th row and the j-th column is an example of the first pixel circuit, and the pixel circuit 110 in the (i+1)-th row and the j-th column is an example of the second pixel circuit. A period of one cycle specified by a vertical synchronization signal is an example of the frame period, the odd frame period is an example of the first subframe period, and the even frame period is an example of the second subframe period. The top image is an example of the first image, and the bottom image is an example of the second image.

Appendix 2

The electro-optical device (10) according to a specific aspect (Aspect 2) of Aspect 1 includes a scanning line drive circuit (120) configured to supply a scanning signal to the first scanning line (12) and the second scanning line (12), wherein the scanning line drive circuit (120) includes a first holding unit (Me1) holding information for setting each of the first scanning line (12) and the second scanning line (12) to a primary or a secondary, and, when information for specifying selection of the scanning line (12) set to the primary is supplied, supplies the primary scanning line (12) with a scanning signal indicating that the primary scanning line (12) is to be selected, and supplies the scanning line (12) set to the secondary with a scanning signal indicating that the scanning line (12) set to the secondary is to be selected. According to Aspect 2, single-row selection or two-row simultaneous selection of the scanning lines (12) can be realized by setting the primary and the secondary.
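As a behavioral illustration of Aspects 1 and 2, the sketch below models the holding unit Me1 as a per-row primary/secondary pairing: selecting a primary row also selects its paired secondary row, and the data output during that period comes from the top image in the odd subframe and the bottom image in the even subframe. The function names and list-based "image" data are hypothetical; real hardware does this with the scanning line drive circuit, not software.

```python
# Behavioral model of two-row simultaneous selection (Aspects 1 and 2).
# Rows are 0-indexed here; the patent's i-th/(i+1)-th rows map to pairs.

def drive_subframe(primary_rows, pairing, image, label):
    """For each primary row, select it together with its secondary row and
    output one data value per column taken from `image` (a list of rows)."""
    for p in primary_rows:
        s = pairing[p]                # secondary is selected simultaneously
        data_row = image[p]           # voltage corresponds to the primary row
        print(f"{label}: select rows {p} and {s}, output row {p} data {data_row}")

rows = 4
# Odd subframe: even rows are primary, each paired with the row below.
odd_pairing = {0: 1, 2: 3}
# Even subframe: odd rows are primary, paired with the row above (the swap
# described in Aspect 5).
even_pairing = {1: 0, 3: 2}

top_image = [[10 * r + c for c in range(3)] for r in range(rows)]      # 1st image
bottom_image = [[100 * r + c for c in range(3)] for r in range(rows)]  # 2nd image

drive_subframe(sorted(odd_pairing), odd_pairing, top_image, "odd subframe")
drive_subframe(sorted(even_pairing), even_pairing, bottom_image, "even subframe")
```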
Appendix 3

In the electro-optical device (10) according to a specific aspect (Aspect 3) of Aspect 2, each of the first pixel circuit (110) and the second pixel circuit (110) includes a first transistor (121), a second transistor (122), a third transistor (123), a fourth transistor (124), and a display element (130); the first transistor (121) includes a gate node, a source node, and a drain node, and causes a current in accordance with a voltage between the gate node and the source node to flow to the display element (130) via the fourth transistor (124); the second transistor (122) is provided between the first data line and the gate node of the first transistor, and is brought into an on-state or an off-state in accordance with selection or non-selection of a scanning line; the third transistor (123) is provided between the data line (14) and the drain node of the first transistor (121); and the fourth transistor (124) is provided between the drain node of the first transistor (121) and the display element (130). In the first subframe period (odd frame period), there is a period in which the gate node and the drain node of the first transistor (121) in the first pixel circuit (110) are electrically coupled, and there is no period in which the gate node and the drain node of the first transistor (121) in the second pixel circuit (110) are electrically coupled; in the second subframe period (even frame period), there is no period in which the gate node and the drain node of the first transistor (121) in the first pixel circuit (110) are electrically coupled, and there is a period in which the gate node and the drain node of the first transistor (121) in the second pixel circuit (110) are electrically coupled. According to Aspect 3, threshold compensation for the first transistor (121) is appropriately performed. Note that, the transistor 121 is an example of the first transistor, the transistor 122 is an example of the second transistor, the transistor 123 is an example of the third transistor, and the transistor 124 is an example of the fourth transistor.

Appendix 4

In the electro-optical device (10) according to a specific aspect (Aspect 4) of Aspect 3, the fourth transistor (124) of the first pixel circuit (110) is controlled to be in the on-state by selection of a first light emission control line (118), and the fourth transistor (124) of the second pixel circuit (110) is controlled to be in the on-state by selection of a second light emission control line (118); the scanning line drive circuit (120) includes a second holding unit (Me2) holding information for setting each of the first light emission control line (118) and the second light emission control line (118) to the primary or the secondary, supplies a light emission control signal to the first light emission control line (118) and the second light emission control line (118), and, when information specifying selection of the light emission control line (118) set to the primary is supplied, supplies the primary light emission control line (118) with a light emission control signal indicating that the primary light emission control line (118) is to be selected, and supplies the light emission control line (118) set to the secondary with a light emission control signal indicating that the light emission control line (118) set to the secondary is to be selected. According to Aspect 4, single-row selection or two-row simultaneous selection of the light emission control lines (118) can be realized by setting the primary and the secondary.
Note that, the light emission control line 118 in the i-th row is an example of the first light emission control line, and the light emission control line 118 in the (i+1)-th row is an example of the second light emission control line.

Appendix 5

The electro-optical device (10) according to a specific aspect (Aspect 5) of Aspect 4 includes a third pixel circuit (110) provided corresponding to a third scanning line (12) and the first data line (14), and a fourth pixel circuit (110) provided corresponding to a fourth scanning line (12) and the first data line (14); the first scanning line to the fourth scanning line are arrayed in this order; in the first subframe period (odd frame period), the first scanning line (12) and the third scanning line (12) are set to the primary, and, in a period in which the third scanning line (12) and the fourth scanning line (12) are selected, a data signal of a voltage corresponding to an (i+2)-th row and the k-th column of the first image data (data of the top image) is output; in the second subframe period (even frame period), the second scanning line (12) and the fourth scanning line (12) are set to the primary, and, in a period in which the third scanning line (12) and the fourth scanning line (12) are selected, a data signal of a voltage corresponding to an (i+3)-th row and the k-th column of the second image (bottom image) data is output. According to Aspect 5, in the first subframe period (odd frame period) and the second subframe period (even frame period), the primary and the secondary are switched between the third scanning line (12) and the fourth scanning line (12). Note that, the scanning line 12 in the (i+2)-th row is an example of the third scanning line, and the scanning line 12 in the (i+3)-th row is an example of the fourth scanning line. Additionally, the pixel circuit 110 in the (i+2)-th row and the j-th column is an example of the third pixel circuit, and the pixel circuit 110 in the (i+3)-th row and the j-th column is an example of the fourth pixel circuit.

Appendix 6

In the electro-optical device (10) according to a specific aspect (Aspect 6) of Aspect 5, a fourth transistor (124) of the third pixel circuit (110) is controlled to be in the on-state by selection of a third light emission control line (118), and a fourth transistor (124) of the fourth pixel circuit (110) is controlled to be in the on-state by selection of a fourth light emission control line (118); after one of the first light emission control line (118) and the second light emission control line (118) is set to the primary and the other is set to the secondary, one of the third light emission control line (118) and the fourth light emission control line (118) is set to the primary and the other is set to the secondary.
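The subframe-to-subframe swap in Aspect 5 can be summarized as a small mapping. The sketch below (hypothetical naming; i is set to 0 for illustration) prints, for each subframe, which row pair is selected together and which image row's data is presented during that selection period.

```python
# Aspect 5 summary: per subframe, which row of each pair is primary and
# which image row's data is output while the pair is selected.
i = 0
subframes = {
    "first (odd)":   {"primaries": [i, i + 2],     "image": "first image"},
    "second (even)": {"primaries": [i + 1, i + 3], "image": "second image"},
}
for name, sf in subframes.items():
    for p in sf["primaries"]:
        # In the odd subframe the secondary is the row below the primary;
        # in the even subframe it is the row above (the roles are swapped).
        partner = p + 1 if p in (i, i + 2) else p - 1
        pair = (min(p, partner), max(p, partner))
        print(f"{name} subframe: rows {pair} selected together; "
              f"data = {sf['image']} row {p}")
```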
Appendix 7

In the electro-optical device (10) according to a specific aspect (Aspect 7) of Aspect 6, the display region (100) includes a first region (a) and a second region (b) separated in a direction along the first scanning line (12), the second region (b) being positioned closer to a center than the first region (a); the second region (b) includes a fifth pixel circuit (110) provided corresponding to a fifth scanning line (12) and the first data line (14), a sixth pixel circuit (110) provided corresponding to a sixth scanning line (12) and the first data line (14), a seventh pixel circuit (110) provided corresponding to a seventh scanning line (12) and the first data line (14), an eighth pixel circuit (110) provided corresponding to an eighth scanning line (12) and the first data line (14), a ninth pixel circuit (110) provided corresponding to a ninth scanning line (12) and the first data line (14), and a tenth pixel circuit (110) provided corresponding to a tenth scanning line (12) and the first data line (14); the first scanning line to the tenth scanning line are arrayed in this order; in the first subframe period (odd frame period), the fifth scanning line (12), the sixth scanning line (12), the seventh scanning line (12), the ninth scanning line (12), and the tenth scanning line (12) are set to the primary, and the eighth scanning line (12) is set to the secondary of the seventh scanning line (12); and in the second subframe period (even frame period), the fifth scanning line (12), the sixth scanning line (12), the eighth scanning line (12), the ninth scanning line (12), and the tenth scanning line (12) are set to the primary, and the seventh scanning line (12) is set to the secondary of the eighth scanning line (12). According to Aspect 7, resolution in the second region is improved compared to the first region. Additionally, in the first subframe period (odd frame period) and the second subframe period (even frame period), the primary and the secondary are switched between the seventh scanning line (12) and the eighth scanning line (12). Note that, the region (a) is an example of the first region, and the region (b) is an example of the second region. The scanning lines 12 in the respective first to sixth rows in the region (b) are an example of the fifth to tenth scanning lines.

Appendix 8

The electro-optical device (10) according to a specific aspect (Aspect 8) of Aspect 7 includes an eleventh pixel circuit (110) provided corresponding to the first scanning line (12) and a second data line (14) different from the first data line (14); in the first subframe period (odd frame period), in a period in which the first scanning line (12) is selected, a data signal of a voltage corresponding to the i-th row and the k-th column of the first image data (data of the top image) is output to the second data line (14); and in the second subframe period (even frame period), in a period in which the second scanning line (12) is selected, a data signal of a voltage corresponding to the (i+1)-th row and the k-th column of the second image (bottom image) data is output to the second data line (14). According to Aspect 8, a data signal supplied to a data line is also compressed, and thus the data amount can be further reduced. Note that, the data line 14 of R or B belonging to an even column dot is an example of the second data line.

Appendix 9

An electronic apparatus according to Aspect 9 includes the electro-optical device according to any one of Aspects 1 to 8. | 91,363 |
11862104 | DETAILED DESCRIPTION

The advantages and features of the present disclosure and methods for accomplishing the same will be more clearly understood from embodiments described below with reference to the accompanying drawings. However, the present disclosure is not limited to the following embodiments but may be implemented in various different forms. Rather, the present embodiments will make the disclosure of the present disclosure complete and allow those skilled in the art to completely comprehend the scope of the present disclosure. The present disclosure is only defined within the scope of the accompanying claims.

The shapes, sizes, ratios, angles, numbers, and the like illustrated in the accompanying drawings for describing the embodiments of the present disclosure are merely examples, and the present disclosure is not limited thereto. Like reference numerals generally denote like elements throughout the present specification. Further, in describing the present disclosure, detailed descriptions of known related technologies may be omitted to avoid unnecessarily obscuring the subject matter of the present disclosure. The terms such as "comprising," "including," and "having" used herein are generally intended to allow other components to be added unless the terms are used with the term "only." Any reference to the singular may include the plural unless expressly stated otherwise. Components are interpreted to include an ordinary error range even if not expressly stated. When the position relation between two components is described using terms such as "on," "above," "below," and "next," one or more components may be positioned between the two components unless the terms are used with the term "immediately" or "directly." The terms "first," "second," and the like may be used to distinguish components from each other, but the functions or structures of the components are not limited by ordinal numbers or component names in front of the components. The following embodiments can be partially or entirely bonded to or combined with each other and can be linked and operated in technically various ways. The embodiments can be carried out independently of or in association with each other.

Hereinafter, various embodiments of the present disclosure will be described in detail with reference to the accompanying drawings.

FIG. 1 is a view illustrating a gate driver according to a first embodiment of the present disclosure.

Referring to FIG. 1, the gate driver according to the first embodiment of the present disclosure may include a first control node (hereinafter referred to as a "Q node Q(n)") that pulls up an output voltage, a second control node (hereinafter referred to as a "Qb node Qb(n)") that pulls down the output voltage, a controller 120-1, a first output unit 120-2, and a switch unit 120-4. The controller 120-1 may serve to charge and discharge the first control node and the second control node. The first output unit 120-2 may output a gate signal GOUT(n) in response to charging voltages of the first control node and the second control node. The first output unit 120-2 may include a first pull-up transistor T1 and a first pull-down transistor T2.
The first pull-up transistor T1 may output a gate high voltage to an output node in response to the charging voltage of the first control node, and the first pull-down transistor T2 may output a gate low voltage to the output node in response to the charging voltage of the second control node. The first output unit 120-2 may further include a first capacitor C1 between a gate electrode of the first pull-up transistor T1 and the output node.

The switch unit 120-4 may connect a first power line L1, to which a high potential voltage EVDD2 is applied, or a second power line L2, to which a first clock signal ECLK3 is applied, to the first pull-up transistor using a carry signal transmitted from a signal transmission unit of a previous stage and the charging voltage of the second control node. The switch unit 120-4 may include a third-1 transistor T31, a third-2 transistor T32, a third-3 transistor T33, a third-4 transistor T34, and a third-5 transistor T35.

The third-1 transistor T31 is turned on by the second control node to supply the high potential voltage EVDD2 to a first node n1. The third-1 transistor T31 includes a gate electrode connected to the second control node, a first electrode connected to the first power line L1 to which the high potential voltage EVDD2 is applied, and a second electrode connected to the first node n1. The third-2 transistor T32 is turned on by a carry signal Carry(n−1) transmitted from the signal transmission unit of the previous stage to connect the first node n1 to a fourth power line L4, to which a low potential voltage GVSS0 is applied, to discharge the first node n1. The third-2 transistor T32 includes a gate electrode to which the carry signal Carry(n−1) transmitted from the signal transmission unit of the previous stage is applied, a first electrode connected to the first node n1, and a second electrode connected to the fourth power line L4. The third-3 transistor T33 is turned on by the first node n1 to connect a second node n2 to a fifth power line L5, to which a second clock signal ECLK is applied, to supply the second clock signal ECLK. The third-3 transistor T33 includes a gate electrode connected to the first node n1, a first electrode connected to the fifth power line L5, and a second electrode connected to the second node n2. The third-4 transistor T34 is turned on by the carry signal Carry(n−1) transmitted from the signal transmission unit of the previous stage to connect the second node n2 to the fourth power line L4, to which the low potential voltage GVSS0 is applied, to discharge the second node n2. The third-4 transistor T34 includes a gate electrode to which the carry signal Carry(n−1) transmitted from the signal transmission unit of the previous stage is applied, a first electrode connected to the second node n2, and a second electrode connected to the fourth power line L4. The third-5 transistor T35 is turned on by the second node n2 to supply the first clock signal ECLK3 to a first output node GOUT(n). The third-5 transistor T35 includes a gate electrode connected to the second node n2, a first electrode connected to the second power line L2 to which the first clock signal ECLK3 is applied, and a second electrode connected to the first output node GOUT(n).
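The connection list just described can be captured compactly as data. The sketch below is a reading aid only (the netlist representation and helper are hypothetical, not the patent's notation); it records each switch-unit transistor's gate, first electrode, and second electrode so the routing in the passage can be checked at a glance.

```python
# Switch-unit connectivity from the passage, as a simple netlist.
# ECLK3 is the first clock signal and ECLK the second, per the text.
SWITCH_UNIT = {
    "T31": {"gate": "Qb(n)",      "first": "L1/EVDD2", "second": "n1"},
    "T32": {"gate": "Carry(n-1)", "first": "n1",       "second": "L4/GVSS0"},
    "T33": {"gate": "n1",         "first": "L5/ECLK",  "second": "n2"},
    "T34": {"gate": "Carry(n-1)", "first": "n2",       "second": "L4/GVSS0"},
    "T35": {"gate": "n2",         "first": "L2/ECLK3", "second": "GOUT(n)"},
}

def fanout(node: str):
    """List the transistors whose gate is driven by `node`."""
    return [name for name, t in SWITCH_UNIT.items() if t["gate"] == node]

print("driven by Carry(n-1):", fanout("Carry(n-1)"))  # discharge pair T32, T34
print("driven by Qb(n):     ", fanout("Qb(n)"))       # charge path into n1
```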
FIG. 2 is a view schematically illustrating the gate driver according to the embodiment of the present disclosure.

Referring to FIG. 2, the gate driver according to the embodiment includes a plurality of signal transmission units ST(n−2), ST(n−1), ST(n), ST(n+1), and ST(n+2) cascade-connected via a carry line through which a carry signal is transmitted. Each of the signal transmission units ST(n−2), ST(n−1), ST(n), ST(n+1), and ST(n+2) receives a start pulse or a carry signal output from the signal transmission unit of the previous stage, and receives the clock signals ECLK (including ECLK1-1, ECLK1-2, ECLK1-3, and ECLK1-4) and ECLK3 (including ECLK3-1, ECLK3-2, ECLK3-3, and ECLK3-4). A first signal transmission unit ST(1) starts to be driven according to a start pulse VST, and each of the other signal transmission units ST(n−2), ST(n−1), ST(n), ST(n+1), and ST(n+2) receives the carry signal Carry output from the signal transmission unit of the previous stage and starts to be driven. Each of the signal transmission units ST(n−2), ST(n−1), ST(n), ST(n+1), and ST(n+2) may charge the first control node using the second clock signal or an activation clock ECLK. Each of the signal transmission units ST(n−2), ST(n−1), ST(n), ST(n+1), and ST(n+2) may generate an output signal EMOUT(n−2) to EMOUT(n+2) using the first clock signal ECLK3. Here, the first clock signal ECLK3 may be an out-of-phase signal obtained by inverting a phase of the second clock signal ECLK.

FIG. 3 is a detailed circuit diagram illustrating a gate driver according to a second embodiment of the present disclosure. Transistors constituting the gate driver may be implemented as n-channel oxide TFTs. The circuit shown in FIG. 3 is a circuit of an n-th (n is a positive integer) signal transmission unit ST(n). Other signal transmission units may be implemented with substantially the same circuit as the n-th signal transmission unit ST(n). FIG. 4 is a waveform diagram illustrating voltages of input/output signals and control nodes of the gate driver shown in FIG. 3.

Referring to FIGS. 3 and 4, the gate driver according to the second embodiment includes a first control node (hereinafter referred to as a "Q(n) node"), a second control node (hereinafter referred to as a "Qb(n) node"), a controller 120-1, a first output unit 120-2, a second output unit 120-3, and a switch unit 120-4. The controller 120-1 may serve to charge and discharge the first control node and the second control node. The controller 120-1 includes a sixth transistor T6, a seventh transistor T7, an eighth transistor T8, a ninth transistor T9, a tenth transistor T10, an eleventh transistor T11, a twelfth transistor T12, and a thirteenth transistor T13.

The sixth transistor T6 is turned on to connect a carry signal node to a buffer node Qh when the activation clock ECLK is applied. The carry signal Carry(n−1) from the signal transmission unit of the previous stage is applied to the carry signal node. The carry signal Carry(n−1) may be output from a second output node of a previous signal transmission unit, for example, an (n−1)th signal transmission unit ST(n−1). The sixth transistor T6 includes a gate electrode to which the activation clock ECLK is applied, a first electrode connected to the carry signal node, and a second electrode connected to the buffer node Qh. The seventh transistor T7 is turned on to connect the buffer node Qh to a first control node Q(n) when the activation clock ECLK is applied. The seventh transistor T7 includes a gate electrode to which the activation clock ECLK is applied, a first electrode connected to the buffer node Qh, and a second electrode connected to the first control node Q(n).
The sixth and seventh transistors T6 and T7 are turned on by the high voltage of the activation clock ECLK while the activation clock ECLK is applied, charging the buffer node Qh and the first control node Q(n). The eighth transistor T8 is turned on to connect a sixth power line L6 to the buffer node Qh to charge the buffer node Qh when the first control node Q(n) is charged to a high voltage. A high potential voltage GVDD1 is applied to the sixth power line L6. The eighth transistor T8 includes a gate electrode connected to the first control node Q(n), a first electrode connected to the sixth power line L6, and a second electrode connected to the buffer node Qh.

The ninth transistor T9 is turned on to connect a second control node Qb(n) to a seventh power line L7 to discharge the second control node Qb(n) when a voltage of the buffer node Qh is a high voltage VDD. A low potential voltage GVSS2 is applied to the seventh power line L7. The ninth transistor T9 includes a gate electrode connected to the buffer node Qh, a first electrode connected to the second control node Qb(n), and a second electrode connected to the seventh power line L7. The tenth transistor T10 is turned on to connect the sixth power line L6 to a gate electrode of a twelfth transistor T12 when a voltage of the second control node Qb(n−1) of the signal transmission unit of the previous stage, that is, the (n−1)th signal transmission unit ST(n−1), is the high voltage VDD. The high potential voltage GVDD1 is applied to the sixth power line L6. The tenth transistor T10 includes a gate electrode connected to the second control node Qb(n−1) of the signal transmission unit of the previous stage, a first electrode connected to the sixth power line L6, and a second electrode connected to the gate electrode of the twelfth transistor T12 and a third node n3. The eleventh transistor T11 is turned on to connect the gate electrode of the twelfth transistor T12 to an eighth power line L8 when the voltage of the buffer node Qh is the high voltage VDD. A low potential voltage GVSS1 is applied to the eighth power line L8. The eleventh transistor T11 includes a gate electrode connected to the buffer node Qh, a first electrode connected to the gate electrode of the twelfth transistor T12, and a second electrode connected to the eighth power line L8.

The twelfth transistor T12 is turned on to connect the sixth power line L6 to the second control node Qb(n) when its gate voltage is the high voltage VDD. The high potential voltage GVDD1 is applied to the sixth power line L6. The twelfth transistor T12 includes the gate electrode connected to the second electrode of the tenth transistor T10 and the first electrode of the eleventh transistor T11, a first electrode connected to the sixth power line L6, and a second electrode connected to the second control node Qb(n). A second capacitor C2 may be connected between the gate electrode and the second electrode of the twelfth transistor T12. The thirteenth transistor T13 is turned on by the carry signal Carry(n−1) transmitted from the signal transmission unit of the previous stage to connect a ninth power line L9 to the second control node Qb(n). The thirteenth transistor T13 includes a gate electrode connected to the carry signal node, a first electrode connected to the second control node Qb(n), and a second electrode connected to the ninth power line L9.
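The charge and discharge paths of the controller (T6 to T13) can be restated as a purely behavioral sketch. This is a heavy simplification under stated assumptions: nodes are treated as booleans, transistors as ideal switches, bootstrapping via C2 and drive-strength ratios are ignored, and the level on L9 is assumed to be low; none of this is the patent's own model.

```python
# Purely behavioral model of the controller (T6-T13). Booleans stand in for
# node voltages; this only mirrors the charge/discharge paths in the passage.

def controller_step(eclk, carry_prev, qb_prev, state):
    q, qb, qh = state["Q"], state["Qb"], state["Qh"]
    if eclk:                      # T6/T7: carry node -> Qh -> Q(n)
        qh = q = carry_prev
    if q:                         # T8: GVDD1 holds Qh high while Q(n) is high
        qh = True
    if qh:                        # T9 discharges Qb(n); T11 pulls n3 low
        qb, n3 = False, False
    else:
        n3 = qb_prev              # T10: Qb(n-1) high charges n3 from GVDD1
    if n3:                        # T12: GVDD1 charges Qb(n)
        qb = True
    if carry_prev:                # T13: carry pulls Qb(n) to L9 (assumed low)
        qb = False
    return {"Q": q, "Qb": qb, "Qh": qh}

state = {"Q": False, "Qb": True, "Qh": False}
# One illustrative sequence: the carry arrives with ECLK, then the clock
# continues without a carry.
for eclk, carry in [(True, True), (False, False), (True, False)]:
    state = controller_step(eclk, carry, qb_prev=False, state=state)
    print(f"ECLK={eclk} Carry(n-1)={carry} -> {state}")
```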
The second output unit 120-3 outputs the carry signal Carry(n) by charging and discharging the second output node. The second output unit 120-3 includes a second pull-up transistor T4 and a second pull-down transistor T5. The second pull-up transistor T4 includes a gate electrode connected to the first control node Q(n), a first electrode connected to the sixth power line L6, and a second electrode connected to the second output node. The second pull-down transistor T5 is connected to the second pull-up transistor T4 with the second output node therebetween. The second pull-down transistor T5 includes a gate electrode connected to the second control node Qb(n), a first electrode connected to the second output node, and a second electrode connected to the ninth power line L9.

The first output unit 120-2 outputs an output signal EMOUT(n) by charging and discharging the first output node. The first output unit 120-2 includes a first pull-up transistor T1 driven by the first control node and a first pull-down transistor T2 driven by the second control node. The first pull-up transistor T1 outputs a gate high voltage to an output node in response to a charging voltage of the first control node. The first pull-up transistor T1 includes a gate electrode connected to the first control node, a first electrode connected to a first power line to which a high potential voltage EVDD2 is applied, and a second electrode connected to the first output node. The first pull-down transistor T2 may output a gate low voltage to the output node in response to a charging voltage of the second control node. The first pull-down transistor T2 includes a gate electrode connected to the second control node, a first electrode connected to the first output node, and a second electrode connected to a third power line L3 to which a low potential voltage GVSS0 is applied.

The gate driver according to the embodiment has a structure capable of selectively applying a CLK application method and a VDD application method by applying a pseudo-inverter structure using the carry signal Carry(n−1) transmitted from the signal transmission unit of the previous stage and the charging voltage of the second control node Qb(n). The gate driver according to the embodiment has a basic structure of the VDD application method, and achieves an excellent output characteristic while eliminating a floating section by forming a current path that applies the clock signal to the output node in a section in which the output signal maintains a low level, and applies the high potential voltage to the output node in a section in which the output signal maintains a high level.

FIG. 5 is a view for describing an output state of the switch unit shown in FIG. 3, and FIGS. 6A to 6C are views for describing a driving principle of the switch unit shown in FIG. 5.

Referring to FIG. 5, the switch unit 120-4 according to the embodiment of the present disclosure may connect the first power line L1, to which the high potential voltage EVDD2 is applied, or the second power line L2, to which the first clock signal ECLK3 is applied, to the first pull-up transistor using the carry signal Carry(n−1) transmitted from the signal transmission unit of the previous stage and the charging voltage of the second control node Qb(n).
For example, in a section in which the carry signal Carry(n−1) is at a high voltage level and the second control node Qb(n) is at a low voltage level, the first power line L1 may be connected to the first pull-up transistor T1, and in a section in which the carry signal Carry(n−1) is at a low voltage level and the second control node Qb(n) is at a high voltage level, the second power line L2 may be connected to the first pull-up transistor T1.

Referring to FIG. 6A, when the carry signal Carry(n−1) is at a high voltage level and the second control node Qb(n) is discharged and thus is at a low voltage level in a first section (①), the third-1 transistor T31, the third-3 transistor T33, and the third-5 transistor T35 are turned off, so the first node n1 and the second node n2 each maintain a low voltage level; and since the third-2 transistor T32 and the third-4 transistor T34 are turned on and the first power line L1 is thus connected to the first electrode of the first pull-up transistor T1, the high potential voltage EVDD2 is applied to the first output node.

Referring to FIG. 6B, when the carry signal Carry(n−1) is at a low voltage level and the second control node Qb(n) is charged and thus is at a high voltage level in a second section (②), the third-2 transistor T32 and the third-4 transistor T34 are turned off, and the third-1 transistor T31, the third-3 transistor T33, and the third-5 transistor T35 are turned on, so the first node n1 maintains the high voltage level and the second clock signal ECLK is applied to the second node n2. When the second clock signal ECLK is at a high voltage level, the third-3 transistor T33 is turned on, and a low voltage of the first clock signal ECLK3 is applied to the first output node. When the second clock signal ECLK is at a low voltage level, the third-3 transistor T33 is turned off, and a high voltage of the first clock signal ECLK3 is not applied to the first output node.

Referring to FIG. 6C, when the carry signal Carry(n−1) is at the high voltage level and the second control node Qb(n) is charged and thus is at the high voltage level in a third section (③), the third-1 transistor T31, the third-2 transistor T32, the third-3 transistor T33, and the third-4 transistor T34 are turned on, so the first node n1 and the second node n2 each maintain the low voltage level according to the width ratios of the transistors; and since the third-5 transistor T35 is turned off and the first output node is thus connected to neither the first power line L1 nor the second power line L2, the low voltage level of the second section (②) is maintained.

FIGS. 7A and 7B are views illustrating a simulation result using the gate driver shown in FIG. 3. Referring to FIGS. 7A and 7B, according to the simulation result using the gate driver according to the embodiment of the present disclosure, it can be seen that the output signal is normally output. Further, in the gate driver according to the embodiment, it can be seen that the falling time of the output signal is improved to 0.370 μs, a 38% improvement compared to 0.597 μs of a gate driver according to a comparative example.
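The three operating sections just described can be condensed into a small decision function. This is a behavioral sketch only, under stated simplifications: levels are booleans, switches are ideal, names are hypothetical, and the ratioed behavior of section ③ and the floating hold are represented simply by retaining the previous output.

```python
# Behavioral summary of the switch unit's three operating sections.
# Returns which source drives the first output node GOUT(n).

def switch_unit_output(carry_prev, qb, eclk, eclk3, prev_out):
    if carry_prev and not qb:      # section 1: L1 path, EVDD2 to the output
        return "EVDD2 (high)"
    if not carry_prev and qb:      # section 2: clocked path
        # Only the low phase of ECLK3 (the inverted clock) reaches the
        # output, i.e. while ECLK is high; otherwise the output holds.
        return "ECLK3 low" if eclk and not eclk3 else prev_out
    if carry_prev and qb:          # section 3: T35 off, the output floats
        return prev_out            # holds the previous low level
    return prev_out                # remaining combination: hold (assumption)

out = "low"
for carry, qb, eclk in [(True, False, False), (False, True, True),
                        (False, True, False), (True, True, False)]:
    out = switch_unit_output(carry, qb, eclk, eclk3=not eclk, prev_out=out)
    print(f"Carry(n-1)={carry} Qb(n)={qb} ECLK={eclk} -> drives {out}")
```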
FIG. 8 is a block diagram illustrating a display device according to an embodiment of the present disclosure, and FIG. 9 is a diagram illustrating a cross-sectional structure of the display panel shown in FIG. 8.

Referring to FIG. 8, the display device according to an embodiment of the present disclosure includes a display panel 100, a display panel driving circuit for writing pixel data to pixels of the display panel 100, and a power supply 140 for generating power necessary for driving the pixels and the display panel driving circuit.

The display panel 100 includes a pixel array AA that displays an input image. The pixel array AA includes a plurality of data lines 102, a plurality of gate lines 103 intersecting the data lines 102, and pixels arranged in a matrix form. The pixel array AA includes a plurality of pixel lines L1 to Ln. Each of the pixel lines L1 to Ln includes one line of pixels arranged along a line direction X in the pixel array AA of the display panel 100. Pixels arranged in one pixel line share the gate lines 103. Pixels arranged in a column direction Y along the data line direction share the same data line 102. One horizontal period 1H is a time obtained by dividing one frame period by the total number of pixel lines L1 to Ln.

Touch sensors may be disposed on the display panel 100. A touch input may be sensed using separate touch sensors or may be sensed through the pixels. The touch sensors may be disposed as an on-cell type or an add-on type on the screen of the display panel or implemented as in-cell type touch sensors embedded in the pixel array AA.

The display panel 100 may be implemented as a flexible display panel. The flexible display panel may be made of a plastic OLED panel. An organic thin film may be disposed on a back plate of the plastic OLED panel, and the pixel array AA may be formed on the organic thin film. The back plate of the plastic OLED may be a polyethylene terephthalate (PET) substrate. The organic thin film is formed on the back plate. The pixel array AA and a touch sensor array may be formed on the organic thin film. The back plate blocks moisture permeation so that the pixel array AA is not exposed to humidity. The organic thin film may be a thin polyimide (PI) film substrate. A multi-layered buffer film (not shown) may be formed of an insulating material on the organic thin film. Lines may be formed on the organic thin film so as to supply power or signals applied to the pixel array AA and the touch sensor array.

To implement color, each of the pixels may be divided into a red sub-pixel (hereinafter referred to as an "R sub-pixel"), a green sub-pixel (hereinafter referred to as a "G sub-pixel"), and a blue sub-pixel (hereinafter referred to as a "B sub-pixel"). Each of the pixels may further include a white sub-pixel. Each of the sub-pixels 101 includes a pixel circuit. The pixel circuit is connected to the data line 102 and the gate line 103. Hereinafter, a pixel may be interpreted as having the same meaning as a sub-pixel.

As shown in FIG. 9, when viewed in cross-section, the display panel 100 may include a circuit layer 12, a light emitting element layer 14, and an encapsulation layer 16 stacked on a substrate 10. The circuit layer 12 may include a pixel circuit connected to wirings such as a data line, a gate line, and a power line, a gate driver (GIP) connected to the gate lines, a de-multiplexer array 112, a circuit (not shown) for auto-probe inspection, and the like. The wirings and circuit elements of the circuit layer 12 may include a plurality of insulating layers, two or more metal layers separated with the insulating layer therebetween, and an active layer including a semiconductor material.
All transistors formed in the circuit layer 12 may be implemented as oxide TFTs having an n-channel type oxide semiconductor. The light emitting element layer 14 may include a light emitting element EL driven by a pixel circuit. The light emitting element EL may include a red (R) light emitting element, a green (G) light emitting element, and a blue (B) light emitting element. The light emitting element layer 14 may include a white light emitting element and a color filter. The light emitting elements EL of the light emitting element layer 14 may be covered by a protective layer including an organic film and a passivation film.

The encapsulation layer 16 covers the light emitting element layer 14 to seal the circuit layer 12 and the light emitting element layer 14. The encapsulation layer 16 may have a multilayered insulating structure in which an organic film and an inorganic film are alternately stacked. The inorganic film blocks the penetration of moisture and oxygen. The organic film planarizes the surface of the inorganic film. When the organic film and the inorganic film are stacked in multiple layers, a movement path of moisture or oxygen becomes longer compared to a single layer, so that penetration of moisture and oxygen affecting the light emitting element layer 14 can be effectively blocked.

A touch sensor layer may be disposed on the encapsulation layer 16. The touch sensor layer may include capacitive type touch sensors that sense a touch input based on a change in capacitance before and after the touch input. The touch sensor layer may include metal wiring patterns and insulating layers forming the capacitance of the touch sensors. The capacitance of the touch sensor may be formed between the metal wiring patterns. A polarizing plate may be disposed on the touch sensor layer. The polarizing plate may improve visibility and contrast ratio by converting the polarization of external light reflected by metal of the touch sensor layer and the circuit layer 12. The polarizing plate may be implemented as a polarizing plate in which a linear polarizing plate and a phase delay film are bonded, or as a circular polarizing plate. A cover glass may be adhered to the polarizing plate.

The display panel 100 may further include a touch sensor layer and a color filter layer stacked on the encapsulation layer 16. The color filter layer may include red, green, and blue color filters and a black matrix pattern. The color filter layer may replace the polarizing plate and increase the color purity by absorbing a part of the wavelength of light reflected from the circuit layer and the touch sensor layer. In this embodiment, by applying the color filter layer 20, which has a higher light transmittance than the polarizing plate, to the display panel, the light transmittance of the display panel 100 can be improved, and the thickness and flexibility of the display panel 100 can be improved. A cover glass may be adhered on the color filter layer.

The power supply 140 generates DC power required for driving the pixel array AA and the display panel driving circuit of the display panel 100 by using a DC-DC converter. The DC-DC converter may include a charge pump, a regulator, a buck converter, a boost converter, and the like.
The power supply 140 may adjust a DC input voltage from a host system (not shown) and thereby generate DC voltages such as a gamma reference voltage VGMA, gate-on voltages VGH and VEH, gate-off voltages VGL and VEL, a pixel driving voltage ELVDD, a pixel low-potential power supply voltage ELVSS, a reference voltage Vref, an initialization voltage Vinit, and an anode voltage Vano. The gamma reference voltage VGMA is supplied to a data driver 110. The gate-on voltages VGH and VEH and the gate-off voltages VGL and VEL are supplied to a gate driver 120. The pixel driving voltage ELVDD, the pixel low-potential power supply voltage ELVSS, the reference voltage Vref, the initialization voltage Vinit, and the anode voltage Vano are commonly supplied to the pixels.

The display panel driving circuit writes pixel data (digital data) of an input image to the pixels of the display panel 100 under the control of a timing controller (TCON) 130. The display panel driving circuit includes the data driver 110 and the gate driver 120. A de-multiplexer (DEMUX) array 112 may be disposed between the data driver 110 and the data lines 102. The de-multiplexer array 112 sequentially connects one channel of the data driver 110 to the plurality of data lines 102 and distributes, in a time-division manner, the data voltage output from one channel of the data driver 110 to the data lines 102, thereby reducing the number of channels of the data driver 110. The de-multiplexer array 112 may be omitted. In this case, output buffers (AMP) of the data driver 110 are directly connected to the data lines 102. The display panel driving circuit may further include a touch sensor driver for driving the touch sensors. The touch sensor driver is omitted from FIG. 8. In a mobile device, the timing controller 130, the power supply 140, the data driver 110, and the like may be integrated into one drive integrated circuit (IC).

The data driver 110 generates a data voltage Vdata every frame period by converting pixel data of an input image received from the timing controller 130 into a gamma compensation voltage by using a digital-to-analog converter (DAC). The gamma reference voltage VGMA is divided for the respective gray scales through a voltage divider circuit. The gamma compensation voltage divided from the gamma reference voltage VGMA is provided to the DAC of the data driver 110. The data voltage Vdata is output through the output buffer (AMP) in each of the channels of the data driver 110. In the data driver 110, the output buffer (AMP) included in one channel may be connected to adjacent data lines 102 through the de-multiplexer array 112. The de-multiplexer array 112 may be formed directly on the substrate of the display panel 100 or integrated into one drive IC together with the data driver 110.

The gate driver 120 may be implemented as a gate-in-panel (GIP) circuit formed directly on a bezel area BZ of the display panel 100 together with the TFT array of the pixel array AA. The gate driver 120 sequentially outputs gate signals to the gate lines 103 under the control of the timing controller 130. The gate driver 120 may sequentially supply the gate signals to the gate lines 103 by shifting the gate signals using a shift register. The gate signal may include a scan signal for selecting pixels of a line in which data is to be written in synchronization with the data voltage, and an EM signal defining an emission time of pixels charged with the data voltage.
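As a conceptual illustration of the shift-register behavior just described, the sketch below shifts a start pulse through a few stages, one per shift-clock tick, so that the gate lines are selected sequentially. The names are hypothetical; this models the sequencing only, not the GIP circuit itself.

```python
# Conceptual shift register: a start pulse is shifted one stage per clock,
# so stage k drives gate line k during clock period k (sequential output).

def gip_shift(num_lines: int, start_pulse_period: int = 0):
    stages = [False] * num_lines
    for t in range(num_lines):
        carry_in = (t == start_pulse_period)   # start pulse only at the first tick
        # Shift: each stage takes the previous stage's output (its carry).
        stages = [carry_in] + stages[:-1]
        active = [k for k, on in enumerate(stages) if on]
        print(f"clock {t}: gate line(s) {active} driven to the gate-on voltage")

gip_shift(num_lines=4)
```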
The gate driver 120 may include a scan driver 121, an EM driver 122, and an initialization driver 123. The scan driver 121 outputs a scan signal Scan in response to a start pulse and a shift clock from the timing controller 130, and shifts the scan signal Scan according to the shift clock timing. The EM driver 122 outputs an EM signal EM in response to a start pulse and a shift clock from the timing controller 130, and sequentially shifts the EM signal EM according to the shift clock timing. The initialization driver 123 outputs an initialization signal Vinit in response to a start pulse and a shift clock from the timing controller 130, and shifts the initialization signal Vinit according to the shift clock timing. Therefore, the scan signal Scan, the EM signal EM, and the initialization signal Vinit are sequentially supplied to the gate lines 103 of the pixel lines L1 to Ln. In the case of a bezel-free model, at least some of the transistors constituting the gate driver 120 and the clock wirings may be dispersedly disposed in the pixel array AA.

The timing controller 130 receives, from a host system (not shown), digital video data DATA of an input image and a timing signal synchronized therewith. The timing signal includes a vertical synchronization signal Vsync, a horizontal synchronization signal Hsync, a main clock CLK, a data enable signal DE, and the like. Because a vertical period and a horizontal period can be known by counting the data enable signal DE, the vertical synchronization signal Vsync and the horizontal synchronization signal Hsync may be omitted. The data enable signal DE has a cycle of one horizontal period (1H). The host system may be any one of a television (TV) system, a set-top box, a navigation system, a personal computer (PC), a home theater system, a vehicle system, and a mobile device system.

The timing controller 130 multiplies an input frame frequency by i and controls the operation timing of the display panel driving circuit with a frame frequency of the input frame frequency × i (i is a positive integer greater than 0) Hz. The input frame frequency is 60 Hz in the NTSC (National Television Standards Committee) scheme and 50 Hz in the PAL (Phase-Alternating Line) scheme.

Based on the timing signals Vsync, Hsync, and DE received from the host system, the timing controller 130 generates a data timing control signal for controlling the operation timing of the data driver 110, MUX signals MUX1 and MUX2 for controlling the operation timing of the de-multiplexer array 112, and a gate timing control signal for controlling the operation timing of the gate driver 120. The voltage level of the gate timing control signal output from the timing controller 130 may be converted into the gate-on voltages VGH and VEH and the gate-off voltages VGL and VEL through a level shifter (not shown) and then supplied to the gate driver 120. That is, the level shifter converts a low-level voltage of the gate timing control signal into the gate-off voltages VGL and VEL and converts a high-level voltage of the gate timing control signal into the gate-on voltages VGH and VEH. The gate timing control signal includes the start pulse and the shift clock.
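The timing controller's rate multiplication and the level shifter's voltage mapping reduce to simple arithmetic and a lookup. In the sketch below, the 60 Hz/50 Hz inputs come from the passage, while the helper names are placeholders; the rail names VGH/VEH/VGL/VEL are the ones the text defines.

```python
# Frame-rate multiplication and gate-timing level shifting (illustrative).
INPUT_HZ = {"NTSC": 60, "PAL": 50}      # input frame frequencies, per the text

def output_frame_rate(scheme: str, i: int) -> int:
    """The panel is driven at the input frequency x i (i a positive integer)."""
    return INPUT_HZ[scheme] * i

# Level shifter: a logic-level gate timing control signal becomes a gate-on
# rail when high and a gate-off rail when low.
LEVELS = {"high": {"scan": "VGH", "em": "VEH"},
          "low":  {"scan": "VGL", "em": "VEL"}}

def level_shift(logic_level: str, signal: str) -> str:
    return LEVELS[logic_level][signal]

print(output_frame_rate("NTSC", 2), "Hz")              # 120 Hz at i = 2
print(level_shift("high", "scan"), level_shift("low", "em"))
```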
FIG. 10 is a view illustrating a pixel circuit applied to the display panel shown in FIG. 8, and FIG. 11 is a waveform diagram illustrating a driving signal of the pixel circuit shown in FIG. 10.

Referring to FIGS. 10 and 11, the pixel circuit according to the embodiment of the present disclosure includes a light emitting element OLED, a driving element DT that drives the light emitting element OLED, a plurality of switch elements M1, M2, M3, M4, and M5 that switch a current path connected to the driving element DT, and a capacitor Cst that stores a gate-source voltage Vgs of the driving element DT. The driving element DT and the plurality of switch elements M1, M2, M3, M4, and M5 may be implemented as n-channel transistors.

The light emitting element EL emits light by a current applied through a channel of the driving element DT according to a gate-source voltage Vgs of the driving element DT that varies according to a data voltage Vdata. The light emitting element EL may be implemented as an OLED including an organic compound layer formed between an anode and a cathode. The organic compound layer may include, but is not limited to, a hole injection layer (HIL), a hole transport layer (HTL), a light emitting layer (EML), an electron transport layer (ETL), and an electron injection layer (EIL). The anode of the light emitting element EL is connected to the driving element DT through a fourth node n4, and the cathode of the light emitting element EL is connected to a power line to which a low-potential power voltage ELVSS is applied. An organic light emitting diode used as the light emitting element may have a tandem structure in which a plurality of light emitting layers are stacked. The organic light emitting diode having the tandem structure may improve the luminance and lifespan of the pixel.

A first switch element M1 is turned on according to a gate-on voltage VGH of a second scan signal Scan(n) to connect a first node n1 and a second node n2. The first switch element M1 includes a gate electrode to which the second scan signal Scan(n) is applied, a first electrode connected to the first node n1, and a second electrode connected to the second node n2. A second switch element M2 is turned on according to the gate-on voltage VGH of the second scan signal Scan(n) to supply a data voltage Vdata to a third node n3. The second switch element M2 includes a gate electrode to which the second scan signal Scan(n) is applied, a first electrode connected to the third node n3, and a second electrode to which the data voltage Vdata is applied. A third switch element M3 is turned on according to a gate-on voltage VGH of a first EM pulse EM1 to form a current path between a pixel driving voltage ELVDD and the driving element DT. The third switch element M3 includes a gate electrode to which the first EM pulse EM1 is applied, a first electrode to which the pixel driving voltage ELVDD is applied, and a second electrode connected to the second node n2. A fourth switch element M4 is turned on according to a gate-on voltage VGH of a second EM pulse EM2 to form a current path between the driving element DT and the light emitting element OLED. The fourth switch element M4 includes a gate electrode to which the second EM pulse EM2 is applied, a first electrode connected to the third node n3, and a second electrode connected to the fourth node n4.
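The gate assignments of M1 to M5 imply which switches conduct for a given set of drive signals (M5's gate connection to Scan(n−1) is given just below). The sketch tabulating this is a reading aid with hypothetical phase names; it is not a claim about the actual drive sequence in FIG. 11.

```python
# Which pixel-circuit switches conduct for a given set of gate signals.
# Gate assignments per the passage: M1, M2 <- Scan(n); M3 <- EM1;
# M4 <- EM2; M5 <- Scan(n-1). Phase names below are illustrative.
GATES = {"M1": "scan_n", "M2": "scan_n", "M3": "em1", "M4": "em2",
         "M5": "scan_prev"}

def conducting(signals: dict) -> list:
    return [sw for sw, gate in GATES.items() if signals.get(gate, False)]

phases = {
    "initialization": {"scan_prev": True},          # M5: Vinit1 -> n4
    "data write":     {"scan_n": True},             # M1, M2: Vdata sampling
    "emission":       {"em1": True, "em2": True},   # M3, M4: ELVDD -> OLED
}
for name, sig in phases.items():
    print(f"{name}: on-switches = {conducting(sig)}")
```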
A fifth switch element M5 is turned on according to a gate-on voltage VGH of a first scan signal Scan(n−1) to supply an initialization voltage Vinit1 to the fourth node n4. The fifth switch element M5 includes a gate electrode to which the first scan signal Scan(n−1) is applied, a first electrode to which the initialization voltage Vinit1 is applied, and a second electrode connected to the fourth node n4. The capacitor Cst is connected between the first node n1 and the fourth node n4. In a sensing operation, the threshold voltage Vth of the driving element DT is sensed and stored in the capacitor Cst.

The first EM pulse EM1 and the second EM pulse EM2 have the same pulse width, and the first EM pulse EM1 is a pulse whose phase leads that of the second EM pulse EM2. The first EM pulse EM1 and the second EM pulse EM2 are generated by the gate driver shown in FIG. 1 and the gate driver shown in FIG. 3. Here, an internal compensation circuit of an n-channel metal-oxide semiconductor (NMOS) is described as an example, but the present disclosure is not necessarily limited thereto, and all circuits that require the EM pulses of the gate driver according to the embodiment are applicable.

It will be apparent to those skilled in the art that various modifications and variations can be made in the gate driver and the display device including the same of the present disclosure without departing from the technical idea or scope of the disclosure. Thus, it is intended that the present disclosure cover the modifications and variations of this disclosure provided they come within the scope of the appended claims and their equivalents. | 38,649 |
11862105 | DETAILED DESCRIPTION

The advantages and features of the present disclosure and methods for accomplishing the same will be more clearly understood from embodiments described below with reference to the accompanying drawings. However, the present disclosure is not limited to the following embodiments but may be implemented in various different forms. Rather, the present embodiments will make the disclosure of the present disclosure complete and allow those skilled in the art to completely comprehend the scope of the present disclosure. The present disclosure is only defined within the scope of the accompanying claims.

The shapes, sizes, ratios, angles, numbers, and the like illustrated in the accompanying drawings for describing the embodiments of the present disclosure are merely examples, and the present disclosure is not limited thereto. Like reference numerals generally denote like elements throughout the present specification. Further, in describing the present disclosure, detailed descriptions of known related technologies may be omitted to avoid unnecessarily obscuring the subject matter of the present disclosure. The terms such as "comprising," "including," and "having" used herein are generally intended to allow other components to be added unless the terms are used with the term "only." Any reference to the singular may include the plural unless expressly stated otherwise. Components are interpreted to include an ordinary error range even if not expressly stated. When the position relation between two components is described using terms such as "on," "above," "below," and "next," one or more components may be positioned between the two components unless the terms are used with the term "immediately" or "directly." The terms "first," "second," and the like may be used to distinguish components from each other, but the functions or structures of the components are not limited by ordinal numbers or component names in front of the components. The following embodiments can be partially or entirely bonded to or combined with each other and can be linked and operated in technically various ways. The embodiments can be carried out independently of or in association with each other.

Each of the pixels may include a plurality of sub-pixels having different colors in order to reproduce the color of the image on a screen of the display panel. Each of the sub-pixels includes a transistor used as a switch element or a driving element. Such a transistor may be implemented as a thin film transistor (TFT). A driving circuit of the display device writes pixel data of an input image to the pixels on the display panel. To this end, the driving circuit of the display device may include a data driving circuit configured to supply data signals to the data lines, a gate driving circuit configured to supply a gate signal to the gate lines, and the like. In a display device of the present disclosure, the pixel circuit and the gate driving circuit may include a plurality of transistors. Transistors may be implemented as oxide thin film transistors (oxide TFTs) including an oxide semiconductor, low temperature polysilicon (LTPS) TFTs including low temperature polysilicon, or the like. In the embodiments, descriptions will be given based on an example in which the transistors of the pixel circuit and the gate driving circuit are implemented as n-channel oxide TFTs, but the present disclosure is not limited thereto.
Generally, a transistor is a three-electrode element including a gate, a source, and a drain. The source is an electrode that supplies carriers to the transistor. In the transistor, carriers start to flow from the source. The drain is an electrode through which carriers exit from the transistor. In a transistor, carriers flow from the source to the drain. In the case of an n-channel transistor, since the carriers are electrons, the source voltage is lower than the drain voltage such that electrons may flow from the source to the drain. In the n-channel transistor, the current flows from the drain to the source. In the case of a p-channel transistor (p-channel metal-oxide semiconductor, PMOS), since the carriers are holes, the source voltage is higher than the drain voltage such that holes may flow from the source to the drain. In the p-channel transistor, since holes flow from the source to the drain, the current flows from the source to the drain. It should be noted that the source and the drain of a transistor are not fixed. For example, the source and the drain may be changed according to an applied voltage. Therefore, the disclosure is not limited by the source and the drain of a transistor. In the following description, the source and the drain of a transistor will be referred to as a first electrode and a second electrode.

A gate signal swings between a gate-on voltage and a gate-off voltage. The gate-on voltage is set to a voltage higher than a threshold voltage of a transistor, and the gate-off voltage is set to a voltage lower than the threshold voltage of the transistor. The transistor is turned on in response to the gate-on voltage and is turned off in response to the gate-off voltage. In the case of an n-channel transistor, the gate-on voltage may be a gate high voltage, and the gate-off voltage may be a gate low voltage.

Hereinafter, various embodiments of the present disclosure will be described in detail with reference to the accompanying drawings. In the following embodiments, a display device will be described focusing on an organic light emitting display device, but the present disclosure is not limited thereto. Also, the scope of this disclosure is not intended to be limited by the names of components or signals in the following embodiments and claims.

Referring to FIGS. 1 and 2, a display device according to an embodiment of the present disclosure includes a display panel 100 and a display panel driver for writing pixel data to pixels of the display panel 100. The display panel 100 may be a panel having a rectangular structure with a length in the X-axis direction, a width in the Y-axis direction, and a thickness in the Z-axis direction. The display panel 100 includes a pixel array that displays an input image on a screen. The pixel array may be divided into a plurality of pixel regions including a first pixel region A and a second pixel region A′ whose addressing periods are separated based on an inversion timing of a low-potential power supply voltage ELVSS. The pixel array includes a plurality of data lines 102, a plurality of gate lines 103 that intersect the plurality of data lines 102, and pixels 101 arranged in a matrix form.
The display panel 100 may further include power lines commonly connected to the pixels. The power lines supply to the pixels 101 a voltage required for driving the pixels 101. For example, the display panel 100 may include a VDD line to which a pixel driving voltage ELVDD is applied and a VSS line to which a low-potential power supply voltage ELVSS is applied. The power lines may further include a reference (REF) line through which a reference voltage Vref is applied and an initialization (INIT) line through which an initialization voltage Vinit is applied. The cross-sectional structure of the display panel 100 may include a circuit layer 12, a light emitting element layer 14, and an encapsulation layer 16 stacked on a substrate 10, as shown in FIG. 2 according to one embodiment. The circuit layer 12 may include a TFT array including a pixel circuit connected to wirings such as a data line, a gate line, and a power line, a de-multiplexer array 112, a gate driver 120, and the like. The wirings and circuit elements of the circuit layer 12 may include a plurality of insulating layers, two or more metal layers separated with the insulating layers therebetween, and an active layer having a semiconductor material. All transistors formed in the circuit layer 12 may be implemented as n-channel oxide TFTs, but the present disclosure is not limited thereto. The light emitting element layer 14 may include a light emitting element EL driven by a pixel circuit. The light emitting element EL may include a red (R) light emitting element, a green (G) light emitting element, and a blue (B) light emitting element. In another embodiment, the light emitting element layer 14 may include a white light emitting element and a color filter. The light emitting elements EL of the light emitting element layer 14 may be covered by a multi-passivation layer including an organic film and an inorganic film. The encapsulation layer 16 covers the light emitting element layer 14 to seal the circuit layer 12 and the light emitting element layer 14. The encapsulation layer 16 may have a multilayered insulating structure in which an organic film and an inorganic film are alternately stacked. The inorganic film blocks or at least reduces the penetration of moisture and oxygen. The organic film planarizes the surface of the inorganic film. When the organic film and the inorganic film are stacked in multiple layers, a movement path of moisture or oxygen becomes longer compared to a single layer, so that penetration of moisture and oxygen affecting the light emitting element layer 14 can be effectively blocked or at least reduced. A touch sensor layer (not shown) may be formed on the encapsulation layer 16, and a polarizing plate or a color filter layer may be disposed thereon. The touch sensor layer may include capacitive touch sensors that sense a touch input based on a change in capacitance before and after the touch input. The touch sensor layer may include metal wiring patterns and insulating films forming the capacitance of the touch sensors. The insulating films may insulate portions where the metal wiring patterns intersect, and may planarize the surface of the touch sensor layer. The polarizing plate may improve visibility and contrast ratio by converting the polarization of external light reflected by metal in the touch sensor layer and the circuit layer. The polarizing plate may be implemented as a circular polarizing plate or a polarizing plate in which a linear polarizing plate and a phase retardation film are bonded.
A cover glass may be adhered to the polarizing plate. The color filter layer may include red, green, and blue color filters. The color filter layer may further include a black matrix pattern. The color filter layer may replace the polarizing plate by absorbing a part of the wavelength of light reflected from the circuit layer and the touch sensor layer, and may increase the color purity of an image reproduced in the pixel array. The pixel array includes a plurality of pixel lines L1 to Ln. Each of the pixel lines L1 to Ln includes one line of pixels arranged along the line direction (X-axis direction) in the pixel array of the display panel 100. Pixels arranged in one pixel line share the same gate line 103. Sub-pixels arranged in the column direction Y along the data line direction share the same data line 102. One horizontal period is a time obtained by dividing one frame period by the total number of pixel lines L1 to Ln; a worked example follows this passage. The display panel 100 may be implemented as a non-transmissive display panel or a transmissive display panel. The transmissive display panel may be applied to a transparent display device in which an image is displayed on a screen and an actual background is visible. The display panel 100 may be manufactured as a flexible display panel. Each of the pixels 101 may be divided into a red sub-pixel, a green sub-pixel, and a blue sub-pixel for color implementation. Each of the pixels may further include a white sub-pixel. Each of the sub-pixels includes a pixel circuit. Hereinafter, a pixel may be interpreted as having the same meaning as a sub-pixel. Each of the pixel circuits is connected to data lines, gate lines, and power lines. The pixels may be arranged as real color pixels or pentile pixels. A real color pixel includes a red sub-pixel, a green sub-pixel, and a blue sub-pixel. A pentile pixel may realize a higher resolution than the real color pixel by driving two sub-pixels having different colors as one pixel 101 through the use of a preset pixel rendering algorithm. The pixel rendering algorithm may compensate for insufficient color representation in each pixel with the color of light emitted from an adjacent pixel. The display panel driver writes the pixel data of the input image to the pixels of the display panel 100 under the control of the timing controller 130. The display panel driver maintains the low-potential power supply voltage ELVSS at a light-off voltage during a first addressing period in which pixel data is sequentially written to pixels of the first pixel region A one pixel line at a time. The display panel driver maintains the low-potential power supply voltage ELVSS at a light-on voltage during a second addressing period in which pixel data is sequentially written to pixels of the second pixel region A′ one pixel line at a time. The display panel driver inverts the low-potential power supply voltage ELVSS from the light-off voltage to the light-on voltage between the first addressing period and the second addressing period. The pixels may emit light when the low-potential power supply voltage ELVSS is the light-on voltage. The display panel driver includes a data driver 110, a gate driver 120, a power supply 140, and a timing controller 130 according to one embodiment. The display panel driver may further include a de-multiplexer array 112 disposed between the data driver 110 and the data lines 102. The power supply 140 generates direct current (DC) power required for driving the pixel array and the display panel driver of the display panel 100 by using a DC-DC converter.
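As the worked example of the one-horizontal-period relation noted above (one horizontal period equals one frame period divided by the total number of pixel lines), consider the following Python sketch. The 60 Hz, 1 Hz, and 2,400-line figures are illustrative assumptions only, not values taken from the embodiments.

def one_horizontal_period_us(frame_hz, num_pixel_lines):
    # One frame period, in microseconds, divided by the number of
    # pixel lines L1 to Ln gives one horizontal period (1H).
    frame_period_us = 1_000_000.0 / frame_hz
    return frame_period_us / num_pixel_lines

# Normal driving at 60 Hz with 2,400 pixel lines: about 6.9 us per line.
print(one_horizontal_period_us(60, 2400))
# A low-speed driving mode at 1 Hz stretches 1H by a factor of 60.
print(one_horizontal_period_us(1, 2400))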
The DC-DC converter may include a charge pump, a regulator, a buck converter, a boost converter, and the like. The power supply 140 may adjust the level of a DC input voltage applied from a host system (not shown) and generate voltages such as a gamma reference voltage VGMA, a gate-on voltage, a gate-off voltage, the pixel driving voltage ELVDD, the low-potential power supply voltage ELVSS, an initialization voltage Vinit, and the reference voltage Vref. The gamma reference voltage VGMA is supplied to the data driver 110. The gate-on voltage and the gate-off voltage are supplied to the gate driver 120. The voltages such as the pixel driving voltage ELVDD, the low-potential power supply voltage ELVSS, the initialization voltage Vinit, and the reference voltage Vref are supplied to the pixels 101 through the power lines commonly connected to the pixels 101. The power supply 140 may change output voltages under the control of the timing controller 130. For example, the power supply 140 may generate a preset light-off voltage during a light-off period for suppressing emission of pixels, and may generate a light-on voltage lower than the light-off voltage during a light-on period in which light emission of pixels is allowed. The de-multiplexer array 112 sequentially supplies the data voltages outputted from channels of the data driver 110 to the data lines 102 using a plurality of de-multiplexers DEMUX. Each of the de-multiplexers may include a plurality of switch elements disposed on the display panel 100. When the de-multiplexer is disposed between the output terminals of the data driver 110 and the data lines 102, the number of channels of the data driver 110 may be reduced. The de-multiplexer array 112 may be omitted. The display panel driver may further include a touch sensor driver for driving the touch sensors. The touch sensor driver is omitted from FIG. 1. The data driver 110 and the touch sensor driver may be integrated into one drive integrated circuit (IC). In mobile devices or wearable devices, the timing controller 130, the power supply 140, the data driver 110, and the like may be integrated into one drive IC. The display panel driver may operate in a low-speed driving mode under the control of the timing controller 130. The low-speed driving mode may be set to reduce power consumption of the display device when an input image does not change during a preset number of frames as a result of analyzing the input image. In the low-speed driving mode, the power consumption of the display panel driver and the display panel 100 may be reduced by lowering a refresh rate (e.g., a frame frequency of the pixels) when a still image is inputted for a predetermined time or longer. The low-speed driving mode is not limited to a case where the still image is inputted. For example, when the display device operates in a standby mode or when a user command or an input image is not inputted to the display panel driver for a predetermined time or longer, the display panel driver may operate in the low-speed driving mode. The data driver 110 receives pixel data of the input image as a digital signal from the timing controller 130 and outputs a data voltage. The data driver 110 generates the data voltage Vdata by converting the pixel data of the input image into a gamma compensation voltage every frame period using a digital to analog converter (DAC). The gamma reference voltage VGMA is divided into gamma compensation voltages for each grayscale through a voltage divider circuit.
The gamma compensation voltage for each grayscale is provided to the DAC in the data driver 110. The data voltage Vdata is outputted through an output buffer from each of the channels of the data driver 110. The gate driver 120 may be implemented as a gate in panel (GIP) circuit formed in the circuit layer 12 on the display panel 100 together with the TFT array of the pixel array and wirings. The gate driver 120 may be disposed on a bezel BZ, which is a non-display region of the display panel 100, or may be distributed in the pixel array in which an input image is reproduced. The gate driver 120 sequentially outputs gate signals to the gate lines 103 under the control of the timing controller 130. The gate driver 120 may sequentially supply the gate signals to the gate lines 103 by shifting the gate signals using a shift register. The gate signals may include various gate pulses, such as a scan pulse, an initialization pulse, a sensing pulse, and the like. The timing controller 130 receives digital video data DATA of an input image, and a timing signal synchronized with the digital video data, from the host system. The timing signal may include a vertical synchronization signal Vsync, a horizontal synchronization signal Hsync, a clock CLK, and a data enable signal DE. Because a vertical period and a horizontal period may be known by counting the data enable signal DE, the vertical synchronization signal Vsync and the horizontal synchronization signal Hsync may be omitted. The data enable signal DE has a cycle of one horizontal period (1H). The host system may be one of a television (TV) system, a tablet computer, a notebook computer, a navigation system, a personal computer (PC), a home theater system, a mobile device, a wearable device, and a vehicle system. The host system may scale the image signal from the video source to fit the resolution of the display panel 100, and may transmit it to the timing controller 130 together with the timing signal. The host system may adjust the overall luminance of images reproduced on the display panel by determining the luminance of the surrounding environment based on an output signal of a luminance sensor. The host system may change a global dimming duty ratio according to a luminance value, such as a display brightness value (DBV) or a peak luminance control (PLC) value, which varies with the screen brightness designated by the user. The host system may classify a normal driving mode for the display device into an outdoor mode, a normal mode, a night mode, a power saving mode, and the like, and may change the global dimming duty ratio for each mode. The host system or the timing controller 130 may vary the global dimming duty ratio based on the average picture level (APL) of the input image, or may vary the global dimming duty ratio between a still image and a moving image by detecting the movement of an object in the input image to determine whether there is movement. The timing controller 130 may multiply the input frame frequency by i (i being a natural number) in the normal driving mode, so that it can control the operation timing of the display panel driver at a frame frequency of the input frame frequency × i Hz. The input frame frequency is 60 Hz in a national television standards committee (NTSC) system and 50 Hz in a phase-alternating line (PAL) system. For example, the display panel driver may address pixel data to the pixels 101 with a frame frequency of 120 Hz or higher under the control of the timing controller 130.
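The gamma path described above, in which the gamma reference voltage VGMA is divided into per-grayscale compensation voltages that the DAC then selects according to the pixel data, can be sketched as follows. The linear divider and the tap voltages are placeholder assumptions; an actual panel tunes the gamma taps per gray level.

def build_gamma_table(v_top, v_bottom, levels=256):
    # Divide the gamma reference voltage into one compensation
    # voltage per gray level, as a voltage-divider string would.
    step = (v_top - v_bottom) / (levels - 1)
    return [v_bottom + step * gray for gray in range(levels)]

def dac_output(gamma_table, pixel_data):
    # The DAC maps a digital gray level to its gamma compensation
    # voltage, which becomes the data voltage Vdata.
    return gamma_table[pixel_data]

table = build_gamma_table(v_top=4.6, v_bottom=0.1)
print(dac_output(table, 128))  # Vdata for gray level 128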
In order to lower the refresh rate of the pixels in the low-speed driving mode, the timing controller 130 may lower the driving frequency of the display panel driver by lowering the frame frequency to a frequency between 1 Hz and 30 Hz. Based on the timing signals Vsync, Hsync, and DE received from the host system, the timing controller 130 generates a data timing control signal for controlling the operation timing of the data driver 110, a control signal for controlling the operation timing of the de-multiplexer array 112, and a gate timing control signal for controlling the operation timing of the gate driver 120. The timing controller 130 synchronizes the data driver 110, the de-multiplexer array 112, the touch sensor driver, and the gate driver 120 by controlling the operation timing of the display panel driver. The gate timing control signal generated from the timing controller 130 may be inputted to the shift registers of the gate driver 120 through a level shifter (not shown). The level shifter may receive the gate timing control signal, generate a start pulse and a shift clock, and provide them to the shift registers of the gate driver 120. The timing controller 130 may vary the global dimming duty ratio for every frame by varying the duty ratio of the low-potential power supply voltage ELVSS commonly applied to the pixels 101. The timing controller 130 controls a duty ratio, which is a ratio between a duration of the light-on voltage and a duration of the light-off voltage in the low-potential power supply voltage ELVSS, according to the global dimming duty ratio. The duty ratio of the low-potential power supply voltage ELVSS is substantially the same as the global dimming duty ratio according to one embodiment. When the duty ratio of the low-potential power supply voltage ELVSS is changed, the boundary position between the first pixel region A and the second pixel region A′ is changed on the screen of the display panel 100. For example, when the duty ratio of the low-potential power supply voltage ELVSS is less than a predetermined threshold, the size of the second pixel region A′ on the screen of the display panel 100 is reduced, so that the boundary between the first pixel region A and the second pixel region A′ may move down on the screen (e.g., closer to the bottom of the screen). On the other hand, when the duty ratio of the low-potential power supply voltage ELVSS is greater than the predetermined threshold, the size of the second pixel region A′ on the screen of the display panel 100 increases, so that the boundary between the first pixel region A and the second pixel region A′ may move up on the screen (e.g., closer to the top of the screen). An illustrative sketch of this boundary relation follows this passage. Due to device characteristic deviations and process deviations caused in the manufacturing process of the display panel 100, there may be differences in the electrical characteristics of the driving element among pixels, and such differences may increase as the driving time of the pixels elapses. In order to compensate for variations in the electrical characteristics of the driving elements between pixels, an internal compensation circuit may be embedded in the pixel circuit or an external compensation circuit may be connected to the pixel circuit. The internal compensation circuit samples the electrical characteristics of the driving element for each sub-pixel by using the internal compensation circuit implemented in each pixel circuit and compensates the gate-source voltage Vgs of the driving element by the sampled electrical characteristics.
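As to the boundary relation flagged above: the embodiments state only that a higher duty ratio advances the ELVSS inversion time point and therefore moves the Ith pixel line (the start of the second pixel region A′) toward the top of the screen. The linear mapping in the Python sketch below is an assumption for illustration, not a formula given by the embodiments.

def boundary_pixel_line(duty_ratio, num_pixel_lines):
    # duty_ratio is the fraction of the addressing period during
    # which ELVSS is at the light-on voltage. A higher duty ratio
    # inverts ELVSS earlier, so the boundary moves up the screen.
    assert 0.0 < duty_ratio <= 1.0
    i = round(num_pixel_lines * (1.0 - duty_ratio))
    return max(1, min(i, num_pixel_lines))  # clamp to a valid line index

for duty in (0.25, 0.50, 0.75):
    print(duty, boundary_pixel_line(duty, 2400))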
The external compensation circuit compensates for changes in the electrical characteristics of the driving element by generating a compensation value based on a result of sensing the electrical characteristics of the driving element through a sensing path connected to the pixel circuit. The external compensation circuit includes a REF line (or a sensing line) connected to the pixel circuit, and an analog to digital converter (ADC) that converts the sensing voltage stored in the REF line into digital data. The sensing voltage may reflect the electrical characteristics of the driving element DT, for example, a threshold voltage and/or mobility. An integrator may be connected to the input terminal of the ADC. The timing controller 130 to which the external compensation circuit is applied may generate a compensation value for compensating for a change in the electrical characteristics of the driving element DT according to the sensing data inputted from the ADC, and may compensate for the change in the electrical characteristics of the driving element DT by adding the compensation value to, or multiplying it by, the pixel data of the input image. The ADC may be embedded in the data driver 110. The pixel circuit of the present disclosure may include the internal compensation circuit or may be connected to the external compensation circuit, without an EM switch element. The pixel circuit may also include the internal compensation circuit and be connected to the external compensation circuit, without an EM switch element. FIG. 3 is a circuit diagram illustrating a pixel circuit according to an embodiment of the present disclosure. Referring to FIG. 3, the pixel circuit includes a light emitting element EL, a driving element DT for driving the light emitting element EL, a capacitor Cst connected between a second node DRG and a third node DRS, and a plurality of switch elements M01 and M02. In the pixel circuit shown in FIG. 3, the driving element DT and the switch elements M01 and M02 may be implemented as n-channel oxide TFTs. A voltage, such as the pixel driving voltage ELVDD, the low-potential power supply voltage ELVSS, the reference voltage Vref, or the like, is applied to this pixel circuit. The pixel driving voltage ELVDD is greater than the low-potential power supply voltage ELVSS. A gate-on voltage may be set to a voltage higher than the pixel driving voltage ELVDD. The reference voltage Vref may be set to a voltage that is less than the low-potential power supply voltage ELVSS. A gate-off voltage may be set to a voltage that is less than the reference voltage Vref. The low-potential power supply voltage ELVSS may be generated as an alternating current (AC) voltage that swings between a light-on voltage and a light-off voltage. When the low-potential power supply voltage ELVSS rises to the light-off voltage, a voltage difference between an anode electrode and a cathode electrode of the light emitting element EL becomes less than a threshold voltage of the light emitting element EL, so that the light emitting element EL cannot emit light. The gate driver 120 may include a first shift register that sequentially outputs a scan pulse SCAN. The gate driver 120 may further include a second shift register that sequentially outputs a sensing pulse SENSE. The light emitting element EL may be implemented as an OLED including an anode electrode, a cathode electrode, and an organic compound layer connected between the anode electrode and the cathode electrode.
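Returning to the external compensation path described above, the flow of digitized sensing data into a compensation value that is added to or multiplied with the pixel data can be sketched as follows. The offset/gain model, the reference values, and the 8-bit clamp are illustrative assumptions, not values fixed by the embodiments.

def compensation_value(sensed_vth, sensed_mobility, ref_vth, ref_mobility):
    # An offset (assumed here to be pre-scaled to gray levels)
    # corrects threshold-voltage drift; a gain corrects a mobility
    # deviation of the driving element DT.
    offset = ref_vth - sensed_vth
    gain = ref_mobility / sensed_mobility
    return offset, gain

def compensate_pixel_data(data, offset, gain):
    # Apply the compensation by multiplying and adding, then clamp
    # back into the valid gray-level range.
    return max(0, min(255, round(data * gain + offset)))

offset, gain = compensation_value(sensed_vth=1.2, sensed_mobility=9.5,
                                  ref_vth=1.0, ref_mobility=10.0)
print(compensate_pixel_data(128, offset, gain))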
The organic compound layer may include, but is not limited to, a hole injection layer (HIL), a hole transport layer (HTL), an emission layer (EML), an electron transport layer (ETL), and an electron injection layer (EIL). When a voltage is applied to the anode and cathode electrodes, holes passing through the hole transport layer HTL and electrons passing through the electron transport layer ETL move to the emission layer EML to form excitons. At this time, visible light may be emitted from the emission layer EML. The OLED used as the light emitting element EL may have a tandem structure in which a plurality of emission layers are stacked. The OLED of the tandem structure can improve the luminance and lifespan of pixels. The anode electrode of the light emitting element EL may be connected to the third node DRS, and the cathode electrode may be connected to the VSS line to which the low-potential power supply voltage ELVSS is applied. The light emitting element EL includes a capacitor CEL formed between the anode electrode and the cathode electrode. The driving element DT generates an electric current for driving the light emitting element EL according to the gate-source voltage Vgs. The driving element DT includes a gate electrode connected to the second node DRG, a first electrode connected to the first node DRD to which the pixel driving voltage ELVDD is applied, and a second electrode connected to the third node DRS. The capacitor Cst is connected between the second node DRG and the third node DRS. The gate-source voltage Vgs of the driving element DT is charged in the capacitor Cst. The first switch element M01 is turned on according to the gate-on voltage of the scan pulse SCAN to supply the data voltage Vdata to the second node DRG. The first switch element M01 includes a gate electrode connected to the first gate line to which the scan pulse SCAN is applied, a first electrode connected to the data line to which the data voltage Vdata is applied, and a second electrode connected to the second node DRG. The second switch element M02 is turned on according to the gate-on voltage of the scan pulse SCAN or the sensing pulse SENSE to apply the reference voltage Vref to the third node DRS. The second switch element M02 includes a gate electrode connected to a second gate line to which the scan pulse SCAN or the sensing pulse SENSE is applied, a first electrode connected to the third node DRS, and a second electrode connected to the REF line to which the reference voltage Vref is applied. The REF line may be connected to the external compensation circuit. In this case, the voltage of the third node DRS, which reflects the electrical characteristics of the driving element DT, is stored in the capacitor on the REF line, and the voltage of the REF line is converted into digital data through an ADC. The electrical characteristics of the driving element DT may include a threshold voltage and mobility. FIG. 4 is a circuit diagram illustrating a pixel circuit according to another embodiment of the present disclosure. Referring to FIG. 4, the pixel circuit includes a light emitting element EL, a driving element DT for supplying an electric current to the light emitting element EL, a capacitor Cst connected between a second node DRG and a third node DRS, and a plurality of switch elements M11, M12, and M13. In this pixel circuit, the driving element DT and the switch elements M11, M12, and M13 may be implemented as n-channel oxide TFTs.
A voltage, such as the pixel driving voltage ELVDD, the low-potential power supply voltage ELVSS, the reference voltage Vref, the initialization voltage Vinit, or the like, is applied to this pixel circuit. The pixel driving voltage ELVDD is greater than the low-potential power supply voltage ELVSS. A gate-on voltage may be set to a voltage greater than the pixel driving voltage ELVDD. A gate-off voltage may be set to a voltage less than the low-potential power supply voltage ELVSS. The reference voltage Vref may be set to a voltage less than the low-potential power supply voltage ELVSS and higher than the gate-off voltage. The initialization voltage Vinit is set to a voltage at which the driving element DT is turned on, which is less than the pixel driving voltage ELVDD and equal to or greater than the data voltage Vdata of a half gray scale. The low-potential power supply voltage ELVSS may be generated as an AC voltage that swings between a light-on voltage and a light-off voltage. When the low-potential power supply voltage ELVSS rises to the light-off voltage, a voltage difference between an anode electrode and a cathode electrode of the light emitting element EL becomes less than a threshold voltage of the light emitting element EL, so that the light emitting element EL cannot emit light. The gate driver 120 may include a first shift register that sequentially outputs a first scan pulse SCAN1, a second shift register that sequentially outputs a second scan pulse SCAN2, and a third shift register that sequentially outputs a third scan pulse SCAN3. The light emitting element EL may be implemented as an OLED including an anode electrode, a cathode electrode, and an organic compound layer connected between these electrodes. The anode electrode of the light emitting element EL may be connected to the third node DRS, and the cathode electrode thereof may be connected to the VSS line to which the low-potential power supply voltage ELVSS is applied. The light emitting element EL includes a capacitor CEL formed between the anode electrode and the cathode electrode. The driving element DT generates an electric current for driving the light emitting element EL according to the gate-source voltage Vgs. The driving element DT includes a gate electrode connected to the second node DRG, a first electrode connected to the first node DRD to which the pixel driving voltage ELVDD is applied, and a second electrode connected to the third node DRS. The capacitor Cst is connected between the second node DRG and the third node DRS. A first switch element M11 is turned on according to a gate-on voltage of the first scan pulse SCAN1 to supply the data voltage Vdata to the second node DRG. The first switch element M11 includes a gate electrode connected to a first gate line to which the first scan pulse SCAN1 is applied, a first electrode connected to a data line DL to which the data voltage Vdata is applied, and a second electrode connected to the second node DRG. A second switch element M12 is turned on according to a gate-on voltage of the second scan pulse SCAN2 to supply the reference voltage Vref to the third node DRS. The second switch element M12 includes a gate electrode connected to a second gate line to which the second scan pulse SCAN2 is applied, a first electrode connected to the third node DRS, and a second electrode connected to a REF line RL to which the reference voltage Vref is applied.
A third switch element M13 is turned on according to a gate-on voltage of the third scan pulse SCAN3 to supply the initialization voltage Vinit to the second node DRG. The third switch element M13 includes a gate electrode connected to a third gate line to which the third scan pulse SCAN3 is applied, a first electrode connected to an INIT line to which the initialization voltage Vinit is applied, and a second electrode connected to the second node DRG. A gate signal as shown in FIG. 5 may be inputted to the pixel circuit shown in FIG. 4. Referring to FIG. 5, the driving period of the pixel circuit may be divided into an initialization step INIT, a sensing step SEN, an addressing step WR, a boosting step BOOST, and a light emission step EMIS. In the initialization step INIT, the driving element DT is turned on. In the sensing step SEN, when the voltage of the third node DRS rises and the gate-source voltage Vgs of the driving element DT becomes less than the threshold voltage Vth of the driving element DT, the driving element DT is turned off. When the driving element DT is turned off in the sensing step SEN, the threshold voltage Vth of the driving element DT is sampled and stored in the capacitor Cst. In the hold period HO between the sensing step SEN and the addressing step WR, all of the gate signals SCAN1, SCAN2, and SCAN3 are at the gate-off voltage VGL. In the hold period HO, the second and third nodes DRG and DRS are floated to maintain their previous voltages. When the data voltage Vdata is applied to the second node DRG in the addressing step WR, the data voltage Vdata compensated by the threshold voltage Vth is applied to the gate electrode of the driving element DT. After the capacitor CEL of the light emitting element EL is charged as the voltages of the second node DRG and the third node DRS, floated in the boosting step BOOST, are increased, the light emitting element EL may emit light by means of a current generated according to the gate-source voltage Vgs compensated by the threshold voltage Vth of the driving element DT in the light emission step EMIS. In the light emission step EMIS, the low-potential power supply voltage ELVSS is generated at the light-on voltage Von. The third scan pulse SCAN3 is generated at a gate-on voltage VGH in the initialization step INIT and the sensing step SEN. The third scan pulse SCAN3 is at the gate-off voltage VGL in the hold period HO, the addressing step WR, the boosting step BOOST, and the light emission step EMIS. The first scan pulse SCAN1 is synchronized with the data voltage Vdata of the pixel data, and is generated at the gate-on voltage VGH in the addressing step WR. The first scan pulse SCAN1 is at the gate-off voltage VGL in the hold period HO, the initialization step INIT, the sensing step SEN, the boosting step BOOST, and the light emission step EMIS. The second scan pulse SCAN2 is generated at the gate-on voltage VGH in the initialization step INIT. The second scan pulse SCAN2 is at the gate-off voltage VGL in the sensing step SEN, the hold period HO, the addressing step WR, the boosting step BOOST, and the light emission step EMIS. A table-driven restatement of this scan timing follows this passage. FIG. 6 is a diagram illustrating one frame period of a display device according to one embodiment. Referring to FIG. 6, a frame period (one frame) is divided into an addressing period AT in which pixel data of an input image is written to pixels, and a vertical blank period VB in which no pixel data of an input image is written to pixels.
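The scan timing of FIG. 5 described above can be restated as a simple table-driven timeline. The dictionary layout below is an illustrative assumption, while the per-step levels follow the description (SCAN2 high in INIT; SCAN3 high in INIT and SEN; SCAN1 high only in WR).

STEPS = ["INIT", "SEN", "HO", "WR", "BOOST", "EMIS"]

# "H" = gate-on voltage VGH, "L" = gate-off voltage VGL.
SCAN_LEVELS = {
    "SCAN1": {"WR": "H"},
    "SCAN2": {"INIT": "H"},
    "SCAN3": {"INIT": "H", "SEN": "H"},
}

for step in STEPS:
    levels = {name: SCAN_LEVELS[name].get(step, "L") for name in SCAN_LEVELS}
    print(f"{step:>5}: {levels}")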
In one embodiment, the vertical blank period VB includes a front porch (FP) portion, a vertical sync (VS) portion, and a back porch (BP) portion. The vertical synchronization signal Vsync defines one frame period. One pulse cycle of a horizontal synchronization signal Hsync and a data enable signal DE is one horizontal period 1H. The display panel driver sequentially writes pixel data corresponding to one frame to the pixels of the display panel 100 one pixel line at a time during the addressing period AT. The data voltage Vdata of the pixel data is simultaneously charged to the pixels in one pixel line in synchronization with the scan pulse in one horizontal period 1H. As shown in FIG. 7, the addressing period AT of one frame period may include a first addressing period AT1 and a second addressing period AT2, divided at the time point at which the low-potential power supply voltage ELVSS is inverted. Thus, the first addressing period AT1 and the second addressing period AT2 are non-overlapping. The timing controller 130 may divide the addressing period AT of one frame period into the first addressing period AT1 and the second addressing period AT2, transmit pixel data to be written to pixels in the first pixel region A to the data driver 110 in the first addressing period AT1, and then transmit pixel data to be written to pixels in the second pixel region A′ to the data driver 110 in the second addressing period AT2. The timing controller 130 may temporarily stop the transmission of the pixel data during an addressing skip session set between the first addressing period AT1 and the second addressing period AT2, as will be further described below with respect to FIG. 9. The data enable signal DE defines an effective data period including pixel data to be written to pixels within one horizontal period 1H. A pulse of the data enable signal DE is synchronized with the pixel data of one pixel line. During the vertical blank period VB, no new pixel data is written to the pixels. Sub-pixels maintain the voltages charged from a previous frame during the vertical blank period VB. The low-potential power supply voltage ELVSS may be maintained at the light-on voltage during at least a portion of the vertical blank period VB. Before the next frame period starts, the low-potential power supply voltage ELVSS may be inverted to the light-off voltage within the vertical blank period VB. A horizontal blank period HB is a period during which there is no pixel data within one horizontal period. The horizontal blank period HB exists between one line of data to be written to the sub-pixels in an ith (i being a positive integer) pixel line and one line of data to be written to the sub-pixels in an (i+1)th pixel line. In the display device of the present disclosure, as shown in FIGS. 1 and 7, the low-potential power supply voltage ELVSS is lowered to the light-on voltage Von within the addressing period AT in which pixel data is sequentially written one pixel line at a time, so that the pixels start to emit light. Therefore, in the present disclosure, global dimming may be started within the addressing period AT and may be performed until the vertical blank period VB.
As a result, the global dimming control method of the present disclosure may secure a sufficiently long global dimming period in every frame period, since the global dimming period occurs during the addressing period AT rather than only during the vertical blank period VB, thereby linearly controlling the global dimming duty ratio within a wide range. Accordingly, the luminance of the first and second pixel regions A and A′ may be linearly varied over a wide range of the duty ratio. Referring to FIGS. 1 and 7, a screen of the display panel 100 may include a first pixel region A and a second pixel region A′. An addressing period of one frame period [(N−1)th to (N+1)th Frame] may be divided into a first addressing period AT1 in which pixel data is sequentially written to pixels in the first pixel region A and a second addressing period AT2 in which pixel data is sequentially written to pixels in the second pixel region A′. In FIG. 7, ‘(N−1)th Frame’ denotes an (N−1)th frame period, ‘Nth Frame’ denotes an Nth frame period, and ‘(N+1)th Frame’ denotes an (N+1)th frame period. During the first addressing period AT1 in which the pixels in the first pixel region A are scanned, the low-potential power supply voltage ELVSS is generated at the light-off voltage Voff, so that the pixels in the first pixel region A do not emit light. During the second addressing period AT2 in which the pixels in the second pixel region A′ are scanned, the low-potential power supply voltage ELVSS is inverted to the light-on voltage Von. Therefore, the pixels in the first and second pixel regions A and A′ start to emit light from the starting point of the second addressing period AT2 in which the second pixel region A′ is scanned. The first pixel region A may include two or more pixel lines from a first pixel line to an (I−1)th pixel line, where I is a positive integer equal to or greater than 2. The second pixel region A′ may include two or more pixel lines from an Ith pixel line to an nth pixel line, where n is a positive integer greater than I by 2 or more. The low-potential power supply voltage ELVSS is supplied to all of the pixels in the first and second pixel regions A and A′ through a VSS line formed as a common electrode in the screen of the display panel. Therefore, when the voltage level of the low-potential power supply voltage ELVSS is changed, the voltage level of the low-potential power supply voltage ELVSS applied to all of the pixels is simultaneously changed. During the first addressing period AT1, the data voltage Vdata of pixel data is sequentially charged one pixel line at a time from the first pixel line to the (I−1)th pixel line included in the first pixel region A along the shift direction of the scan pulse. During the first addressing period AT1, the low-potential power supply voltage ELVSS maintains the light-off voltage Voff. For this reason, all of the pixels included in the first and second pixel regions A and A′ do not emit light during the first addressing period AT1. The second addressing period AT2 starts when pixel data starts to be written into the pixels in the Ith pixel line included in the second pixel region A′. During the second addressing period AT2, the data voltage Vdata of pixel data is sequentially charged one pixel line at a time from the Ith pixel line to the nth pixel line included in the second pixel region A′ along the shift direction of the scan pulse.
When the second addressing period AT2 starts, the low-potential power supply voltage ELVSS is inverted to the light-on voltage Von, and during the second addressing period AT2, the low-potential power supply voltage ELVSS is generated at the light-on voltage Von. As a result, since the low-potential power supply voltage ELVSS maintains the light-on voltage Von during the second addressing period AT2, the pixels included in the first and second pixel regions A and A′ may emit light with a target luminance corresponding to the gray level of the pixel data during the second addressing period. The low-potential power supply voltage ELVSS may be generated at the light-on voltage Von from the start of the second addressing period to the end of the vertical blank period VB. Accordingly, a maximum light-on duration of the pixels is the duration from the start of the second addressing period AT2 to the end of the vertical blank period VB. The timing controller 130 may vary the global dimming duty ratio according to the driving mode or the analysis result of an input image. As the global dimming duty ratio increases, the time point at which the low-potential power supply voltage ELVSS is inverted to the light-on voltage Von is advanced, so that the position of the Ith pixel line at which the second addressing period AT2 starts is changed to the position of a pixel line having an earlier scanning time point. On the contrary, as the global dimming duty ratio decreases, the time point at which the low-potential power supply voltage ELVSS is inverted to the light-on voltage Von is delayed, so that the position of the Ith pixel line at which the second addressing period AT2 starts is changed to the position of a pixel line having a later scanning time point. Meanwhile, when the low-potential power supply voltage ELVSS is inverted, the gate-source voltage Vgs of the driving element DT may be changed, as illustrated in FIG. 8. Referring to FIG. 8, when the low-potential power supply voltage ELVSS is changed within the addressing period AT, the voltage of the third node DRS coupled to the VSS line through the capacitor CEL, that is, the source voltage of the driving element DT, follows the change in the low-potential power supply voltage ELVSS. At this time, the voltage of the third node DRS changes by ΔELVSS × (CAP ratio), where ΔELVSS is the variation amount of the low-potential power supply voltage ELVSS, and the CAP ratio is a ratio of the capacitors Cst and CEL connected to the third node DRS; a numeric sketch of this coupling follows this passage. In this case, the gate-source voltage Vgs of the driving element DT is changed in the pixel line in which the low-potential power supply voltage ELVSS is inverted, and thus the luminance of the pixels in the corresponding pixel line is changed, so that a dim line may be visually recognized on the screen. In order to prevent or at least reduce this problem, the present disclosure may set an addressing skip session, as shown in FIGS. 9 and 10, in which addressing is temporarily stopped when the low-potential power supply voltage ELVSS is inverted. The addressing skip session may be set between the first addressing period AT1 and the second addressing period AT2 within the addressing period AT of one frame period. During the addressing skip session set between the first addressing period AT1 and the second addressing period AT2, the low-potential power supply voltage ELVSS may be changed from the light-off voltage Voff to the light-on voltage Von. In addition, the addressing skip session may be set within the vertical blank period VB before entering the next frame period.
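As flagged above, the coupling of the ELVSS step onto the third node DRS can be written out numerically. The capacitor values below are placeholders, and the capacitance ratio shown (CEL over the total capacitance at the third node) is one plausible reading of the "CAP ratio" mentioned in the description rather than a value fixed by the embodiments.

def drs_voltage_shift(delta_elvss, c_st, c_el):
    # Voltage shift coupled onto the third node DRS when ELVSS steps
    # by delta_elvss, modeled as a capacitive divider of Cst and CEL.
    cap_ratio = c_el / (c_st + c_el)
    return delta_elvss * cap_ratio

# Illustrative numbers: ELVSS steps by 3 V, Cst = 0.5 pF, CEL = 1.0 pF.
shift = drs_voltage_shift(3.0, 0.5e-12, 1.0e-12)
print(f"DRS shift: {shift:.2f} V, so Vgs changes by {-shift:.2f} V")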
During the addressing skip session set within the vertical blank period VB, the low-potential power supply voltage ELVSS may be changed from the light-on voltage Von to the light-off voltage Voff. FIG. 9 is a diagram illustrating an example in which an addressing skip session SK (e.g., an intermediate period) is set between a first addressing period AT1 and a second addressing period AT2 according to one embodiment. FIG. 10 is a waveform diagram illustrating an example in which a scan pulse is not generated in an addressing skip session SK according to one embodiment. Referring to FIGS. 9 and 10, the gate driver 120 sequentially outputs gate signals to the first to (I−1)th pixel lines during the first addressing period AT1 under the control of the timing controller 130 and supplies the gate signals to the gate lines of those pixel lines. After the first addressing period AT1, the gate driver 120 refrains from outputting the gate signals, particularly the scan pulses, during the addressing skip session SK, and maintains the voltages of the gate lines at a gate-off voltage during the addressing skip session SK. As a result, during the addressing skip session SK, the data voltage Vdata of pixel data is not applied to the second nodes DRG of the pixel circuits, since the scan pulses are not applied to the pixels in the Ith pixel line or to any other pixel line on the screen. The timing controller 130 delays the pixel data of the input image during the addressing skip session SK using a line memory or a delay circuit, and then transmits the pixel data of the Ith pixel line to the data driver 110 when the second addressing period AT2 starts. The timing controller 130 may temporarily stop driving the output buffers between the output terminals of the data driver 110 and the data lines or turn off the output switch elements during the addressing skip session SK, so that the output terminals of the data driver 110 are separated from the data lines. In another embodiment, the timing controller 130 may turn off the switch elements of the de-multiplexer array 112 disposed between the output terminals of the data driver 110 and the data lines to electrically separate the output terminals of the data driver 110 from the data lines during the addressing skip session SK. The gate driver 120 sequentially supplies the gate signals to the gate lines of the pixel lines from the Ith pixel line to the nth pixel line responsive to the second addressing period AT2 starting after the addressing skip session SK, under the control of the timing controller 130; a minimal scheduling sketch of this flow follows this passage. FIG. 11 is a diagram illustrating an example of global dimming according to an embodiment of the present disclosure. Referring to FIG. 11, the timing controller 130 may adjust the luminance of an image reproduced on the screen of the display panel by varying the global dimming duty ratio. That is, the global dimming duty ratio is adjustable. For example, the timing controller 130 may adjust the luminance on the screen by varying the global dimming duty ratio to 25%, 50%, 75%, or the like. The position of the pixel line synchronized with the addressing skip session on the screen may vary according to the global dimming duty ratio. FIG. 12 is a waveform diagram illustrating an example in which a low-potential power supply voltage is changed and a data voltage is held during an addressing skip session according to one embodiment. Referring to FIG. 12, an input image may be a vertical gradation image in which a gray level value is gradually decreased from a first pixel line to an nth pixel line.
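The minimal scheduling sketch promised above, under stated assumptions: lines before the Ith line are written in the first addressing period AT1 with ELVSS at the light-off voltage, scan pulses are withheld during the addressing skip session SK while ELVSS inverts, and the remaining lines are written in the second addressing period AT2. The slot granularity and the generator shape are illustrative only.

def frame_schedule(num_lines, boundary_line, skip_slots=1):
    # Yield (slot, action) pairs for the addressing period of one frame.
    slot = 0
    for line in range(1, boundary_line):
        slot += 1
        yield slot, f"AT1: write line {line} (ELVSS = Voff, no emission)"
    for _ in range(skip_slots):
        slot += 1
        yield slot, "SK: no scan pulse, data held, ELVSS Voff -> Von"
    for line in range(boundary_line, num_lines + 1):
        slot += 1
        yield slot, f"AT2: write line {line} (ELVSS = Von, emission)"

for slot, action in frame_schedule(num_lines=8, boundary_line=5):
    print(slot, action)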
In this case, the gray scale value of the data voltage Vdata of the pixel data is gradually decreased for each pixel line. The scan pulse SCAN may be sequentially applied to the pixel lines in synchronization with the data voltage Vdata during the first addressing period AT1 (e.g., “only addressing” in FIG. 12). The low-potential power supply voltage ELVSS maintains the light-off voltage Voff during the first addressing period AT1. Accordingly, data addressing is performed during the first addressing period AT1 without emission, so that the pixels in the first pixel region A do not emit light and are charged with the data voltage Vdata. During the addressing skip session SK (e.g., “addressing skip” in FIG. 12), the voltage of the data lines maintains a previous data voltage and the scan pulse is maintained at the gate-off voltage VGL (Data Hold), so that data addressing is not performed. During the addressing skip session SK, the low-potential power supply voltage ELVSS is changed from the light-off voltage Voff to the light-on voltage Von. The data voltage Vdata of the pixel data is generated at a gray scale voltage of the pixel data to be written to the pixels in the Ith pixel line when the second addressing period AT2 (e.g., “addressing+emission” in FIG. 12) starts after the addressing skip session SK. The scan pulse SCAN may be sequentially applied to the pixel lines in synchronization with the data voltage Vdata during the second addressing period AT2 starting after the addressing skip session SK. Accordingly, when the second addressing period AT2 starts, the data driver 110 resumes outputting the data voltage Vdata, and the gate driver 120 resumes outputting the scan pulse SCAN. The low-potential power supply voltage ELVSS maintains the light-on voltage Von during the second addressing period AT2 and the vertical blank period VB. Accordingly, during the second addressing period AT2, data addressing is performed on the pixels in the second pixel region A′, and at the same time, the pixels in the first and second pixel regions A and A′ may emit light according to the global dimming duty ratio. In this case, the pixels in the first pixel region A emit light with a target luminance corresponding to the gray scale of the pixel data written in the first addressing period of the current frame, while the pixels in the second pixel region A′ emit light while their data is updated from the pixel data written in the previous frame to the pixel data of the current frame. FIGS. 13A to 13E are views illustrating an example in which data addressing, addressing skip, and light emission are sequentially performed along a scanning direction of a display panel according to one embodiment. In FIGS. 13A to 13E, in the upper drawing that illustrates a screen of a display panel, a black screen represents pixels that do not emit light. During the first addressing period AT1, as shown in FIG. 13A, data addressing is performed and pixel data is written to the pixels in the first pixel region A. During the first addressing period AT1, the pixels in the first and second pixel regions A and A′ do not emit light. During the addressing skip session SK, as shown in FIG. 13B, the low-potential power supply voltage ELVSS is changed from the light-off voltage Voff to the light-on voltage Von. At this time, scanning is stopped with respect to the first and second pixel regions A and A′, so that the pixels maintain a previous data voltage and do not emit light. After the addressing skip session SK, the second addressing period AT2 starts.
The low-potential power supply voltage ELVSS is generated at the light-on voltage Von during the second addressing period AT2. As shown in FIG. 13C, when the data voltage Vdata of the pixel data to be written to the Ith pixel line is outputted from the data driver 110 and the scan pulse SCAN synchronized with the data voltage Vdata is outputted from the gate driver 120, the second addressing period AT2 is started. After the data voltage Vdata of the pixel data is charged to the pixels in the Ith pixel line, the pixels in the first and second pixel regions A and A′ start to emit light. During the second addressing period AT2, as shown in FIG. 13D, data addressing is performed on the pixels in the (I+J)th pixel line. Here, each of I and J is a positive integer, and I+J is a positive integer less than n. During the second addressing period AT2, the pixel lines in the first and second pixel regions A and A′ may emit light because the low-potential power supply voltage ELVSS is the light-on voltage Von. As the execution time of the second addressing period AT2 increases, the screen area that emits light is enlarged. When the data addressing is finished on the nth pixel line to which a last scan pulse is applied, all pixels of the screen emit light (Full Emission), as illustrated in FIG. 13E. According to the present disclosure, the luminance of pixels may be adjusted by varying the duty ratio upon performing the global dimming, in a state in which the data voltage is fixed to a predetermined voltage or higher. This method of varying the duty ratio of lighting the pixels on and off may provide a stain improvement effect at low luminance. The global dimming duty ratio can be adaptively applied according to the brightness of the surrounding environment, because the luminance required on the screen differs depending on the usage environment. For example, the duty ratio of the low-potential power supply voltage ELVSS may be varied in proportion to the brightness of the surrounding environment of the display device. In the case of the outdoor mode, since high luminance is required on the screen, the pixels may be driven with a maximum duty ratio, that is, a duty ratio of 100%, as illustrated in FIG. 14A. The duty ratio of the low-potential power supply voltage ELVSS becomes high when the driving mode is changed from a normal mode, a power saving mode, or a night mode to the outdoor mode. In the case of the normal mode, the global dimming duty ratio may be applied according to the brightness designated by the user, and as shown in FIG. 14B, a duty ratio of 50% may be applied as a default value. In the case of the power saving mode or the night mode, the global dimming duty ratio may be lowered to a duty ratio of 20% or less, since the pixels are driven at low luminance, as shown in FIG. 14C. When the driving mode is changed to the power saving mode or the night mode, the duty ratio of the low-potential power supply voltage ELVSS is lowered. The power saving mode may be entered when the remaining amount of the battery is less than a preset value. Since the night environment is more sensitive to stains on the display, the global dimming may be applied at night to improve image quality. As described above, the present disclosure may provide image quality optimized for the usage environment and reduce power consumption by adaptively varying the global dimming duty ratio according to the usage environment or the driving mode.
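The mode-dependent duty ratios above can be captured in a small lookup. The percentages restate FIGS. 14A to 14C as described; the dictionary shape and the user-brightness override are illustrative assumptions.

# Duty ratios restated from the description of FIGS. 14A to 14C.
MODE_DUTY = {
    "outdoor": 1.00,       # maximum duty ratio for high luminance
    "normal": 0.50,        # default value in the normal mode
    "power_saving": 0.20,  # 20% or less at low luminance
    "night": 0.20,
}

def global_dimming_duty(mode, user_brightness=None):
    duty = MODE_DUTY[mode]
    if mode == "normal" and user_brightness is not None:
        # In the normal mode the duty ratio may follow the brightness
        # designated by the user (0.0 to 1.0) instead of the default.
        duty = max(0.0, min(1.0, user_brightness))
    return duty

print(global_dimming_duty("outdoor"))       # 1.0
print(global_dimming_duty("normal", 0.35))  # 0.35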
Furthermore, according to the present disclosure, power consumption may be further reduced without deteriorating perceived image quality based on the result of analyzing the input image. For example, as shown in FIG. 15, when an input image is a moving image, power consumption for the moving image may be reduced by lowering the luminance of the moving image compared to a still image. Compared to still images, a moving image has more complexity, such as many edges and a lot of movement of objects, so a user does not react sensitively to an increase or decrease in luminance. Therefore, even if the luminance of the screen decreases for a moving image, the user is unlikely to perceive a loss of image quality. The timing controller 130 may analyze the input image and vary the duty ratio of the low-potential power supply voltage ELVSS, as shown in FIG. 16, to lower the global dimming duty ratio of the moving image below that of the still image, thereby decreasing the luminance of the screen on which the moving image is reproduced and reducing power consumption. FIG. 16 illustrates an example in which the global dimming duty ratio for the still image is 100% and the global dimming duty ratio for the moving image is lowered to 30%. The global dimming duty ratio is substantially the same as the duty ratio of the low-potential power supply voltage ELVSS. The timing controller 130 may lower the global dimming duty ratio when the input image changes from a still image to a moving image, and may increase the global dimming duty ratio when the input image changes from a moving image to a still image. The timing controller may enter the low-speed driving mode for the still image and lower the frame frequency, thereby reducing power consumption even for the still image. The duty ratio of the low-potential power supply voltage ELVSS may be varied in proportion to the average brightness of one frame image, as shown in FIGS. 17 and 18. FIGS. 17 and 18 are diagrams illustrating an example in which a global dimming duty ratio is varied based on an average picture level (APL). Referring to FIGS. 17 and 18, the average picture level APL is a value representing the average brightness of one frame image and is calculated as an average value of accumulated distribution values for each gray scale level of the one frame image. An image with a higher average picture level (APL) is a brighter image, and an image with a lower average picture level (APL) is a darker image. The timing controller 130 may vary the global dimming duty ratio in proportion to the average picture level (APL) of the one frame image calculated for every frame. The timing controller 130 may increase the global dimming duty ratio to increase the luminance of the screen by increasing the duty ratio of the low-potential power supply voltage ELVSS for a bright image having a high average picture level (APL). The timing controller 130 may lower the global dimming duty ratio to lower the luminance of the screen by lowering the duty ratio of the low-potential power supply voltage ELVSS for a dark image having a low average picture level (APL). In addition, the timing controller may increase the voltage range between the maximum voltage and the minimum voltage of the data voltage Vdata for dark images with a low average picture level (APL) (e.g., extend the data voltage range) to improve low-gray-scale representation in the dark images.
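A sketch of the APL computation and the proportional duty-ratio mapping described above follows; the normalization, the duty floor and ceiling, and the tiny sample frame are illustrative assumptions rather than values given by the embodiments.

def average_picture_level(frame, max_gray=255):
    # APL as the average gray level of one frame image, normalized
    # to the range 0.0 (all black) to 1.0 (all white).
    total = sum(sum(row) for row in frame)
    count = sum(len(row) for row in frame)
    return total / (count * max_gray)

def duty_from_apl(apl, duty_min=0.2, duty_max=1.0):
    # Vary the duty ratio in proportion to the APL: brighter frames
    # get more light-on time, darker frames less.
    return duty_min + (duty_max - duty_min) * apl

frame = [[32, 64], [128, 255]]  # a tiny illustrative 2x2 "image"
apl = average_picture_level(frame)
print(round(apl, 3), round(duty_from_apl(apl), 3))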
In one embodiment, a display device comprises: a display panel including a first display area comprising a first plurality of pixels, and a second display area comprising a second plurality of pixels, each pixel from the first plurality of pixels and the second plurality of pixels including a corresponding light emitting element; a data driver circuit configured to output a plurality of data voltages of an image to the first plurality of pixels and the second plurality of pixels; a gate driver configured to output a plurality of scan signals to the first plurality of pixels and the second plurality of pixels; and a power supply configured to generate a low-potential power supply voltage that is applied to the corresponding light emitting element included in each pixel from the first plurality of pixels and the second plurality of pixels, the low-potential power supply voltage switching between a first level such that the light emitting element in each respective pixel is capable of emitting light, and a second level such that the light emitting element in each respective pixel cannot emit light, wherein a frame period of the display device includes an addressing period during which the plurality of data voltages of the image and the plurality of scan signals are output to the first plurality of pixels and the second plurality of pixels, and a blank period during which the plurality of data voltages and the plurality of scan signals are not output to the first plurality of pixels and the second plurality of pixels, wherein during a first portion of the addressing period the low-potential power supply voltage is at the second level such that none of the first plurality of pixels in the first display area emit light and none of the second plurality of pixels in the second display area emit light, and during a second portion of the addressing period that is subsequent to the first portion, the low-potential power supply voltage is at the first level such that the first plurality of pixels in the first display area emit light to display a first part of the image and at least a portion of the second plurality of pixels in the second display area emit light to display at least a portion of a second part of the image. In one embodiment, each of the first plurality of pixels and the second plurality of pixels respectively comprises: a driving element including a first electrode of the driving element that is connected to a first node to which a first power line applies a pixel driving voltage, a gate electrode of the driving element that is connected to a second node, and a second electrode of the driving element that is connected to a third node; a light emitting element including an anode connected to the third node and a cathode to which the low-potential power supply voltage is applied; a capacitor between the second node and the third node; and a first switch element including a first electrode of the first switch element that is connected to a data line to which a data voltage from the plurality of data voltages is applied, a gate electrode of the first switch element to which a scan signal from the plurality of scan signals is applied, and a second electrode of the first switch element that is connected to the second node.
In one embodiment, during the first portion of the addressing period, first data voltages from the plurality of data voltages are written to the first plurality of pixels in the first display area via first switch elements included in the first plurality of pixels without writing second data voltages from the plurality of data voltages to the second plurality of pixels in the second display area while the low-potential power supply voltage is at the second level, and during the second portion of the addressing period, light emitting elements included in the first plurality of pixels in the first display area emit light corresponding to the first data voltages to display the first part of the image, and at least a portion of the second data voltages from the plurality of data voltages are written to a portion of the second plurality of pixels in the second display area via first switch elements included in the second plurality of pixels, and light emitting elements included in the portion of the second plurality of pixels emit light corresponding to the portion of the second data voltages to display the portion of the second part of the image while the low-potential power supply voltage is at the first level, wherein the first plurality of pixels are arranged in a first plurality of pixel lines in the first display area, and the second plurality of pixels are arranged in a second plurality of pixel lines in the second display area, wherein during the second portion of the addressing period, the first plurality of pixel lines substantially simultaneously display the first part of the image, and the second plurality of pixel lines sequentially display a corresponding portion of the second part of the image in the second display area as each second pixel line is written with corresponding second data voltages. In one embodiment, the addressing period further includes an intermediate period between the first portion of the addressing period and the second portion of the addressing period, and the low-potential power supply voltage switches from the second level to the first level during the intermediate period. In one embodiment, the gate driver refrains from outputting the plurality of scan signals during the intermediate period. In one embodiment, a duty ratio of the low-potential power supply voltage at the first level and the low-potential power supply voltage at the second level during the addressing period is adjustable to one of a plurality of duty ratios, wherein a luminance of the image displayed by the display device is based on a selected duty ratio from the plurality of duty ratios. In one embodiment, the luminance of the image is increased as the duty ratio increases and the luminance of the image is decreased as the duty ratio decreases. In one embodiment, a duty ratio from the plurality of duty ratios is selected based on whether the image is a still image or a moving image. In one embodiment, a first duty ratio from the plurality of duty ratios for the still image is associated with a greater luminance than a second duty ratio from the plurality of duty ratios for the moving image. In one embodiment, each of the plurality of duty ratios is associated with a corresponding average brightness and a duty ratio for the frame period is selected from the plurality of duty ratios based on an average brightness of the image to be displayed during the frame period.
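The ordering of the two portions and the intermediate period can be summarized in a minimal timing sketch. The normalized boundary times and voltage values below are placeholders; the embodiment fixes only the sequence (second level during the first portion, a switch during the intermediate period with no scan output, then the first level during the second portion).

```python
ELVSS_EMIT = -2.0   # first level: light emission is possible (assumed value)
ELVSS_BLOCK = 2.0   # second level, greater than the first: emission blocked (assumed value)

def elvss_level(t: float, first_end: float = 0.55, switch_end: float = 0.60) -> float:
    """ELVSS level at normalized frame time t in [0, 1)."""
    if t < first_end:       # first portion: data voltages are written, no pixel emits
        return ELVSS_BLOCK
    if t < switch_end:      # intermediate period: the supply transitions, scan output pauses
        return ELVSS_BLOCK  # modeled as still blocking until the switch completes
    return ELVSS_EMIT       # second portion (and the rest of the frame): emission enabled
```

The selected duty ratio then corresponds to the fraction of the frame spent at ELVSS_EMIT, which ties this schedule to the luminance control of the preceding paragraphs.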
In one embodiment, a size of the first display area and a size of the second display area are based on the duty ratio selected from the plurality of duty ratios. In one embodiment, the size of the first display area decreases and the size of the second display area increases as the duty ratio increases, and the size of the first display area increases and the size of the second display area decreases as the duty ratio decreases. In one embodiment, a display device comprises: a display panel including a first display area comprising a first plurality of pixels, and a second display area comprising a second plurality of pixels, each pixel from the first plurality of pixels and the second plurality of pixels including a corresponding light emitting element; a data driver circuit configured to output a plurality of data voltages of an image to the first plurality of pixels and the second plurality of pixels; a gate driver configured to output a plurality of scan signals to the first plurality of pixels and the second plurality of pixels; and a power supply configured to generate a low-potential power supply voltage that is applied to a corresponding light emitting element included in each pixel from the first plurality of pixels and the second plurality of pixels, the low-potential power supply voltage switching between a first level such that the light emitting element in each respective pixel is capable of emitting light, and a second level that is greater than the first level such that the light emitting element in each respective pixel cannot emit light, wherein a frame period of the display device includes an addressing period during which the plurality of data voltages of the image and the plurality of scan signals are output to the first plurality of pixels and the second plurality of pixels, and during the addressing period the low-potential power supply voltage switches from the second level to the first level such that the light emitting element in each respective pixel can emit light to display the image. In one embodiment, the low-potential power supply voltage is applied to a cathode of each corresponding light emitting element. In one embodiment, a duty ratio of the low-potential power supply voltage at the first level and the low-potential power supply voltage at the second level during the addressing period is adjustable to one of a plurality of duty ratios. In one embodiment, during a first portion of the addressing period the low-potential power supply voltage is at the second level such that none of the first plurality of pixels in the first display area emit light and none of the second plurality of pixels in the second display area emit light, and during a second portion of the addressing period that is subsequent to the first portion, the low-potential power supply voltage is at the first level such that the first plurality of pixels in the first display area emit light to display a first part of the image in the first display area and at least a portion of the second plurality of pixels in the second display area emit light to display at least a portion of a second part of the image in the second display area.
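A sketch of the area split described in this embodiment, assuming a simple linear relation between the duty ratio and the number of pixel lines in each area (the disclosure states only the direction of the change, not the exact function):

```python
def split_display_areas(total_pixel_lines: int, duty_pct: float) -> tuple[int, int]:
    """Return (first_area_lines, second_area_lines) for a selected duty ratio.

    A higher duty ratio means emission starts earlier in the addressing
    period, so fewer lines remain in the simultaneously lit first display
    area and more lines fall into the sequentially lit second display area.
    """
    first = round(total_pixel_lines * (1.0 - duty_pct / 100.0))
    return first, total_pixel_lines - first
```

For a 2160-line panel, split_display_areas(2160, 30.0) yields (1512, 648), while a 100% duty ratio leaves no lines in the first area, consistent with the stated direction of change.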
In one embodiment, a display device comprises: a display panel including a first display area comprising a first plurality of pixels, and a second display area comprising a second plurality of pixels, each pixel from the first plurality of pixels and the second plurality of pixels including a corresponding light emitting element; a data driver circuit configured to output a plurality of data voltages of an image to the first plurality of pixels and the second plurality of pixels; a gate driver configured to output a plurality of scan signals to the first plurality of pixels and the second plurality of pixels; and a power supply configured to generate a low-potential power supply voltage that is applied to a corresponding light emitting element included in each pixel from the first plurality of pixels and the second plurality of pixels, the low-potential power supply voltage switching between a first level such that the light emitting element in each respective pixel is capable of emitting light, and a second level that is greater than the first level such that the light emitting element in each respective pixel cannot emit light, wherein a frame period of the display device includes an addressing period during which the plurality of data voltages of the image and the plurality of scan signals are output to the first plurality of pixels and the second plurality of pixels, and during the addressing period the low-potential power supply voltage switches from the second level to the first level, wherein the display device is configured to operate in one of a plurality of modes where each mode has a corresponding duty ratio of the first level of the low-potential power supply voltage and the second level of the low-potential power supply voltage from a plurality of different duty ratios. In one embodiment, the plurality of modes include an outdoor mode having a first duty ratio of the low-potential power supply voltage, a normal mode having a second duty ratio of the low-potential power supply voltage that is less than the first duty ratio of the outdoor mode, and a power saving mode having a third duty ratio of the low-potential power supply voltage that is less than the second duty ratio. In one embodiment, the plurality of modes include a still image mode during which the image displayed in the frame period is a still image and a moving image mode during which the image displayed during the frame period is a moving image, wherein a duty ratio of the still image mode is greater than a duty ratio of the moving image mode.
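The mode table implied by these embodiments might look like the following sketch. The percentages are placeholders; only the orderings (outdoor > normal > power saving, and still image > moving image) come from the text.

```python
# Assumed duty ratios (%) per operating mode; only the relative ordering
# is specified by the embodiments above.
DUTY_BY_MODE = {
    "outdoor": 100.0,      # first duty ratio: highest luminance
    "normal": 60.0,        # second duty ratio: less than the outdoor mode
    "power_saving": 30.0,  # third duty ratio: less than the normal mode
    "still_image": 100.0,
    "moving_image": 30.0,  # lower than the still image mode
}

def duty_for_mode(mode: str) -> float:
    return DUTY_BY_MODE[mode]
```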
In one embodiment, during a first portion of the addressing period the low-potential power supply voltage is at the second level such that none of the first plurality of pixels in the first display area emit light and none of the second plurality of pixels in the second display area emit light, and during a second portion of the addressing period that is subsequent to the first portion, the low-potential power supply voltage is at the first level such that the first plurality of pixels in the first display area emit light to display a first part of the image in the first display area and at least a portion of the second plurality of pixels in the second display area emit light to display at least a portion of a second part of the image in the second display area. The objects to be achieved by the present disclosure, the means for achieving the objects, and the effects of the present disclosure described above do not specify essential features of the claims, and thus, the scope of the claims is not limited to the disclosure of the present disclosure. Although the embodiments of the present disclosure have been described in more detail with reference to the accompanying drawings, the present disclosure is not limited thereto and may be embodied in many different forms without departing from the technical concept of the present disclosure. Therefore, the embodiments disclosed in the present disclosure are provided for illustrative purposes only and are not intended to limit the technical concept of the present disclosure. It should be understood that the above-described embodiments are illustrative in all aspects and do not limit the present disclosure. The protective scope of the present disclosure should be construed based on the following claims, and all technical concepts within the equivalent scope thereof should be construed as falling within the scope of the present disclosure. | 77,443 |
11862106 | DETAILED DESCRIPTION Reference will now be made in more detail to aspects of some embodiments, which are illustrated in the accompanying drawings, wherein like reference numerals refer to like elements throughout. In this regard, the present embodiments may have different forms and should not be construed as being limited to the descriptions set forth herein. Accordingly, the embodiments are merely described below, by referring to the figures, to explain aspects of the present description. As used herein, the term “and/or” includes any and all combinations of one or more of the associated listed items. Throughout the disclosure, the expression “at least one of a, b, or c” indicates only a, only b, only c, both a and b, both a and c, both b and c, all of a, b, and c, or variations thereof. As the disclosure allows for various changes and numerous embodiments, aspects of some embodiments will be illustrated in the drawings and described in more detail in the written description. The attached drawings for illustrating aspects of some embodiments of the present disclosure are referred to in order to gain a sufficient understanding of the present disclosure, the merits thereof, and the objectives accomplished by the implementation of the present disclosure. The disclosure may, however, be embodied in many different forms and should not be construed as being limited to the embodiments set forth herein. It will be understood that although the terms “first”, “second,” etc. may be used herein to describe various elements, these elements should not be limited by these terms, and these elements are only used to distinguish one element from another. As used herein, the singular forms “a,” “an” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms “comprises” and/or “comprising” used herein specify the presence of stated features or elements, but do not preclude the presence or addition of one or more other features or elements. It will be understood that when a layer, region, or element is referred to as being “formed on” another layer, region, or element, it can be directly or indirectly formed on the other layer, region, or element. That is, for example, intervening layers, regions, or elements may be present. Sizes of elements in the drawings may be exaggerated for convenience of explanation. In other words, since sizes and thicknesses of elements in the drawings are arbitrarily illustrated for convenience of explanation, the following embodiments are not limited thereto. In the present embodiments, an expression such as “A and/or B” indicates A, B, or A and B. Also, an expression such as “at least one of A and B” indicates A, B, or A and B. It will be understood that when X is connected to Y, X may be electrically connected, functionally connected, or directly connected to Y. Here, X and Y may each be an object (e.g., an apparatus, a device, a circuit, a wire, an electrode, a terminal, a conductive layer, a layer, or the like). Therefore, a connection relationship between X and Y is not limited to a certain connection relationship, e.g., a connection relationship illustrated in the drawings or stated in the detailed description, and may include relationships other than the connection relationship in the drawings or detailed description. 
When X is electrically connected to Y, there may be, for example, at least one device (e.g., a switch, a transistor, a capacitor device, an inductor, a resistor, a diode, or the like) between X and Y that enables an electrical connection between X and Y. In embodiments below, the term “on” used in relation to a device state may indicate an activation state of the device, and the term “off” may indicate an inactivation state of the device. The term “on” used in relation to a signal received by a device may indicate a signal for activating the device, and the term “off” may indicate a signal for inactivating the device. The device may be activated according to a high-level voltage or a low-level voltage. For example, a P-channel transistor may be activated according to a low-level voltage, and an N-channel transistor may be activated according to a high-level voltage. Therefore, “on” voltages of the P-channel transistor and the N-channel transistor have opposite voltage levels (low vs. high). FIG. 1 is a schematic block diagram of a display apparatus according to some embodiments. A display apparatus 10 according to some embodiments may be realized as or incorporated into an electronic apparatus such as a smartphone, a cell phone, a smartwatch, a navigation device, a game device, a television (TV), an automotive head unit, a laptop computer, a tablet computer, a Personal Media Player (PMP), or a Personal Digital Assistant (PDA). Also, the electronic apparatus may be a flexible apparatus. Referring to FIG. 1, the display apparatus 10 may include a pixel unit 110, a scan driver 130, a sensor 140, a data driver 150, and a controller 160. According to some embodiments, the display apparatus 10 may operate in a sensing period, in which the display apparatus 10 operates in a sensing mode, or a driving period, in which the display apparatus 10 operates in a display mode. The sensing period may be a period in which characteristic information of each of the pixels PX included in the pixel unit 110, for example, at least one of a threshold voltage, mobility, and deterioration information of a driving transistor and/or an organic light-emitting diode included in each pixel PX, is extracted. The driving period may be a period in which a certain image is displayed by the pixels PX included in the pixel unit 110, in response to data signals. According to some embodiments, the sensing period may occur after power is applied (power on), between driving periods, and before power is turned off (power off). The scan driver 130 may be connected to scan lines SCL and SSL, generate scan signals in response to a first control signal CONT1 from the controller 160, and thus sequentially provide the scan signals to the scan lines SCL and SSL. The scan signal may be a signal having a pulse of an on-voltage at which a transistor included in the pixel PX may be turned on. The on-voltage may be a high-level or low-level voltage. The scan driver 130 may include a shift register. For example, the scan driver 130 may be configured to sequentially provide the scan signals to the scan lines SCL and SSL during the sensing period and the driving period. The sensor 140 may be connected to sensing lines SL and sense characteristic information from the pixels PX through the sensing lines SL during the sensing period, in response to a second control signal CONT2 from the controller 160. According to some embodiments, the sensing line SL may be included in each vertical line (a column).
According to some embodiments, one sensing line SL may be shared by pixels PX in multiple columns. The sensor 140 may convert the sensed characteristic information into sensing data in a digital form and may output the sensing data. The sensing data may be used to convert data to compensate for a characteristic deviation of the pixels PX. The sensor 140 may include a plurality of sensing integrated circuits (ICs). The sensing ICs may be realized as readout ICs configured to extract the characteristic information of the pixels PX. The sensor 140 may be enabled in the sensing period and disabled in the driving period. The data driver 150 may be connected to a plurality of data lines DL and provide data signals to the data lines DL during the driving period, in response to a third control signal CONT3 from the controller 160. The data driver 150 may generate the data signals according to data DATA provided from the controller 160 during the driving period. The data signals in the form of a voltage or a current that are generated by the data driver 150 may be provided to the data lines DL. The data signals provided to the data lines DL may be provided to the pixels PX selected in response to the scan signals. The pixels PX may emit light having a brightness corresponding to the data signal during the driving period, and thus, images may be displayed on the pixel unit 110. According to some embodiments, the data driver 150 may provide a reference voltage to the data lines DL during the sensing period, according to the control of the controller 160. For example, the reference voltage may be set to be a certain voltage at which a current may flow from driving transistors included in the pixels PX. According to some embodiments of the present disclosure, the data driver 150 does not necessarily provide the reference voltage to the pixels PX during the sensing period. For example, when the pixels PX are connected to other voltage sources and/or current sources, the data driver 150 may drive the data lines DL only in the driving period. The pixel unit 110 may include the scan lines SCL and SSL, the data lines DL, the sensing lines SL, and the pixels PX connected thereto. The pixels PX may be repeatedly arranged along a first direction (an x direction, a row direction) and a second direction (a y direction, a column direction). The scan lines SCL and SSL may be regularly separated or spaced apart from each other, may be arranged in a row, and may provide the scan signals, respectively. The data lines DL may be regularly separated from each other, may be arranged in a column, and may provide the data signals, respectively. The sensing lines SL may be regularly separated from each other, may be arranged in a column, and may sense the characteristic information of each pixel PX. According to some embodiments, in the case of an organic electroluminescent display apparatus, the pixels PX may be driven according to a driving voltage ELVDD and a common voltage ELVSS. The pixels PX may output the characteristic information through the sensing lines SL during the sensing period, and may emit light in response to the data signals provided through the data lines DL during the driving period. The controller 160 may control the operations of the scan driver 130, the sensor 140, and the data driver 150. Also, the controller 160 may store, in a memory, sensing data from the sensor 140, compensate for data that is input from the outside by using the stored sensing data, and output the compensated data DATA to the data driver 150.
According to some embodiments, the data DATA and the sensing data may be digital signals. The controller 160 may include a level shifter 170. However, one or more embodiments are not limited thereto. According to some embodiments, for example, the level shifter 170 may be separately formed outside the controller 160. The level shifter 170 may generate the first to third control signals CONT1 to CONT3 according to the clock signal, the control signal, etc. The first control signal CONT1 may include a scan start signal, the clock signals, etc. The second control signal CONT2 may include a sensing start signal, the clock signals, switch control signals, etc. The third control signal CONT3 may include a source start signal, the clock signals, etc. The display apparatus 10 may include the display panel, and the display panel may include a substrate. The display apparatus 10 may include a display area, where an image is displayed, and a non-display area surrounding the display area. The pixel unit 110 may be arranged in the display area of the substrate, and driving circuits such as the scan driver 130, the sensor 140, and the data driver 150 may be arranged in the non-display area. For example, some or all portions of the scan driver 130 may be directly formed in the non-display area of the substrate, in a Gate In Panel (GIP) manner, in a process of forming a transistor that forms a pixel circuit in the display area of the substrate. The data driver 150 may be arranged on a Flexible Printed Circuit Board (FPCB) electrically connected to a pad that is on one side of the substrate. According to some embodiments, the data driver 150 may be directly arranged on the substrate in a Chip On Glass (COG) manner or a Chip On Plastic (COP) manner. Hereinafter, a case where the display apparatus 10 is an organic light-emitting display apparatus is described, but the display apparatus is not limited thereto. According to some embodiments, the display apparatus 10 may be a display apparatus such as an inorganic light-emitting display apparatus (or an inorganic EL display apparatus) or a quantum dot light-emitting display apparatus. FIG. 2 is an equivalent circuit diagram of a pixel according to some embodiments. Referring to FIG. 2, each pixel PX may include a pixel circuit PC and an organic light-emitting diode OLED connected to the pixel circuit PC as a display element. The pixel circuit PC includes a first transistor (T1, a driving transistor), a second transistor (T2, a switching transistor), a third transistor (T3, a sensing control transistor), and a capacitor Cst. The first transistor T1 may include a first electrode connected to a driving power line PL configured to provide a driving voltage ELVDD, and a second electrode connected to a second node Nb. A gate electrode of the first transistor T1 may be connected to a first node Na. The first transistor T1 may control a driving current flowing in the organic light-emitting diode OLED from the driving power line PL, according to a voltage stored in the capacitor Cst. The second transistor T2 may include a gate electrode connected to a first scan line SCL, a first electrode connected to the data line DL, and a second electrode connected to the first node Na. The second transistor T2 may be turned on in response to a first scan signal SC input through the first scan line SCL, may electrically connect the data line DL to the first node Na, and may be configured to transmit, to the first node Na, the data signal DS input through the data line DL.
The third transistor T3 may include a gate electrode connected to a second scan line SSL, a first electrode connected to the second electrode of the first transistor T1, and a second electrode connected to the sensing line SL. The third transistor T3 may be turned on in response to a second scan signal SS provided through the second scan line SSL during the sensing period and may electrically connect the sensing line SL to the second electrode of the first transistor T1. The capacitor Cst may be connected between the first node Na and the second electrode of the first transistor T1. The capacitor Cst may store a voltage corresponding to a difference between a voltage from the second transistor T2 and a potential of the second electrode of the first transistor T1. The organic light-emitting diode OLED may include a first electrode (a pixel electrode, an anode) connected to the second node Nb and a second electrode (an opposite electrode, a cathode) to which a common voltage ELVSS is applied. The organic light-emitting diode OLED may emit light having a certain brightness because of the driving current. FIG. 2 illustrates that the transistors of the pixel circuit are N-type transistors, but one or more embodiments are not limited thereto. For example, the transistors of the pixel circuit may be P-type transistors, or some of the transistors may be P-type transistors, and others thereof may be N-type transistors. According to some embodiments, at least the first transistor T1 may be an oxide semiconductor transistor that includes an active layer including an amorphous or crystalline oxide semiconductor. For example, the first to third transistors T1 to T3 may each be an oxide semiconductor transistor. The oxide semiconductor transistor may have excellent off-current characteristics. Alternatively, according to some embodiments, at least one of the first to third transistors T1 to T3 may be a Low-Temperature Poly-Silicon (LTPS) thin film transistor including an active layer including polysilicon. The LTPS thin film transistor may have high electron mobility and thus have fast driving characteristics. FIG. 3 is a diagram for explaining operations of a pixel and a sensor, according to some embodiments. The sensor 140 may include a first switching device SW1, a second switching device SW2, and at least one Analog-Digital Converter (ADC) 146. The first switching device SW1 may be connected between the sensing line SL and an initialization voltage source. The first switching device SW1 may be turned on in response to a first control signal S1 provided from the controller 160 and may provide an initialization voltage Vint from the initialization voltage source to the sensing line SL. The second switching device SW2 may be connected between the sensing line SL and the ADC 146. The second switching device SW2 may be turned on in response to a second control signal S2 provided from the controller 160 and may connect the sensing line SL to the ADC 146. The ADC 146 may sense a voltage or a current of the sensing line SL. The ADC 146 may convert sensed analog characteristic information into digital sensing data. The sensor 140 may further include a memory 148 connected to the ADC 146. The memory 148 may function as a buffer in which the digital sensing data from the ADC 146 is temporarily stored. In the memory 148, digital sensing data corresponding to the characteristic information of each pixel may be stored. The digital sensing data stored in the memory 148 may be provided to the controller 160.
The controller 160 may output data DATA, which is obtained after the characteristic deviation between the pixels PX is compensated for, based on sensing data including the characteristic information of each pixel PX. During the sensing period, the data driver 150 may provide, to the data line DL, a reference voltage at which a current from the pixels PX may flow. According to some embodiments, the data driver 150 may not provide the reference voltage. In this case, in the sensing period, the data lines DL may be electrically connected to a certain current source and/or a voltage source to drive the pixels PX. Also, in a certain period of the sensing period, the first scan signal SC and the second scan signal SS may be respectively provided to the first scan lines SCL and the second scan lines SSL. In the pixels PX in a row to which the first scan signal SC and the second scan signal SS are provided, the second transistor T2 and the third transistor T3 may be turned on. When the second transistor T2 is turned on, the reference voltage from the data line DL may be transmitted to the first node Na. When the third transistor T3 is turned on, the first switching device SW1 may be turned on in response to the first control signal S1, and the initialization voltage Vint may be provided to a node, which is connected to the second electrode of the first transistor T1, through the sensing line SL. Then, the first switching device SW1 may be turned off, and the second switching device SW2 may be turned on in response to the second control signal S2; thus, the second electrode of the first transistor T1 may be electrically connected to the sensing line SL. The reference voltage is applied to the first node Na, and the first transistor T1 is turned on. Accordingly, in the pixels PX in a corresponding row, a current corresponding to the reference voltage may be generated, and the current may be provided to the sensing line SL via the third transistor T3 of the pixels PX. The sensing line SL may have a certain resistance value, and thus, a voltage corresponding to a certain current flowing in a corresponding pixel PX may be applied to each sensing line SL. The voltage applied to the sensing line SL may be stored in a line capacitor CLine parasitically generated in the sensing line SL. The voltage stored in the sensing line SL may include characteristic information of the first transistor T1 included in the pixel PX of a currently sensed row. The current flowing in the first transistor T1 according to the reference voltage may correspond to a threshold voltage, mobility, and deterioration information of the first transistor T1. A method of extracting the characteristic information of the pixel PX is not limited to the embodiments described above. For example, the characteristic information of the pixel PX may be extracted in various ways that are well known.
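The row-sensing sequence just described can be condensed into a toy numerical model. Treating the sensing line as a simple resistance with the parasitic capacitor CLine holding the developed voltage is an assumption for illustration; all component values below are placeholders.

```python
V_INT = 0.5  # initialization voltage Vint (placeholder value)

def sense_pixel(pixel_current_a: float, r_line_ohm: float = 10_000.0,
                adc_bits: int = 10, v_ref: float = 3.3) -> int:
    """One column's digital sensing result for the currently selected row."""
    v_line = V_INT                          # SW1 on (signal S1): SL precharged to Vint, then SW1 off
    v_line += pixel_current_a * r_line_ohm  # SW2 on (signal S2): the T1 current set by the
                                            # reference voltage develops a voltage on SL,
                                            # held by the parasitic line capacitor CLine
    v_line = min(v_line, v_ref)             # ADC input range clamp
    return round(v_line / v_ref * (2 ** adc_bits - 1))  # ADC 146 code, buffered in memory 148
```

A pixel whose driving transistor has drifted (e.g., a higher threshold voltage and therefore a smaller current at the same reference voltage) produces a smaller code, which is the deviation the controller 160 later corrects.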
During the driving period, the data DATA, which is output from the controller 160, may be input to the data driver 150, and the data driver 150 may generate the data signals corresponding to the data DATA and output the generated data signal DS to the data lines DL. During the driving period, the first and second scan signals SC and SS may be respectively provided to the first and second scan lines SCL and SSL. In the pixels PX of a row to which the first and second scan signals SC and SS are transmitted, the second and third transistors T2 and T3 may be turned on. When the second transistor T2 is turned on, the data signal from the data line DL may be transmitted to the first node Na of the corresponding pixel PX. When the third transistor T3 is turned on, the initialization voltage Vint from the sensing line SL may be transmitted to the second node Nb of the corresponding pixel PX. Accordingly, a voltage between the first node Na and the second node Nb may be charged in the capacitor Cst. The first transistor T1 is turned on, and the turned-on first transistor T1 may provide a driving current corresponding to the data signal to the organic light-emitting diode OLED. Accordingly, the driving current flows from the driving power line PL in a current path via the first transistor T1 and the organic light-emitting diode OLED. Then, the organic light-emitting diode OLED may emit light at a brightness corresponding to the driving current. Because the data signal is generated according to the compensated data DATA, the characteristic deviation between the pixels PX may be compensated for, and thus, images having uniform quality may be displayed on the display panel.
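How the stored sensing data turns into compensated data DATA is not detailed in the text; the following sketch uses the textbook square-law model of the driving transistor as an assumed stand-in. Given a sensed threshold voltage and gain for one pixel, it solves for the data voltage that reproduces the current of an ideal reference pixel at the same gray level. All parameter values are placeholders.

```python
def compensated_voltage(gray: int, vth_sensed: float, k_sensed: float,
                        vth_ref: float = 1.0, k_ref: float = 1.0e-5,
                        gamma: float = 2.2, v_swing: float = 4.0) -> float:
    """Data voltage making a deviated pixel match the reference pixel's current."""
    lum = (gray / 255.0) ** gamma                   # relative target luminance
    i_target = k_ref * (lum ** 0.5 * v_swing) ** 2  # ideal pixel's current at this gray level
    # Solve k_sensed * (V - vth_sensed)^2 = i_target for the deviated pixel.
    return vth_sensed + (i_target / k_sensed) ** 0.5
```

Under this model, a pixel with a 0.3 V higher sensed threshold simply receives a data voltage shifted up by 0.3 V (and rescaled if its gain also deviates), so all pixels deliver the same driving current for the same gray level.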
FIG. 4 schematically illustrates a scan driver according to some embodiments. Referring to FIG. 4, the scan driver 130 may include first to nth stages ST1 to STn. The first to nth stages ST1 to STn may sequentially output first scan signals SC1 to SCn and second scan signals SS1 to SSn respectively to first scan lines and second scan lines, in one frame period. Each of the first to nth stages ST1 to STn may be connected to any one of the first scan lines SCL and any one of the second scan lines SSL. Each of the first to nth stages ST1 to STn may receive at least one clock signal CK and at least one voltage signal VG, generate the first scan signal SC to provide the same to the first scan line SCL, and generate the second scan signal SS to provide the same to the second scan line SSL. For example, the ith stage STi may provide the first scan signal SCi to the first scan line SCL and the second scan signal SSi to the second scan line SSL. That is, each of the first to nth stages ST1 to STn may provide the first and second scan signals SC and SS to the first and second scan lines SCL and SSL. Each of the first to nth stages ST1 to STn may provide a carry signal CR to a front-end stage or a rear-end stage, in response to one of carry clock signals. The front-end stage may be at least one previous stage, and the rear-end stage may be at least one subsequent stage. FIG. 5 schematically illustrates an arbitrary stage forming a scan driver, according to some embodiments. Referring to FIG. 5, a stage ST may include an output controller 134, a node controller 131 controlling a first control node Q, and an inverter INV inverting a voltage of the first control node Q and providing the voltage to a second control node QB. The node controller 131 and the inverter INV may include at least one transistor and at least one capacitor. The output controller 134 may include a pull-up transistor SWFU for outputting an on-voltage and a pull-down transistor SWFD for outputting an off-voltage. When the pull-up transistor SWFU is turned on, a signal of a high voltage may be output according to the clock signal CK. The pull-up transistor SWFU may include a first pull-up transistor configured to output a first scan signal SC of a high voltage, a second pull-up transistor configured to output a second scan signal SS of a high voltage, and a third pull-up transistor configured to output a carry signal CR of a high voltage. When the pull-down transistor SWFD is turned on, a signal of a low voltage may be output in response to the voltage signal VG. The pull-down transistor SWFD may include a first pull-down transistor configured to output a first scan signal SC of a low voltage, a second pull-down transistor configured to output a second scan signal SS of a low voltage, and a third pull-down transistor configured to output a carry signal CR of a low voltage. FIG. 6 schematically illustrates a portion of a stage, according to some embodiments. Each of the first to nth stages ST1 to STn may include a plurality of nodes, and some of the nodes are referred to as the first to third output nodes N1 to N3 and the first and second control nodes Q and QB. Hereinafter, an arbitrary stage ST for outputting the first and second scan signals SC and SS to an arbitrary row of the pixel unit 110 is described as an example. A first clock signal SC_CK, a second clock signal SS_CK, and a third clock signal CR_CK may be provided to the stage ST. The first clock signal SC_CK, the second clock signal SS_CK, and the third clock signal CR_CK may be square-wave signals in which a high voltage and a low voltage are repeatedly shown. Here, a high-voltage period may be less than a low-voltage period. The high-voltage period may correspond to a pulse width of a scan signal and may be variously set according to a structure of the pixel circuit PC. A pulse width of the first and second scan signals SC and SS may be a period from a point in time when an off-voltage level (hereinafter, an off voltage) transitions to an on-voltage level (hereinafter, an on voltage), to a point in time when the transition from the on voltage to the off voltage is completed. The stage ST may include the node controller 131, an inverter 133, and an output controller 134. The output controller 134 may include a first output controller 135, a second output controller 137, and a third output controller 139. The node controller 131 may be connected between a first voltage input terminal V1 and a second voltage input terminal V2. The node controller 131 may control a voltage of the first control node Q according to a start signal (e.g., an external signal STV or a jth carry signal CRj) applied to an input terminal IN, a kth carry signal CRk applied to a carry input terminal CRI, a first voltage VDD applied to the first voltage input terminal V1, and a second voltage VSS1 applied to the second voltage input terminal V2. Here, the jth carry signal CRj and the kth carry signal CRk may each be a carry signal of a front-end stage or a rear-end stage. The front-end stage may be at least one previous stage, and the rear-end stage may be at least one subsequent stage. High-voltage periods of the jth carry signal CRj and the kth carry signal CRk do not overlap each other. The first voltage VDD may be set as, for example, an on voltage at which a transistor is turned on. The second voltage VSS1 may be lower than the first voltage VDD and set as, for example, an off voltage. The node controller 131 may include a first transistor, a second transistor, and a third transistor. The first transistor may include a 1-1 transistor T1-1 and a 1-2 transistor T1-2 that are connected between the input terminal IN and the first control node Q in series. Gates of the 1-1 transistor T1-1 and the 1-2 transistor T1-2 may be connected to the input terminal IN.
The 1-1 transistor T1-1 and the 1-2 transistor T1-2 may be turned on in response to start signals STV/CRj of a high voltage provided to the input terminal IN and may provide the start signals STV/CRj to the first control node Q. The second transistor may include a 2-1 transistor T2-1 and a 2-2 transistor T2-2 that are connected between the first control node Q and the second voltage input terminal V2 in series. Gates of the 2-1 transistor T2-1 and the 2-2 transistor T2-2 may be connected to the carry input terminal CRI. The 2-1 transistor T2-1 and the 2-2 transistor T2-2 may be turned on when the kth carry signal CRk having a high voltage is supplied, and may be configured to set a voltage of the first control node Q as the second voltage VSS1. An intermediate node (a common electrode) between the 1-1 transistor T1-1 and the 1-2 transistor T1-2 and an intermediate node (a common electrode) between the 2-1 transistor T2-1 and the 2-2 transistor T2-2 may be connected to the third transistor. The third transistor may include a 3-1 transistor T3-1 and a 3-2 transistor T3-2 that are connected between the first voltage input terminal V1 and the intermediate nodes of the first and second transistors in series. Gates of the 3-1 transistor T3-1 and the 3-2 transistor T3-2 may be connected to the first control node Q. The 3-1 transistor T3-1 and the 3-2 transistor T3-2 may be turned on or off according to the voltage of the first control node Q. The third transistor may be turned on when the first control node Q has a high voltage and may maintain the levels of the intermediate nodes of the first and second transistors at high levels, thus reducing the leakage current from the first control node Q. The first control node Q may be set (pre-charged) to have a high voltage by the start signal STV/CRj and set (discharged) to have a low voltage by the kth carry signal CRk. The inverter 133 may be connected between the first control node Q and the second control node QB. The inverter 133 may invert the voltage of the first control node Q and provide the inverted voltage to the second control node QB. The inverter 133 may include at least one transistor. The first output controller 135 may output the first clock signal SC_CK or a third voltage VSS2 to a first output terminal OUT1 connected to a first output node N1, according to the voltages of the first control node Q and the second control node QB. The third voltage VSS2 may be set to be lower than the second voltage VSS1. The first output controller 135 may include a fourth transistor T4 and a fifth transistor T5 connected between a first clock input terminal CLK1 and a third voltage input terminal V3. The first output controller 135 may further include a first capacitor C1. The fourth transistor T4 may be connected between the first clock input terminal CLK1 and the first output terminal OUT1. A gate of the fourth transistor T4 may be connected to the first control node Q. The fourth transistor T4 may be turned on or off according to a voltage of the first control node Q. The fourth transistor T4 may be a first pull-up transistor. The fourth transistor T4 may be turned on when the first control node Q is set to have a high voltage and may output the first clock signal SC_CK having a high voltage as a high voltage of the first scan signal SC. The fifth transistor T5 may be connected between the first output terminal OUT1 and the third voltage input terminal V3. A gate of the fifth transistor T5 may be connected to the second control node QB.
The fifth transistor T5 may be turned on or off according to the voltage of the second control node QB. The fifth transistor T5 may be a first pull-down transistor. The fifth transistor T5 may be turned on when the second control node QB is set to have a high voltage, and may output the third voltage VSS2 as a low voltage of the first scan signal SC. The first capacitor C1 may be connected between the first output node N1 and the first control node Q. The fourth transistor T4 may be turned on when the first control node Q is charged to have a high voltage, the first clock signal SC_CK having a high voltage may be output as a high voltage of the first scan signal SC, and in this case, the voltage of the first control node Q may be bootstrapped by the first capacitor C1. The second output controller 137 may output the second clock signal SS_CK or the third voltage VSS2 to the second output terminal OUT2 connected to the second output node N2, according to the voltages of the first control node Q and the second control node QB. The second output controller 137 may include a sixth transistor T6 and a seventh transistor T7 connected between a second clock input terminal CLK2 and the third voltage input terminal V3. The second output controller 137 may further include a second capacitor C2. The sixth transistor T6 may be connected between the second clock input terminal CLK2 and the second output terminal OUT2. A gate of the sixth transistor T6 may be connected to the first control node Q. The sixth transistor T6 may be turned on or off according to the voltage of the first control node Q. The sixth transistor T6 may be a second pull-up transistor. The sixth transistor T6 may be turned on when the first control node Q is set to have a high voltage and may output the second clock signal SS_CK having a high voltage as a high voltage of the second scan signal SS. The seventh transistor T7 may be connected between the second output terminal OUT2 and the third voltage input terminal V3. A gate of the seventh transistor T7 may be connected to the second control node QB. The seventh transistor T7 may be turned on or off according to the voltage of the second control node QB. The seventh transistor T7 may be a second pull-down transistor. The seventh transistor T7 may be turned on when the second control node QB is set to have a high voltage and may output the third voltage VSS2 as a low voltage of the second scan signal SS. The second capacitor C2 may be connected between the second output node N2 and the first control node Q. The sixth transistor T6 may be turned on when the first control node Q is charged to have a high voltage, the second clock signal SS_CK may be output as a high voltage of the second scan signal SS, and in this case, the voltage of the first control node Q may be bootstrapped by the second capacitor C2. The third output controller 139 may output the third clock signal CR_CK or the second voltage VSS1 to a third output terminal OUT3 connected to the third output node N3, according to the voltages of the first control node Q and the second control node QB. The third output controller 139 may include an eighth transistor T8 and a ninth transistor T9 connected between the third clock input terminal CLK3 and the second voltage input terminal V2. The eighth transistor T8 may be connected between the third clock input terminal CLK3 and the third output terminal OUT3. A gate of the eighth transistor T8 may be connected to the first control node Q.
The eighth transistor T8 may be turned on or off according to the voltage of the first control node Q. The eighth transistor T8 may be a third pull-up transistor. The eighth transistor T8 may be turned on when the first control node Q is set to have a high voltage and may output the third clock signal CR_CK having a high voltage as a high voltage of the carry signal CR. The ninth transistor T9 may be connected between the third output terminal OUT3 and the second voltage input terminal V2. A gate of the ninth transistor T9 may be connected to the second control node QB. The ninth transistor T9 may be turned on or off according to the voltage of the second control node QB. The ninth transistor T9 may be a third pull-down transistor. The ninth transistor T9 may be turned on when the second control node QB is set to have a high voltage and may output the second voltage VSS1 as a low voltage of the carry signal CR. When a start signal having a high voltage is provided to the input terminal IN, the node controller 131 may set the first control node Q to have a high voltage, the first output controller 135 may output the first clock signal SC_CK having a high voltage as the first scan signal SC, the second output controller 137 may output the second clock signal SS_CK having a high voltage as the second scan signal SS, and the third output controller 139 may output the third clock signal CR_CK having a high voltage as the carry signal CR. In this case, the second control node QB may be set to have a low voltage by the inverter INV. Then, when the voltage of the first control node Q is changed to a low voltage, the first output controller 135 may output the third voltage VSS2 of a low voltage as the first scan signal SC, the second output controller 137 may output the third voltage VSS2 of a low voltage as the second scan signal SS, and the third output controller 139 may output the second voltage VSS1 of a low voltage as the carry signal CR. The second control node QB may be set to have a high voltage by the inverter INV.
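The pull-up/pull-down behavior common to all three output controllers reduces to a small truth table, sketched below with idealized switches (the bootstrap action of C1 and C2 is noted but not modeled; the rail and clock values are whatever the caller supplies):

```python
def stage_output(q_high: bool, qb_high: bool,
                 clk_level: float, off_rail: float) -> float:
    """Output node level for one output controller of the stage ST.

    Q high turns on the pull-up transistor, passing the clock input (with the
    bootstrap capacitor lifting Q in the real circuit); QB high turns on the
    pull-down transistor, tying the output to its low rail (VSS2 for OUT1 and
    OUT2, VSS1 for OUT3).
    """
    if q_high and not qb_high:
        return clk_level
    if qb_high and not q_high:
        return off_rail
    raise ValueError("Q and QB are driven complementarily by the inverter")
```

For example, with Q set high, all three outputs track their respective clock inputs, which is exactly the scan and carry pulse generation described above.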
FIGS. 7A and 7B are diagrams illustrating pulses of a clock signal and a scan signal, according to some embodiments. Referring to FIGS. 7A and 7B, the first clock signal SC_CK, the second clock signal SS_CK, and the third clock signal CR_CK may each be a signal having a high-voltage pulse. The high-voltage pulse may have a pulse width including a rising time TR, which is a period when a low voltage transitions (rises) to a high voltage, a falling time TF, which is a period when a high voltage transitions (drops) to a low voltage, and an on time TO, which is a period when a high voltage is maintained. A pulse of the first clock signal SC_CK may have a first pulse width TW1 including a first rising time TR1, a first falling time TF1, and a first on time TO1. A pulse of the second clock signal SS_CK may have a second pulse width TW2 including a second rising time TR2, a second falling time TF2, and a second on time TO2. A pulse of the third clock signal CR_CK may have a third pulse width TW3 including a third rising time TR3, a third falling time TF3, and a third on time TO3. Unlike a reference clock signal Ref, which has a rising edge RE and a falling edge FE that rise and fall vertically, the rising edges RE and falling edges FE of the first clock signal SC_CK, the second clock signal SS_CK, and the third clock signal CR_CK may rise or fall with certain gradients. The gradient of the rising edge RE may be determined according to the rising time TR, and the gradient of the falling edge FE may be determined according to the falling time TF. As resolution increases, the RC load of a clock line may increase, and as charging and discharging of the clock line continue, local overheating may be observed because of heat emission from the portion of the display panel to which an IC is attached and from a wiring part that includes the clock lines configured to apply signals to the scan driver. The level shifter 170 may output a reference clock signal Ref when a slew rate is 100%. The heat emission from the display panel may be decreased by setting the slew rate of the level shifter 170 to be lower than about 100%, thereby increasing the rising time TR and/or the falling time TF of a clock signal. According to some embodiments, the rising times TR of the first clock signal SC_CK, the second clock signal SS_CK, and the third clock signal CR_CK and the falling times TF of the second clock signal SS_CK and the third clock signal CR_CK may be set to be long, and the falling time TF of the first clock signal SC_CK may be set to be short. For example, gradients of the rising edges RE of the first clock signal SC_CK, the second clock signal SS_CK, and the third clock signal CR_CK may be identical to each other, and a gradient of the falling edge FE of the first clock signal SC_CK may be greater than gradients of the falling edges FE of the second clock signal SS_CK and the third clock signal CR_CK. Voltages of the first clock signal SC_CK, the second clock signal SS_CK, and the third clock signal CR_CK may increase (be changed) from low voltages to high voltages. The high voltages of the first clock signal SC_CK, the second clock signal SS_CK, and the third clock signal CR_CK may be maintained from a second point in time t2 to a third point in time t3. The voltage of the first clock signal SC_CK may decrease (be changed) from a high voltage to a low voltage from the third point in time t3 to a fourth point in time t4. The voltages of the second clock signal SS_CK and the third clock signal CR_CK may decrease (be changed) from a high voltage to a low voltage from the third point in time t3 to a fifth point in time t5. The first clock signal SC_CK may be pulled down faster than the second clock signal SS_CK and the third clock signal CR_CK. The first rising time TR1 of the first clock signal SC_CK, the second rising time TR2 of the second clock signal SS_CK, and the third rising time TR3 of the third clock signal CR_CK may be identical to each other. The second falling time TF2 of the second clock signal SS_CK may be identical to the third falling time TF3 of the third clock signal CR_CK, and the first falling time TF1 of the first clock signal SC_CK may be shorter than the second falling time TF2 of the second clock signal SS_CK and the third falling time TF3 of the third clock signal CR_CK. The first falling time TF1 of the first clock signal SC_CK may be shorter than the first rising time TR1 thereof. The first on time TO1 of the first clock signal SC_CK, the second on time TO2 of the second clock signal SS_CK, and the third on time TO3 of the third clock signal CR_CK may be identical to each other.
Accordingly, the second pulse width TW2 of the second clock signal SS_CK may be identical to the third pulse width TW3 of the third clock signal CR_CK, and the first pulse width TW1 of the first clock signal SC_CK may be less than the second pulse width TW2 of the second clock signal SS_CK and the third pulse width TW3 of the third clock signal CR_CK. As described above with reference to FIG. 6, the first scan signal SC, the second scan signal SS, and the carry signal CR, which are output from the stage ST, may be generated according to the first clock signal SC_CK, the second clock signal SS_CK, and the third clock signal CR_CK, respectively. Therefore, as illustrated in FIG. 7B, waveforms of the first scan signal SC, the second scan signal SS, and the carry signal CR may be identical to those of the first clock signal SC_CK, the second clock signal SS_CK, and the third clock signal CR_CK, respectively. That is, a rising time of the first scan signal SC may be identical to rising times of the second scan signal SS and the carry signal CR, and a falling time of the first scan signal SC may be shorter than falling times of the second scan signal SS and the carry signal CR. The first clock signal SC_CK may be a signal used to generate the first scan signal SC configured to turn on the second transistor T2 of the pixel circuit PC to apply a data signal to a pixel. According to some embodiments, the heat emission may be reduced by setting the rising times TR and the falling times TF of the first clock signal SC_CK, the second clock signal SS_CK, and the third clock signal CR_CK to be longer than those of the reference clock signal Ref, and the image quality may be maintained by setting the falling time TF of the first clock signal SC_CK to be shorter than the falling times TF of the second clock signal SS_CK and the third clock signal CR_CK. According to some embodiments, the falling time TF of the first clock signal SC_CK may be maintained as illustrated in FIG. 7A, and according to a heat emission target of the display panel, the rising time of the first clock signal SC_CK and the rising and falling times of the second clock signal SS_CK and the third clock signal CR_CK may be changed.
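The pulse bookkeeping in FIGS. 7A and 7B lends itself to a compact sketch. The dataclass below and the linear scaling of edge times with slew rate are assumptions for illustration; the text states only that a slew rate below 100% lengthens the rising and/or falling times, and the microsecond values are placeholders.

```python
from dataclasses import dataclass

@dataclass
class ClockPulse:
    tr: float  # rising time TR
    to: float  # on time TO (high voltage maintained)
    tf: float  # falling time TF

    @property
    def width(self) -> float:
        """Pulse width TW = TR + TO + TF, i.e., from the start of the
        off-to-on transition to the end of the on-to-off transition."""
        return self.tr + self.to + self.tf

def slowed(pulse: ClockPulse, slew_pct: float) -> ClockPulse:
    """Gentler edges for a level-shifter slew rate below 100%."""
    s = 100.0 / slew_pct
    return ClockPulse(tr=pulse.tr * s, to=pulse.to, tf=pulse.tf * s)

# FIG. 7A ordering: shared TR and TO, but SC_CK falls faster,
# so TW1 < TW2 = TW3 (values in microseconds, placeholders).
sc_ck = ClockPulse(tr=2.0, to=4.0, tf=0.5)
ss_ck = ClockPulse(tr=2.0, to=4.0, tf=2.0)
cr_ck = ClockPulse(tr=2.0, to=4.0, tf=2.0)
assert sc_ck.width < ss_ck.width == cr_ck.width
```

The FIG. 8A and FIG. 9A variants described next then amount to different choices of which of the three pulses gets the fast edges, with the widths recomputed the same way.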
FIGS. 8A and 8B are diagrams illustrating pulses of a clock signal and a scan signal, according to some embodiments. The embodiments of FIGS. 8A and 8B are different from those of FIGS. 7A and 7B in that the rising time TR of the first clock signal SC_CK is changed. Referring to FIGS. 8A and 8B, the rising times TR and the falling times TF of the second clock signal SS_CK and the third clock signal CR_CK may be set to be long, and the rising time TR and the falling time TF of the first clock signal SC_CK may be set to be short. For example, the gradients of the rising edge RE and the falling edge FE of the first clock signal SC_CK may be greater than those of the rising edges RE and the falling edges FE of the second clock signal SS_CK and the third clock signal CR_CK. The voltage of the first clock signal SC_CK may increase from a low voltage to a high voltage from the first point in time t1 to the second point in time t2. The voltages of the second clock signal SS_CK and the third clock signal CR_CK may increase from low voltages to high voltages from the first point in time t1 to the third point in time t3. The high voltage of the first clock signal SC_CK may be maintained from the second point in time t2 to the fourth point in time t4. The high voltages of the second clock signal SS_CK and the third clock signal CR_CK may be maintained from the third point in time t3 to the fourth point in time t4. The voltage of the first clock signal SC_CK may decrease from the high voltage to the low voltage from the fourth point in time t4 to the fifth point in time t5. The voltages of the second clock signal SS_CK and the third clock signal CR_CK may decrease from the high voltage to the low voltage from the fourth point in time t4 to a sixth point in time t6. The first clock signal SC_CK may be pulled up and down faster than the second clock signal SS_CK and the third clock signal CR_CK. The second rising time TR2 of the second clock signal SS_CK may be identical to the third rising time TR3 of the third clock signal CR_CK, and the first rising time TR1 of the first clock signal SC_CK may be shorter than the second rising time TR2 of the second clock signal SS_CK and the third rising time TR3 of the third clock signal CR_CK. The second falling time TF2 of the second clock signal SS_CK may be identical to the third falling time TF3 of the third clock signal CR_CK, and the first falling time TF1 of the first clock signal SC_CK may be shorter than the second falling time TF2 of the second clock signal SS_CK and the third falling time TF3 of the third clock signal CR_CK. The second on time TO2 of the second clock signal SS_CK may be identical to the third on time TO3 of the third clock signal CR_CK, and the first on time TO1 of the first clock signal SC_CK may be longer than the second on time TO2 of the second clock signal SS_CK and the third on time TO3 of the third clock signal CR_CK. The second pulse width TW2 of the second clock signal SS_CK may be identical to the third pulse width TW3 of the third clock signal CR_CK, and the first pulse width TW1 of the first clock signal SC_CK may be less than the second pulse width TW2 of the second clock signal SS_CK and the third pulse width TW3 of the third clock signal CR_CK. As illustrated in FIG. 8B, waveforms of the first scan signal SC, the second scan signal SS, and the carry signal CR may be identical to those of the first clock signal SC_CK, the second clock signal SS_CK, and the third clock signal CR_CK, respectively. That is, the rising time of the first scan signal SC may be shorter than the rising times of the second scan signal SS and the carry signal CR, and the falling time of the first scan signal SC may be shorter than the falling times of the second scan signal SS and the carry signal CR. According to some embodiments, the pre-charging of the first scan line SCL according to the first scan signal SC may be reinforced by setting the first rising time TR1 of the first clock signal SC_CK to be short and the first on time TO1 thereof to be long. FIGS. 9A and 9B are diagrams illustrating pulses of a clock signal and a scan signal, according to some embodiments. Compared to the embodiments of FIGS. 8A and 8B, in the embodiments of FIGS. 9A and 9B, the third rising time TR3 and the third falling time TF3 of the third clock signal CR_CK are changed. Referring to FIGS. 9A and 9B, the third rising time TR3 and the third falling time TF3 of the third clock signal CR_CK may be set to be short, and the third on time TO3 thereof may be set to be long. For example, the gradients of the rising edges RE of the first clock signal SC_CK and the third clock signal CR_CK may be greater than the gradient of the rising edge RE of the second clock signal SS_CK.
According to some embodiments, the gradient of the falling edge FE of the first clock signal SC_CK may be greater than the gradients of the falling edges FE of the second clock signal SS_CK and the third clock signal CR_CK. According to some embodiments, the gradient of the falling edge FE of the first clock signal SC_CK may be identical to the gradient of the falling edge FE of the third clock signal CR_CK. Alternatively, the gradient of the falling edge FE of the first clock signal SC_CK may be less than the gradient of the falling edge FE of the third clock signal CR_CK. The voltages of the first clock signal SC_CK and the third clock signal CR_CK may increase from a low voltage to a high voltage from the first point in time t1 to the second point in time t2. The voltage of the second clock signal SS_CK may increase from a low voltage to a high voltage from the second point in time t2 to the third point in time t3. The high voltage of the first clock signal SC_CK may be maintained from the second point in time t2 to the fourth point in time t4. The high voltage of the second clock signal SS_CK may be maintained from the third point in time t3 to the fourth point in time t4. The voltage of the first clock signal SC_CK may fall from the high voltage to the low voltage from the fourth point in time t4 to the fifth point in time t5. The voltage of the second clock signal SS_CK may fall from the high voltage to the low voltage from the fourth point in time t4 to the sixth point in time t6. The voltage of the third clock signal CR_CK may fall from the high voltage to the low voltage from the fifth point in time t5 to the sixth point in time t6. The first clock signal SC_CK and the third clock signal CR_CK may be pulled up faster than the second clock signal SS_CK. The first clock signal SC_CK may be pulled down faster than the second clock signal SS_CK and the third clock signal CR_CK. The first rising time TR1 of the first clock signal SC_CK may be identical to the third rising time TR3 of the third clock signal CR_CK and shorter than the second rising time TR2 of the second clock signal SS_CK. The first falling time TF1 of the first clock signal SC_CK, the second falling time TF2 of the second clock signal SS_CK, and the third falling time TF3 of the third clock signal CR_CK may be different from each other. The first falling time TF1 of the first clock signal SC_CK may be identical to or different from the third falling time TF3 of the third clock signal CR_CK. The first on time TO1 of the first clock signal SC_CK, the second on time TO2 of the second clock signal SS_CK, and the third on time TO3 of the third clock signal CR_CK may be different from each other. The first on time TO1 of the first clock signal SC_CK and the third on time TO3 of the third clock signal CR_CK may be longer than the second on time TO2 of the second clock signal SS_CK. The second pulse width TW2 of the second clock signal SS_CK may be identical to the third pulse width TW3 of the third clock signal CR_CK, and the first pulse width TW1 of the first clock signal SC_CK may be less than the second pulse width TW2 of the second clock signal SS_CK and the third pulse width TW3 of the third clock signal CR_CK. As illustrated in FIG. 9B, the waveforms of the first scan signal SC, the second scan signal SS, and the carry signal CR may be identical to those of the first clock signal SC_CK, the second clock signal SS_CK, and the third clock signal CR_CK, respectively.
That is, the rising time of the first scan signal SC may be identical to that of the carry signal CR and shorter than that of the second scan signal SS, and the falling time of the first scan signal SC may be shorter than that of the second scan signal SS. The falling time of the first scan signal SC may be identical to or different from the falling time of the carry signal CR. According to some embodiments, the stage operation characteristics of the scan driver may be improved by setting the rising time TR and the falling time TF of the third clock signal CR_CK to be short and the third on time TO3 thereof to be long.

FIG. 10 is a diagram illustrating a pulse of a clock signal, according to some embodiments. The embodiments described with respect to FIG. 10 are different from the embodiments described with respect to FIG. 7A in that the first clock signal SC_CK is combined with the second clock signal SS_CK. In this case, in the embodiments described with respect to FIG. 6, one of the first output controller 135 and the second output controller 137 may be omitted. For example, the second output controller 137 of each stage ST may be omitted, and the level shifter 170 may not output the second clock signal SS_CK and may output the first clock signal SC_CK to the scan driver 130. The first clock signal SC_CK may be output to the first scan line SCL and the second scan line SSL as the first scan signal SC and the second scan signal SS through the first output terminal OUT1 of the first output controller 135. The first rising time TR1 of the first clock signal SC_CK and the second clock signal SS_CK may be identical to the third rising time TR3 of the third clock signal CR_CK. The first falling time TF1 of the first clock signal SC_CK and the second clock signal SS_CK may be shorter than the third falling time TF3 of the third clock signal CR_CK.

In the embodiments of the present disclosure, the rising times TR and the falling times TF of the second clock signal SS_CK and the third clock signal CR_CK, which are irrelevant to the image quality, may be set to be long to reduce the heat emission from the display panel, and the falling time TF of the first clock signal SC_CK, which is relevant to the image quality, may be shorter than the rising time TR thereof. In the embodiments of the present disclosure, a rising time TR, a falling time TF, and an on time TO of a clock signal may indicate a rising time TR, a falling time TF, and an on time TO of a pulse of the clock signal, respectively. The circuit of the stage ST of FIG. 6 is illustrative; the circuit elements forming the node controller 131, the first output controller 135, the second output controller 137, and the third output controller 139, and the connection relationships therebetween, may vary, and the one or more embodiments of the disclosure may be applied to any stage ST configured to output the first scan signal SC and the second scan signal SS to the pixel circuit PC of FIG. 2 by respectively using the first clock signal SC_CK, the second clock signal SS_CK, and the third clock signal CR_CK.

According to one or more embodiments, there may be provided a scan driver for reducing the heat emission from a display panel and a display apparatus including the scan driver. Effects of the disclosure are not limited to those stated above and may be variously expanded without departing from the scope of the disclosure. It should be understood that embodiments described herein should be considered in a descriptive sense only and not for purposes of limitation.
Descriptions of features or aspects within each embodiment should typically be considered as available for other similar features or aspects in other embodiments. While one or more embodiments have been described with reference to the figures, it will be understood by those of ordinary skill in the art that various changes in form and details may be made therein without departing from the spirit and scope as defined by the following claims, and their equivalents. | 56,139 |
11862107 | DETAILED DESCRIPTION

Advantages and characteristics of the present disclosure and a method of achieving the advantages and characteristics will be clear by referring to exemplary aspects described below in detail together with the accompanying drawings. However, the present disclosure is not limited to the exemplary aspects disclosed herein but may be implemented in various forms. The exemplary aspects are provided by way of example only so that those skilled in the art can fully understand the disclosures of the present disclosure and the scope of the present disclosure. Therefore, the present disclosure will be defined only by the scope of the appended claims.

The shapes, sizes, ratios, angles, numbers, and the like illustrated in the accompanying drawings for describing the exemplary aspects of the present disclosure are merely examples, and the present disclosure is not limited thereto. Like reference numerals generally denote like elements throughout the specification. Further, in the following description of the present disclosure, a detailed explanation of known related technologies may be omitted to avoid unnecessarily obscuring the subject matter of the present disclosure.

The terms such as "including," "having," and "comprising" used herein are generally intended to allow other components to be added unless the terms are used with the term "only". Any references to the singular may include the plural unless expressly stated otherwise. Components are interpreted to include an ordinary error range even if not expressly stated.

When the position relation between two parts is described using terms such as "on", "above", "below", and "next", one or more parts may be positioned between the two parts unless the terms are used with the term "immediately" or "directly". When an element or layer is disposed "on" another element or layer, it may be disposed directly on the other element or layer, or another layer or another element may be interposed therebetween.

Although the terms "first", "second", and the like are used for describing various components, these components are not limited by these terms. These terms are merely used for distinguishing one component from the other components. Therefore, a first component mentioned below may be a second component within the technical concept of the present disclosure. Like reference numerals generally denote like elements throughout the specification.

A size and a thickness of each component illustrated in the drawings are illustrated for convenience of description, and the present disclosure is not limited to the size and the thickness of the component illustrated. The features of the various aspects of the present disclosure can be partially or entirely coupled to or combined with each other and can be interlocked and operated in technically various ways, and the aspects can be carried out independently of or in association with each other.

In the present disclosure, a display apparatus may include a liquid crystal module (LCM) including a display panel and a driver for driving the display panel, an organic light emitting diode display module (OLED module), and a quantum dot module (QD module).
In addition, the display apparatus may also include complete products or final products including an LCM, OLED module or QD module, for example, a notebook computer, a television, a computer monitor, an automotive display apparatus or another vehicle display apparatus, and set electronic devices or set apparatuses such as mobile electronic devices including a smart phone or an electronic pad. Accordingly, the display apparatus according to the present disclosure may include application products or set apparatuses, that is, final products including the LCM, OLED or QD module, as well as display apparatuses such as the LCM, OLED or QD module themselves. If needed, the LCM, OLED or QD module including the display panel, the driver, and the like may be expressed as the "display apparatus", and an electronic device as the final product including the LCM, OLED or QD module may be expressed as the "set apparatus". For example, the display apparatus may include a display panel of LCD, OLED or QD, and a source printed circuit board (source PCB) as a controller for driving the display panel. Meanwhile, the set apparatus may further include a set PCB as a set controller, which is electrically connected to the source PCB, so as to control the entire set apparatus.

The display panel used for the present exemplary aspect may be any type of display panel, for example, a liquid crystal display panel, an organic light emitting diode (OLED) display panel, a quantum dot (QD) display panel, an electroluminescent display panel, and the like. The display panel is not limited to a particular display panel including a flexible substrate for an OLED display panel and a backplate support structure disposed beneath the display panel, thereby being capable of achieving bezel bending. The display panel used in the display apparatus according to an exemplary aspect of the present disclosure is not limited in shape and size.

More specifically, when the display panel is an OLED display panel, the display panel may include a plurality of gate lines, a plurality of data lines, and a plurality of pixels PXL (see FIG. 4A) provided at respective intersections between the gate lines and the data lines. In addition, the display panel may further include an array including thin film transistors as elements for selectively applying a voltage to each of the pixels, an OLED layer disposed on the array, and an encapsulation substrate or an encapsulation layer disposed on the array to cover the OLED layer. The encapsulation layer protects the thin film transistors and the OLED layer from external impact and suppresses the permeation of moisture or oxygen into the OLED layer. Layers formed on the array may include an inorganic light emitting layer, for example, a nano-sized material layer or a quantum dot layer, and the like. In the present disclosure, FIG. 1 illustrates an exemplary OLED display panel which may be integrated into display apparatuses.

FIG. 1 is a plan view illustrating an exemplary display apparatus which may be included in an electronic device. Referring to FIG. 1, a display apparatus 100 includes at least one active area in which an array of pixels is formed. One or more inactive areas may be disposed around the active area. That is, the inactive areas may be disposed at one or more side surfaces of the active area. In FIG. 1, the inactive areas surround the active area having a rectangular shape.
However, the shape of the active area and the shape/placement of the inactive areas adjacent to the active area are not limited to the example shown in FIG. 1. The active area and the inactive area may be in any shape suitable for the design of the electronic device employing the display apparatus 100. The shape of the active area may be, for example, a pentagonal shape, a hexagonal shape, a circular shape, an oval shape, and the like.

Each pixel in the active area may be associated with a pixel circuit. The pixel circuit may include one or more switching transistors and one or more driving transistors on a substrate 101. Each pixel circuit may be electrically connected to a gate line and a data line to communicate with one or more driving circuits, such as a gate driver and a data driver located in the inactive area. Each pixel may include an organic light emitting diode. Each driving circuit may be implemented with a thin film transistor (TFT) in the inactive area as shown in FIG. 1. Such a driving circuit may be referred to as a gate driver of a gate-in-panel (GIP) type. Also, some of the components, such as a data driver IC, may be mounted on a separate printed circuit board. Also, they may be coupled to a connection interface (pad/bump, pin, etc.) disposed in the inactive area using a circuit film such as a flexible printed circuit board (FPCB), chip-on-film (COF), tape-carrier-package (TCP), or the like. The inactive area may be bent together with the connection interface so that the printed circuit (COF, PCB, etc.) may be located on the back side of the display apparatus 100.

The display apparatus 100 may further include a power controller that supplies various voltages or currents to the pixel circuit, the data driver, the GIP, etc. or controls the supply. The power controller may also be referred to as a "power management IC (PMIC)". Also, the display apparatus 100 may include a voltage line for supplying high-potential power VDD (i.e., a high potential power line), a voltage line for supplying low-potential power VSS (i.e., a low potential power line) and a voltage line for supplying a reference voltage VREF, respectively, related to driving of the pixel circuit as shown in FIG. 1.

With a decrease in size of the display apparatus 100, an oxide semiconductor, which is advantageous for low-speed driving and efficient in power consumption, may be applied to the GIP. The oxide semiconductor is not limited to the GIP, but may also be used as a transistor for driving a pixel in the active area. Driving at a scanning rate of less than 60 Hz may be referred to as low-speed driving, and specifically, the scanning rate may be in the range of from 1 Hz to 5 Hz. Driving at a scanning rate of 60 Hz or more, in the range of from 120 Hz to 240 Hz, may be referred to as high-speed driving.

Meanwhile, the display apparatus 100 may further include various additional components for generating various signals or driving the organic light emitting diodes in the active area. The additional components for driving the organic light emitting diodes may include an inverter circuit, a multiplexer, an electrostatic discharge circuit and the like. The display apparatus 100 may also include additional components associated with functionalities other than driving the organic light emitting diodes.
For example, the display apparatus 100 may include additional components for providing a touch sensing functionality, a user authentication functionality (e.g., fingerprint scan), a multi-level pressure sensing functionality, a tactile feedback functionality and the like. The above-described additional components may be located in an external circuit connected to the inactive area and/or the connection interface. The voltage line for supplying low-potential power VSS may be disposed on an outer inactive area I/A of the display apparatus 100 so as to surround an active area A/A. This is to easily supply low-potential power to the cathode electrodes of all the organic light emitting diodes disposed in the active area A/A with a minimized electric resistance over a shortest distance.

FIG. 2 is a cross-sectional view of the active area A/A of the display apparatus as taken along a line I-I′. In the display apparatus 100, thin film transistors 102, 103, 104, 105, 106, and 108, organic light emitting diodes 112, 114, and 116, and various functional layers are located on the substrate 101.

The substrate 101 may be a glass or plastic substrate. If the substrate 101 is a plastic substrate, the substrate 101 may be made of a polyimide-based or polycarbonate-based material and thus may have flexibility. In particular, polyimide can be processed at a high temperature and applied by coating, and thus is widely used for a plastic substrate.

A buffer layer 130 is a functional layer for protecting the electrodes and lines from impurities such as alkali ions or the like coming out from the substrate 101 or lower layers. The buffer layer 130 may be made of silicon oxide SiOx, silicon nitride SiNx, or a multilayer thereof. The buffer layer 130 may include a multi-buffer 131 and/or an active buffer 132. The multi-buffer 131 may be formed by alternately laminating silicon nitride (SiNx) and silicon oxide (SiOx), and may delay diffusion of moisture and/or oxygen permeating into the substrate 101. The active buffer 132 protects the semiconductor layer 102 of the transistor and functions to block various kinds of defects introduced from the substrate 101. The active buffer 132 may be made of amorphous silicon a-Si, or the like.

The thin film transistor may have a structure in which the semiconductor layer 102, a gate insulating layer 103, a gate electrode 104, an interlayer insulating layer 105, and source and drain electrodes 106 and 108 are sequentially disposed. The semiconductor layer 102 is located on the buffer layer 130. The semiconductor layer 102 may be made of polysilicon p-Si, in which case a predetermined region may be doped with an impurity. In addition, the semiconductor layer 102 may be made of amorphous silicon a-Si, or may be made of various organic semiconductor materials such as pentacene. Further, the semiconductor layer 102 may be made of an oxide. The gate insulating layer 103 may be made of an insulating inorganic material, such as silicon oxide SiOx or silicon nitride (SiNx), or may also be made of an insulating organic material or the like. The gate electrode 104 may be made of various conductive materials such as magnesium (Mg), aluminum (Al), nickel (Ni), chromium (Cr), molybdenum (Mo), tungsten (W), gold (Au) or an alloy thereof. The interlayer insulating layer 105 may be made of an insulating material, such as silicon oxide SiOx or silicon nitride SiNx, or may also be made of an insulating organic material or the like.
A contact hole may be formed by selectively removing portions of the interlayer insulating layer 105 and the gate insulating layer 103 so as to expose source and drain regions. The source and drain electrodes 106 and 108 are formed as a single-layered or a multi-layered structure with an electrode material on the interlayer insulating layer 105. If needed, a passivation layer made of an inorganic insulating material may cover the source and drain electrodes 106 and 108.

A first planarization layer 107-1 may be located on the thin film transistor. The first planarization layer 107-1 protects the thin film transistor and the like and flattens an upper portion thereof. The first planarization layer 107-1 may have various shapes. The first planarization layer 107-1 may be made of one or more of acrylic-based resin, epoxy resin, phenol resin, polyamide-based resin, polyimide-based resin, unsaturated polyester-based resin, polyphenylene-based resin, and polyphenylene sulfide-based resin, but is not limited thereto. Various metal layers serving as lines and electrodes may be disposed on the first planarization layer 107-1. A second planarization layer 107-2 is located on the first planarization layer 107-1.

The planarization layer is implemented as two planarization layers because the number of signal lines increases as the display apparatus 100 is developed to a higher resolution; it is therefore difficult to place all lines in a single layer while ensuring a minimum gap between the lines, so an additional layer is needed. This additional layer (the second planarization layer) provides sufficient room for the placement of lines, which makes it easier to design the placement of lines/electrodes. Further, if a dielectric material is used for the planarization layers 107-1 and 107-2, the planarization layers 107-1 and 107-2 may be used for forming a capacitance between the metal layers.

The organic light emitting diode may have a structure in which an anode electrode 112, an organic light emitting layer 114, and a cathode electrode 116 are sequentially disposed. That is, the organic light emitting diode may include the anode electrode 112 formed on the planarization layers 107-1 and 107-2, the organic light emitting layer 114 located on the anode electrode 112, and the cathode electrode 116 located on the organic light emitting layer 114. The anode electrode 112 may be electrically connected to a drain electrode 108 of a driving thin film transistor through a connection electrode 108-2. When the organic light emitting display apparatus 100 is of a top-emission type, the anode electrode 112 may be made of an opaque conductive material having high reflectivity. For example, the anode electrode 112 may be made of silver (Ag), aluminum (Al), gold (Au), molybdenum (Mo), tungsten (W), chromium (Cr) or an alloy thereof. The connection electrode 108-2 may be made of the same material as the source and drain electrodes 106 and 108.

A bank 110 is formed in a region except for an emission region. Accordingly, the bank 110 has a bank hole exposing the anode electrode 112 corresponding to the emission region. The bank 110 may be made of an inorganic insulating material, such as a silicon nitride (SiNx) film or a silicon oxide SiOx film, or an organic insulating material, such as BCB, acrylic-based resin or imide-based resin. The organic light emitting layer 114 is disposed on the anode electrode 112 which is exposed by the bank 110.
The organic light emitting layer 114 may include a light emitting layer, an electron injection layer, an electron transport layer, a hole transport layer, a hole injection layer and the like. The cathode electrode 116 is disposed on the organic light emitting layer 114. When the organic light emitting display apparatus 100 is of a top-emission type, the cathode electrode 116 may be made of a transparent conductive material, such as indium tin oxide (ITO), indium zinc oxide (IZO), or the like. Thus, light generated from the organic light emitting layer 114 is emitted to an upper portion of the cathode electrode 116.

An encapsulation layer 120 is located on the cathode electrode 116. The encapsulation layer 120 blocks the permeation of oxygen and moisture from the outside in order to suppress oxidation of the light emitting material and the electrode material. When the organic light emitting diode is exposed to moisture or oxygen, a pixel shrinkage in which the emission region is reduced may occur, or dark spots may appear in the emission region. The encapsulation layer may be formed as an inorganic film made of glass, metal, aluminum oxide (AlOx) or a silicon (Si)-based material. Alternatively, the encapsulation layer may have a structure in which an organic film and an inorganic film are alternately laminated. The inorganic film serves to block the permeation of moisture or oxygen, and the organic film serves to planarize the surface of the inorganic film. The encapsulation layer is formed of a plurality of thin film layers so as to make the permeation path of moisture and oxygen longer and more complicated than with a single layer, which makes the permeation of moisture/oxygen into the organic light emitting diode difficult. Specifically, the encapsulation layer 120 may include a first inorganic insulating film 121, an organic insulating film 122 and a second inorganic insulating film 123. The first inorganic insulating film 121, the organic insulating film 122 and the second inorganic insulating film 123 may be sequentially disposed.

A barrier film 140 is disposed on the encapsulation layer 120 so as to encapsulate the entire substrate 101 including the organic light emitting diode. The barrier film 140 may be a phase difference film or an optically isotropic film. When the barrier film has optically isotropic characteristics, light incident into the barrier film is transmitted as it is without phase delay. Further, an organic film or an inorganic film may be further disposed on an upper or lower surface of the barrier film. The organic film or the inorganic film formed on the upper or lower surface of the barrier film serves to block the permeation of moisture or oxygen from the outside.

An adhesive layer 145 may be located between the barrier film 140 and the encapsulation layer 120. The adhesive layer 145 bonds the encapsulation layer 120 and the barrier film 140. The adhesive layer 145 may be a heat-curable or naturally curable adhesive. For example, the adhesive layer 145 may be made of a material such as a barrier pressure sensitive adhesive (B-PSA). A touch panel (film), a polarizing film, a top cover and the like may be further disposed on the barrier film 140.

FIG. 3A illustrates the configuration of a gate driver applied to the display apparatus. Referring to FIG. 3A, the GIP outputs an output signal SN(n) of a gate high voltage VGH while a node Q2 is deactivated to the gate high voltage VGH and a node QB is activated to a gate low voltage VGL.
Then, the GIP outputs an output signal SN(n) of the gate low voltage VGL while the node Q2 is activated to the gate low voltage VGL and the node QB is deactivated to the gate high voltage VGH. In other words, the GIP outputs the output signal SN(n) of the gate low voltage VGL from when the node Q is bootstrapped in synchronization with a timing when the node Q2 is activated. To this end, the GIP may include a Q2 controller, a QB controller, an output unit and a first stabilization unit.

The Q2 controller may be implemented with a transistor T3. The transistor T3 activates the node Q2 by applying a start signal VST of the gate low voltage VGL to the node Q2 in response to a clock signal CLK. A gate electrode of the transistor T3 is connected to an input terminal of the clock signal CLK. A first electrode and a second electrode of the transistor T3 are connected to an input terminal of the start signal VST and the node Q2, respectively.

The QB controller activates the node QB, as opposed to the node Q2, in response to the clock signal CLK, the start signal VST and a potential of the node Q2. The QB controller may be implemented with a capacitor C_ON, a transistor T5, a transistor T4, a transistor T6 and a capacitor CB. The capacitor C_ON is connected between the input terminal of the clock signal CLK and a node Q1. The transistor T5 supplies the clock signal CLK to the node QB according to a potential of the node Q1. A gate electrode of the transistor T5 is connected to the node Q1, and a first electrode and a second electrode of the transistor T5 are connected to the input terminal of the clock signal CLK and the node QB, respectively. The transistor T4 supplies the gate high voltage VGH to the node Q1 in response to the start signal VST. A gate electrode of the transistor T4 is connected to the input terminal of the start signal VST, and a first electrode and a second electrode of the transistor T4 are connected to the node Q1 and an input terminal of the gate high voltage VGH, respectively. With this configuration, the potential of the node Q1 changes in synchronization with the clock signal CLK while the start signal VST is held at the gate high voltage VGH. Also, the potential of the node Q1 has the gate high voltage VGH while the start signal VST is held at the gate low voltage VGL. The transistor T6 supplies the gate high voltage VGH to the node QB according to the potential of the node Q2. A gate electrode of the transistor T6 is connected to the node Q2, and a first electrode and a second electrode of the transistor T6 are connected to the node QB and the input terminal of the gate high voltage VGH, respectively. The capacitor CB is connected between the node QB and the gate high voltage VGH to stabilize a potential of the node QB.

The output unit includes a transistor T1 serving as a pull-down element, a transistor T2 serving as a pull-up element and a capacitor CQ. The transistor T1 supplies an output signal SN(n) of the gate low voltage VGL to an output node from when the node Q is bootstrapped in synchronization with a timing when the node Q2 is activated. A gate electrode of the transistor T1 is connected to the node Q, and a first electrode and a second electrode of the transistor T1 are connected to an input terminal of the gate low voltage VGL and the output node, respectively. The capacitor CQ is connected between the node Q and the output node.
When the output signal SN(n) changes from the gate high voltage VGH to the gate low voltage VGL, the capacitor CQ reflects a change in the potential of the output node to a potential of the node Q. Thus, the capacitor CQ functions to bootstrap the node Q. The transistor T2 supplies the output signal SN(n) of the gate high voltage VGH to the output node while the node QB is activated prior to the node Q2. A gate electrode of the transistor T2 is connected to the node QB, and a first electrode and a second electrode of the transistor T2 are connected to the output node and the input terminal of the gate high voltage VGH, respectively.

The first stabilization unit may be implemented with a transistor TA. A gate electrode of the transistor TA is connected to the input terminal of the gate low voltage VGL, and a first electrode and a second electrode of the transistor TA are connected to the node Q2 and the node Q, respectively. When the node Q is bootstrapped, a channel current between the first electrode and the second electrode of the transistor TA becomes zero. In other words, when the node Q is bootstrapped, the transistor TA is turned off and thus blocks an electrical connection between the node Q2 and the node Q. While the node Q is not bootstrapped, the transistor TA maintains a turn-on state. The transistor TA maintains the turn-on state and is turned off only when the node Q is bootstrapped. Thus, the transistor TA blocks a current flow between the node Q2 and the node Q. Therefore, when the node Q is bootstrapped, the potential of the node Q2 becomes different from the potential of the node Q. Even when the potential of the node Q changes at the moment when the node Q is bootstrapped, the potential of the node Q2 does not change. Therefore, the transistors T3 and T6 connected to the node Q2 are not overloaded at the moment when the node Q is bootstrapped. If there were no transistor TA, a drain-to-source voltage of the transistor T3 and a gate-to-source voltage Vgs of the transistor T6 could increase to a voltage level equal to or greater than a critical value due to the bootstrapping. If such an overload phenomenon continues, an element breakdown phenomenon, a so-called breakdown, may occur. The transistor TA may suppress breakdown of the transistors T3 and T6 connected to the node Q2 at the moment when the node Q is bootstrapped.

As for the transistor T2 shown in FIG. 3A, if the drain-to-source voltage VGH-VGL is high while the potential of the output node is held at the gate low voltage VGL, and this state lasts for a long time, the transistor T2 may be easily degraded. If a leakage current Ileak flows in the transistor T2 due to the degradation, a normal output signal SN(n) may not be output.

FIG. 3B is a graph showing changes in output from a transistor between normal temperature and high temperature in connection with FIG. 3A. Referring to FIG. 3B, the output current Iout of a transistor as a function of voltage and temperature can be seen. The X-axis of the graph represents a gate-to-source voltage Vgs; at about −2 V or less there is no difference in output current value between normal temperature and high temperature. However, a difference in output current value appears as the voltage increases from about −1 V toward positive values. Referring to FIG. 3B, when the gate-to-source voltage Vgs is 0 V, there is a difference between the output current values of the transistor at normal temperature and at high temperature. It can be seen that the current output at a higher temperature is higher.
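The temperature trend seen in FIG. 3B can be illustrated with a toy subthreshold-conduction model. This is a sketch under assumptions, not the patent's data: the form I = I0 * exp((Vgs - Vth) / (n * kT/q)) is a generic textbook expression for a weakly-off transistor, and every parameter value below (Vth, I0, n, the two temperatures) is hypothetical. It nevertheless reproduces the qualitative behavior described above: near Vgs = 0 V the off-current grows visibly with temperature, while at about -2 V both values are negligibly small.

# Toy subthreshold-leakage model (illustrative only; FIG. 3B is a measured
# curve, and none of these parameter values come from the disclosure).
import math

K_OVER_Q = 8.617e-5  # Boltzmann constant over electron charge [V/K]

def off_current(vgs: float, temp_k: float, vth: float = 0.7,
                i0: float = 1e-12, n: float = 1.5) -> float:
    """I = I0 * exp((Vgs - Vth) / (n * kT/q)), a standard subthreshold form."""
    thermal_voltage = K_OVER_Q * temp_k
    return i0 * math.exp((vgs - vth) / (n * thermal_voltage))

# Near Vgs = 0 V the device is only weakly off, so the leakage grows visibly
# with temperature; at Vgs = -2 V both values are vanishingly small, matching
# the lack of a visible difference described for FIG. 3B.
for vgs in (0.0, -2.0):
    print(vgs, off_current(vgs, 300.0), off_current(vgs, 360.0))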
When the display apparatus 100 is in a high temperature environment, a leakage current of the GIP may increase.

FIG. 3C illustrates a frame during high-speed driving and low-speed driving and an output from the GIP during low-speed driving. High-speed driving and low-speed driving may be applied together to the display apparatus 100, and the display apparatus 100 may achieve a reduction in power consumption during low-speed driving. Referring to FIG. 3C, during 120 Hz high-speed driving, a main frame is refreshed about every 8.3 ms so as to be normally operated. Here, an output voltage of the GIP may be about −9 V. During 1 Hz low-speed driving, the main frame needs to be refreshed every 1 second. Therefore, an output value of the main frame needs to be held in a subframe period after the approximately 8.3 ms in which the main frame is output. As the holding time increases, the output value of the GIP may increase. The GIP may output an increased voltage of about −7 V or more in the subframe period. Referring to FIG. 3A through FIG. 3C, this phenomenon may easily occur at a high temperature. In the configuration of the GIP shown in FIG. 3A, a leakage current flowing from the transistor T3 to the transistor T1 and the node Q causes an increase in the potential of the node Q2. When an output of the transistor T1 decreases, the leakage current is output through the transistor T2.

FIG. 4A illustrates characteristics of the display apparatus of FIG. 1 in connection with the exemplary aspects of the present disclosure. Referring to FIG. 4A, a driver IC 200 may be disposed on an upper side of the substrate 101, and a pad for the low-potential power VSS and a pad for the high-potential power VDD may be disposed on left and right sides of the driver IC. In a region for the driver IC 200, pads for signals controlling the GIP may be disposed. Specifically, the pads for the clock signal CLK, the start signal VST, the gate high voltage VGH and the gate low voltage VGL may be disposed. Lines extended from the pads for the clock signal CLK, the start signal VST, the gate high voltage VGH and the gate low voltage VGL may be connected to the GIP. The GIP may generate emission signals, scan signals, and the like required for the pixel circuits in the active area A/A.

Referring to FIG. 4A, a subframe control pad SFC for a subframe controller 300 may be further disposed in the region for the driver IC 200 in order to implement the exemplary aspects of the present disclosure. The subframe controller 300 may be disposed between the GIP and the pixel PXL in the active area A/A. A gate electrode of the subframe controller 300 may be connected to the subframe control pad SFC, and a source electrode may be connected to the gate low voltage VGL. A drain electrode of the subframe controller 300 may be connected to a line extended from the GIP to the pixel PXL in the active area A/A. In an example, the drain electrode of the subframe controller may be electrically connected to an output terminal of the GIP, and the output terminal of the GIP may be connected to the pixel PXL. A source electrode of the subframe controller 300 may be connected to the pad for the gate low voltage VGL. For the subframe controller 300, the driver IC 200 adds a signal so as to output a signal to the subframe control pad SFC at the moment when the main frame ends and the subframe starts. Thus, the subframe controller 300 may be turned on. When the subframe controller 300 is turned on, a voltage of the gate low voltage VGL may be applied to an output signal of the GIP.
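The subframe period over which the GIP output must be held follows from simple frame arithmetic, as described above with reference to FIG. 3C. The minimal sketch below is illustrative only; the function and variable names are hypothetical, and only the 120 Hz and 1 Hz rates and the roughly 8.3 ms main frame period come from the description.

# Illustrative arithmetic for the FIG. 3C frame timing.
MAIN_FRAME_S = 1.0 / 120.0  # one main frame refresh at 120 Hz, about 8.3 ms

def subframe_hold_s(scan_rate_hz: float) -> float:
    """How long the GIP output value must be held after the main frame
    when the panel is refreshed at the given scanning rate."""
    return 1.0 / scan_rate_hz - MAIN_FRAME_S

print(subframe_hold_s(120.0))  # 0.0     -> no subframe hold in high-speed driving
print(subframe_hold_s(1.0))    # ~0.9917 -> output held roughly 991.7 ms at 1 Hz

The long hold at 1 Hz is exactly the interval in which leakage lets the nominally −9 V output drift toward about −7 V, which is what the subframe controller described next is intended to counteract.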
A voltage of about −9 V is continuously applied to the gate low voltage VGL. When the gate low voltage VGL is applied to the output signal of the GIP through the turned-on subframe controller 300, an increase in the output of the GIP in the subframe period may be minimized.

FIG. 4B is a graph showing an output value of the GIP according to the exemplary aspect shown in FIG. 4A. Referring to FIG. 4B, during 1 Hz low-speed driving, the subframe controller 300 may be turned on at the moment when the subframe period, in which the output of the GIP needs to be held, starts after the 8.3 ms main frame period. That is, the subframe controller 300 may be in a turn-off state during a main frame period of the pixel and in a turn-on state during a subframe period of the pixel. When the subframe controller 300 is changed from a turn-off state to a turn-on state, −9 V of the gate low voltage VGL is applied to the output terminal of the GIP, and thus the increase in the output of the GIP shown in FIG. 3C may be minimized. Therefore, it is possible to suppress abnormal display caused by an abnormal output of the GIP.

The display apparatus according to the exemplary aspects of the present disclosure may include a liquid crystal display apparatus (LCD), a field emission display apparatus (FED), an organic light emitting diode (OLED) display apparatus and a quantum dot display apparatus. The display apparatus according to the exemplary aspects of the present disclosure may also include complete products or final products including an LCM, OLED or QD module, for example, a notebook computer, a television, a computer monitor, an automotive display apparatus or another vehicle display apparatus, and set electronic devices or set apparatuses such as mobile electronic devices including a smart phone or an electronic pad.

The exemplary aspects of the present disclosure can also be described as follows:

According to an aspect of the present disclosure, there is provided a display apparatus. The display apparatus includes an active area. The display apparatus further includes an inactive area surrounding the active area. The display apparatus further includes a pixel disposed in the active area. The display apparatus further includes a driver IC, a gate driver, a low-potential power line, a high-potential power line and a subframe controller disposed in the inactive area. The subframe controller is disposed between the pixel and the gate driver. The driver IC may include a subframe control pad connected to a gate electrode of the subframe controller. The driver IC may include a gate low voltage pad and a gate high voltage pad connected to the gate driver. The gate low voltage pad may be connected to a source electrode of the subframe controller. The gate low voltage may be −9 V. The gate driver may include an oxide transistor. A drain electrode of the subframe controller may be electrically connected to an output terminal of the gate driver, and the output terminal of the gate driver may be connected to the pixel. The gate driver may be driven in a 1 Hz low-speed driving mode. The subframe controller may be in a turn-off state during a main frame period of the pixel and in a turn-on state during a subframe period of the pixel. When the subframe controller is in a turn-on state, a drain electrode of the subframe controller may output an output voltage of −9 V.

According to another aspect of the present disclosure, there is provided a display apparatus. The display apparatus includes an active area.
The display apparatus further includes an inactive area surrounding the active area. The display apparatus further includes a pixel disposed in the active area. The display apparatus further includes a driver IC, a gate driver, a low-potential power line, a high-potential power line and a subframe controller disposed in the inactive area. The gate driver is driven in a low-speed driving mode. The gate driver may include an oxide transistor. The subframe controller may be disposed between the pixel and the gate driver. The driver IC may include a subframe control pad connected to a gate electrode of the subframe controller. The driver IC may include a gate low voltage pad and a gate high voltage pad connected to the gate driver, and the gate low voltage pad may be connected to a source electrode of the subframe controller. The gate low voltage may be −9 V. The subframe controller may be in a turn-off state during a main frame period of the pixel and in a turn-on state during a subframe period of the pixel. When the subframe controller is in a turn-on state, a drain electrode of the subframe controller may output an output voltage of −9 V.

Although the exemplary aspects of the present disclosure have been described in detail with reference to the accompanying drawings, the present disclosure is not limited thereto and may be embodied in many different forms without departing from the technical concept of the present disclosure. Therefore, the exemplary aspects of the present disclosure are provided for illustrative purposes only and are not intended to limit the technical concept of the present disclosure. The scope of the technical concept of the present disclosure is not limited thereto. Therefore, it should be understood that the above-described exemplary aspects are illustrative in all respects and do not limit the present disclosure. The protective scope of the present disclosure should be construed based on the following claims, and all the technical concepts in the equivalent scope thereof should be construed as falling within the scope of the present disclosure. | 36,343 |
11862108 | DETAILED DESCRIPTION

The technical solutions in the embodiments of the present disclosure will be described clearly and completely with reference to the drawings in the embodiments of the present disclosure, and it is obvious that the embodiments described are only some embodiments of the present disclosure, rather than all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments disclosed herein without making any creative effort, shall fall within the protection scope of the present disclosure.

The transistors used in all embodiments of the present disclosure may be triodes, thin film transistors, or field effect transistors, or other devices with the same characteristics. In the embodiments of the present disclosure, to distinguish the two poles of a transistor other than the control pole, one pole is referred to as a first pole, and the other pole is referred to as a second pole. In practical operation, when the transistor is a triode, the control electrode may be a base electrode, the first electrode may be a collector electrode, and the second electrode may be an emitter electrode; alternatively, the control electrode may be a base electrode, the first electrode may be an emitter electrode, and the second electrode may be a collector electrode. In practical operation, when the transistor is a thin film transistor or a field effect transistor, the control electrode may be a gate electrode, the first electrode may be a drain electrode, and the second electrode may be a source electrode; alternatively, the control electrode may be a gate electrode, the first electrode may be a source electrode, and the second electrode may be a drain electrode.

As shown in FIG. 1, the shift register unit according to the embodiment of the present disclosure includes an output end OUT, a node control end S1, a first output node control circuit 11, a second node control circuit 12, a second output node control circuit 13, and an output circuit 14, wherein:

the second node control circuit 12 is electrically connected to the first clock signal line CK, the node control end S1, the first output node N1 and the second node N2, respectively, and is configured to control, under the control of the first clock signal provided by the first clock signal line CK, the node control signal provided by the node control end S1 to be provided to the second node N2, and is further configured to control the potential of the second node N2 according to the potential of the first output node N1 and the first clock signal;

the first output node control circuit 11 is electrically connected to the second node N2 and a first output node N1, respectively, for controlling the potential of the first output node N1;

the second output node control circuit 13 is electrically connected to the second node N2 and a second output node N4, respectively, for controlling the potential of the second output node N4;

the output circuit 14 is electrically connected to the first output node N1, the second output node N4, the first voltage line V1, the second voltage line V2 and the output end OUT, respectively, and is configured to control a light emitting control signal output by the output end OUT according to a first voltage signal provided by the first voltage line V1 and a second voltage signal provided by the second voltage line V2 under the control of a potential of the first output node N1 and a potential of the second output node N4.
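As a reading aid only, and not part of the disclosure, the connectivity enumerated above for the FIG. 1 shift register unit can be captured as a simple mapping from each circuit block to the ends, lines, and nodes it is described as electrically connected to; the data layout and helper below are assumptions for illustration.

# Illustrative connectivity map for the FIG. 1 shift register unit.
FIG1_CONNECTIONS = {
    "second node control circuit 12": ["CK", "S1", "N1", "N2"],
    "first output node control circuit 11": ["N2", "N1"],
    "second output node control circuit 13": ["N2", "N4"],
    "output circuit 14": ["N1", "N4", "V1", "V2", "OUT"],
}

# For example, every block attached to the second node N2:
print([block for block, nets in FIG1_CONNECTIONS.items() if "N2" in nets])
# ['second node control circuit 12', 'first output node control circuit 11',
#  'second output node control circuit 13']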
In the shift register unit according to the embodiment of the present disclosure, the second node control circuit 12 is electrically connected to the node control end S1, so as to facilitate wiring and avoid the problem of troublesome wiring. In the embodiment of the present disclosure, the first voltage line may be a high voltage line, and the second voltage line may be a low voltage line, but they are not limited thereto.

As shown in FIG. 2, on the basis of the embodiment of the shift register unit shown in FIG. 1, the shift register unit according to the embodiment of the present disclosure further includes an input terminal INPUT;

the first output node control circuit 11 is further electrically connected to the input terminal INPUT, the first output node N1, the first clock signal line CK, a second clock signal line CB, and the first voltage line V1, and is configured to, under the control of the first clock signal, write an input signal provided by the input terminal INPUT into the first output node N1, and to control a potential of the first output node N1 according to a potential of the second node N2, a second clock signal, and the first voltage signal;

the second clock signal line CB is configured to provide the second clock signal, and the first voltage line V1 is configured to provide the first voltage signal.

Optionally, the node control end is the first clock signal line. As shown in FIG. 3, on the basis of the embodiment of the shift register unit shown in FIG. 2, the node control end is the first clock signal line CK; the second node control circuit 12 is electrically connected to the first clock signal line CK for easy wiring.

In the embodiment of the present disclosure, the shift register unit may include an output end that is a K-th-stage output end, and the node control end may be a (K+N)-th-stage output end; K and N are both positive integers; the (K+N)-th-stage light-emitting control signal end is configured to provide an effective voltage signal when the input end provides an ineffective voltage signal.

In the embodiment of the present disclosure, when the light emission control transistor, the gate of which is connected to the light emission control signal, in the pixel circuit is an n-type transistor, the invalid voltage signal may be a low voltage signal and the valid voltage signal may be a high voltage signal; when the light emission control transistor is a p-type transistor, the invalid voltage signal may be a high voltage signal and the valid voltage signal may be a low voltage signal, but this is not limiting.

As shown in FIG. 4, on the basis of the embodiment of the shift register unit shown in FIG. 2, OUT is the K-th-stage light-emitting control signal terminal, N is equal to 5, and the node control end is the (K+5)-th-stage light-emitting control signal terminal OUT(K+5); the second node control circuit 12 is electrically connected to the (K+5)-th-stage light-emitting control signal terminal OUT(K+5); OUT(K+5) provides a low voltage signal when INPUT provides a high voltage signal.
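One way to picture the K/(K+N) cascade described above, sketched here under assumptions: with N = 5 as in FIG. 4, the node control end of stage K is fed by the output end of stage K+5. The helper below is hypothetical; the disclosure does not specify a stage count or how the last N stages are wired, so the boundary policy shown is only an illustration.

# Hypothetical cascade-wiring helper for the K/(K+N) relationship.
def node_control_source(stage_k: int, n: int = 5, num_stages: int = 100):
    """Return the stage whose output end feeds stage K's node control end,
    or None for the last N stages (a boundary choice the description
    leaves open)."""
    source = stage_k + n
    return source if source <= num_stages else None

print(node_control_source(1))   # 6: stage 6 feeds stage 1's node control end
print(node_control_source(98))  # None in this sketch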
Optionally, the second node control circuit comprises a first transistor and a second transistor;

a control electrode of the first transistor and a first electrode of the first transistor are both electrically connected to the first clock signal line, and a second electrode of the first transistor is electrically connected to the second node;

a control electrode of the second transistor is electrically connected to the first output node, a first electrode of the second transistor is electrically connected to the first clock signal line, and a second electrode of the second transistor is electrically connected to the second node.

According to an embodiment of the present disclosure, the first output node control circuit may include a third transistor, a fourth transistor, a fifth transistor, and a first capacitor;

a control electrode of the third transistor is electrically connected to the first clock signal line, a first electrode of the third transistor is electrically connected to the input end, and a second electrode of the third transistor is electrically connected to the first output node;

a control electrode of the fourth transistor is electrically connected to the second clock signal line, and a second electrode of the fourth transistor is electrically connected to the first output node;

a control electrode of the fifth transistor is electrically connected to the second node, a first electrode of the fifth transistor is electrically connected to the first voltage line, and a second electrode of the fifth transistor is electrically connected to the first electrode of the fourth transistor;

the first electrode plate of the first capacitor is electrically connected to the first output node, and the second electrode plate of the first capacitor is electrically connected to the second clock signal line.

According to another specific embodiment, the first output node control circuit includes a third transistor, a fourth transistor, a fifth transistor, and a first capacitor;

a control electrode of the third transistor is electrically connected to the first clock signal line, a first electrode of the third transistor is electrically connected to the input end, and a second electrode of the third transistor is electrically connected to the first output node;

a control electrode of the fourth transistor is electrically connected to the first output node, and a second electrode of the fourth transistor is electrically connected to the second clock signal line;

a control electrode of the fifth transistor is electrically connected to the second node, a first electrode of the fifth transistor is electrically connected to the first voltage line, and a second electrode of the fifth transistor is electrically connected to the first electrode of the fourth transistor;

a first electrode plate of the first capacitor is electrically connected to the first output node, and a second electrode plate of the first capacitor is electrically connected to the first electrode of the fourth transistor.
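The transistor-level wiring listed above, for example that of the second node control circuit (the first and second transistors), can be restated as a small netlist; note that the first transistor, with its control electrode and first electrode both tied to the first clock signal line CK, is diode-connected. The representation below is an illustrative assumption, not the patent's notation.

# Illustrative netlist for the second node control circuit described above.
SECOND_NODE_CONTROL = {
    "T1": {"control": "CK", "first": "CK", "second": "N2"},  # diode-connected
    "T2": {"control": "N1", "first": "CK", "second": "N2"},
}

def terminals_on(node: str, netlist: dict):
    """List (transistor, terminal) pairs electrically connected to a node."""
    return [(name, term) for name, pins in netlist.items()
            for term, net in pins.items() if net == node]

print(terminals_on("N2", SECOND_NODE_CONTROL))
# [('T1', 'second'), ('T2', 'second')]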
Optionally, the shift register unit according to at least one embodiment of the present disclosure may further include a first isolation circuit;

the first isolation circuit is electrically connected to a first control voltage line and is configured to control the first isolation node to be in communication with the first output node under the control of a first control voltage provided by the first control voltage line;

the first output node control circuit is configured to, under the control of the first clock signal, write an input signal provided by the input end into the first isolation node, and, when the first isolation circuit controls communication between the first isolation node and the first output node, write the input signal into the first output node;

the second node control circuit is directly electrically connected to the first isolation node, and the second node control circuit is electrically connected to the first output node through the first isolation circuit; the second node control circuit is configured to control the first clock signal to be written into the second node under the control of the potential of the first isolation node.

In at least one embodiment of the present disclosure, when the first isolation transistor included in the first isolation circuit is a p-type transistor, the first control voltage line may be a low voltage line. In particular implementations, the first isolation circuit may include a first isolation transistor; a control electrode of the first isolation transistor is electrically connected to the first control voltage line, a first electrode of the first isolation transistor is electrically connected to the first isolation node, and a second electrode of the first isolation transistor is electrically connected to the first output node.

Optionally, the shift register unit according to at least one embodiment of the present disclosure may further include a second isolation circuit;

the second output node control circuit is electrically connected to the second node through the second isolation circuit;

the second isolation circuit is further electrically connected to a second control voltage line and is configured to control the second node to be in communication with the second output node control circuit under the control of a second control voltage provided by the second control voltage line.

In at least one embodiment of the present disclosure, when the second isolation transistor included in the second isolation circuit is a p-type transistor, the second control voltage line may be a low voltage line. In particular implementations, the second isolation circuit may include a second isolation transistor; a control electrode of the second isolation transistor is electrically connected to the second control voltage line, a first electrode of the second isolation transistor is electrically connected to the second node, and a second electrode of the second isolation transistor is electrically connected to the second output node control circuit.

In a specific implementation, the second output node control circuit may be electrically connected to the first output node, the second clock signal line, and the first voltage line, respectively, and configured to control a potential of the second output node according to the second clock signal, the potential of the second node, and the first voltage signal, and to maintain the potential of the second output node under the control of a potential of the second node, a potential of the first output node, and the second clock signal.
In the embodiment of the present disclosure, the second output node control circuit may control the potential of the second output node under the control of the potential of the second node, the potential of the first output node, and the second clock signal, and the second output node control circuit may be further configured to maintain the potential of the second output node.

In an embodiment of the present disclosure, the second output node control circuit may include a third node control sub-circuit and a second output node control sub-circuit, wherein:

the third node control sub-circuit is electrically connected to the second node, the second clock signal line and a third node, respectively, and is configured to write a second clock signal into the third node under the control of the potential of the second node and to adjust the potential of the third node according to the potential of the second node;

the second output node control sub-circuit is electrically connected to the third node, the second clock signal line, the second output node, the first output node and the first voltage line, is configured to control the communication between the third node and the second output node under the control of the second clock signal, is configured to write a first voltage signal into the second output node under the control of the potential of the first output node, and is configured to maintain the potential of the second output node.

In a specific implementation, the second output node control circuit may include a third node control sub-circuit and a second output node control sub-circuit; the third node control sub-circuit adjusts a potential of a third node, and the second output node control sub-circuit controls a potential of the second output node.

As shown in FIG. 5, on the basis of the embodiment of the shift register unit shown in FIG. 3, the second output node control circuit includes a third node control sub-circuit 51 and a second output node control sub-circuit 52, wherein:

the third node control sub-circuit 51 is electrically connected to the second node N2, the second clock signal line CB, and a third node N3, respectively, and is configured to write a second clock signal into the third node N3 under the control of the potential of the second node N2, and to adjust the potential of the third node N3 according to the potential of the second node N2;

the second output node control sub-circuit 52 is electrically connected to the third node N3, the second clock signal line CB, the second output node N4, the first output node N1, and the first voltage line V1, respectively, and is configured to control communication between the third node N3 and the second output node N4 under the control of the second clock signal, to write a first voltage signal into the second output node N4 under the control of a potential of the first output node N1, and to maintain a potential of the second output node N4.
As shown in FIG. 6, on the basis of the embodiment of the shift register unit shown in FIG. 4, the second output node control circuit includes a third node control sub-circuit 51 and a second output node control sub-circuit 52, wherein
the third node control sub-circuit 51 is electrically connected to the second node N2, the second clock signal line CB, and a third node N3, respectively, and is configured to write the second clock signal into the third node N3 under the control of the potential of the second node N2, and to adjust the potential of the third node N3 according to the potential of the second node N2;
the second output node control sub-circuit 52 is electrically connected to the third node N3, the second clock signal line CB, the second output node N4, the first output node N1, and the first voltage line V1, respectively, and is configured to control communication between the third node N3 and the second output node N4 under control of the second clock signal, to write the first voltage signal into the second output node N4 under control of the potential of the first output node N1, and to maintain the potential of the second output node N4.

Optionally, the third node control sub-circuit includes a sixth transistor and a second capacitor;
a control electrode of the sixth transistor is electrically connected to the second node, a first electrode of the sixth transistor is electrically connected to the second clock signal line, and a second electrode of the sixth transistor is electrically connected to the third node;
a first electrode plate of the second capacitor is electrically connected to the second node, and a second electrode plate of the second capacitor is electrically connected to the third node;
the second output node control sub-circuit includes a seventh transistor, an eighth transistor, and a third capacitor;
a control electrode of the seventh transistor is electrically connected to the second clock signal line, a first electrode of the seventh transistor is electrically connected to the third node, and a second electrode of the seventh transistor is electrically connected to the second output node;
a control electrode of the eighth transistor is electrically connected to the first output node, a first electrode of the eighth transistor is electrically connected to the first voltage line, and a second electrode of the eighth transistor is electrically connected to the second output node;
a first electrode plate of the third capacitor is electrically connected to the second output node, and a second electrode plate of the third capacitor is electrically connected to the first voltage line.

Optionally, the output circuit includes a ninth transistor and a tenth transistor, wherein
a control electrode of the ninth transistor is electrically connected to the second output node, a first electrode of the ninth transistor is electrically connected to the first voltage line, and a second electrode of the ninth transistor is electrically connected to the output end;
a control electrode of the tenth transistor is electrically connected to the first output node, a first electrode of the tenth transistor is electrically connected to the output end, and a second electrode of the tenth transistor is electrically connected to the second voltage line.
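The composition just described can be cross-checked by recording each element's terminals in a small connection table. The sketch below is illustrative only and not part of the disclosure; node names follow the description, with V1 the first voltage line, V2 the second voltage line, and OUT the output end:

    # Hypothetical connection table for the sub-circuits described above.
    # Node names follow the text; the dictionary layout itself is illustrative.
    netlist = {
        "T6":  {"control": "N2", "first": "CB",  "second": "N3"},   # third node control sub-circuit
        "C2":  {"plate1": "N2",  "plate2": "N3"},
        "T7":  {"control": "CB", "first": "N3",  "second": "N4"},   # second output node control sub-circuit
        "T8":  {"control": "N1", "first": "V1",  "second": "N4"},
        "C3":  {"plate1": "N4",  "plate2": "V1"},
        "T9":  {"control": "N4", "first": "V1",  "second": "OUT"},  # output circuit
        "T10": {"control": "N1", "first": "OUT", "second": "V2"},
    }
    for name, pins in netlist.items():
        print(name, pins)

Such a table makes the FIG. 7 embodiment below easy to verify, where the first voltage line V1 is the high voltage line VGH and the second voltage line V2 is the low voltage line VGL.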
As shown in FIG. 7, on the basis of the embodiment of the shift register unit shown in FIG. 5,
the second node control circuit 12 includes a first transistor T1 and a second transistor T2;
the gate electrode of the first transistor T1 and the source electrode of the first transistor T1 are electrically connected to the first clock signal line CK, and the drain electrode of the first transistor T1 is electrically connected to the second node N2;
a gate of the second transistor T2 is electrically connected to the first output node N1, a source of the second transistor T2 is electrically connected to the first clock signal line CK, and a drain of the second transistor T2 is electrically connected to the second node N2;
the first output node control circuit 11 includes a third transistor T3, a fourth transistor T4, a fifth transistor T5, and a first capacitor C1;
a gate of the third transistor T3 is electrically connected to the first clock signal line CK, a source of the third transistor T3 is electrically connected to the input terminal INPUT, and a drain of the third transistor T3 is electrically connected to the first output node N1;
a gate of the fourth transistor T4 is electrically connected to the second clock signal line CB, and a drain of the fourth transistor T4 is electrically connected to the first output node N1;
a gate of the fifth transistor T5 is electrically connected to the second node N2, a source of the fifth transistor T5 is electrically connected to a high voltage line VGH, and a drain of the fifth transistor T5 is electrically connected to a source of the fourth transistor T4;
a first electrode plate of the first capacitor C1 is electrically connected to the first output node N1, and a second electrode plate of the first capacitor C1 is electrically connected to the second clock signal line CB;
the third node control sub-circuit 51 includes a sixth transistor T6 and a second capacitor C2;
a gate of the sixth transistor T6 is electrically connected to the second node N2, a source of the sixth transistor T6 is electrically connected to the second clock signal line CB, and a drain of the sixth transistor T6 is electrically connected to the third node N3;
a first electrode plate of the second capacitor C2 is electrically connected to the second node N2, and a second electrode plate of the second capacitor C2 is electrically connected to the third node N3;
the second output node control sub-circuit 52 includes a seventh transistor T7, an eighth transistor T8, and a third capacitor C3;
a gate of the seventh transistor T7 is electrically connected to the second clock signal line CB, a source of the seventh transistor T7 is electrically connected to the third node N3, and a drain of the seventh transistor T7 is electrically connected to the second output node N4;
a gate of the eighth transistor T8 is electrically connected to the first output node N1, a source of the eighth transistor T8 is electrically connected to the high voltage line VGH, and a drain of the eighth transistor T8 is electrically connected to the second output node N4;
a first electrode plate of the third capacitor C3 is electrically connected to the second output node N4, and a second electrode plate of the third capacitor C3 is electrically connected to the high voltage line VGH;
the output circuit 14 includes a ninth transistor T9 and a tenth transistor T10, wherein
the gate of the ninth transistor T9 is electrically connected to the second output node N4, the source of the ninth transistor T9 is electrically connected to the high voltage line VGH, and the drain of
the ninth transistor T9 is electrically connected to the output end OUT;
a gate of the tenth transistor T10 is electrically connected to the first output node N1, a source of the tenth transistor T10 is electrically connected to the output end OUT, and a drain of the tenth transistor T10 is electrically connected to a low voltage line VGL.

In the embodiment of the shift register unit shown in FIG. 7, the first voltage line is a high voltage line, and the second voltage line is a low voltage line. In the embodiment shown in FIG. 7, all transistors are p-type thin film transistors, but are not limited thereto.

As shown in FIG. 8, in operation of the embodiment of the shift register unit of the present disclosure shown in FIG. 7:
in a first stage T1, INPUT provides a high voltage, CB provides a high voltage, CK provides a low voltage, T3 is turned on, T1 is turned on, the potential of N2 is a low voltage, the potential of N1 is a high voltage, T6 is turned on, the potential of N3 is a high voltage, T7 is turned off, the potential of N4 is maintained at a high voltage, T2 is turned off, T4 is turned off, T5 is turned on, T9 and T10 are both turned off, and the potential of the light emission control signal output by OUT is maintained at a low voltage;
in a second stage T2, INPUT provides a high voltage, CB provides a low voltage, CK provides a high voltage, T1 and T3 are turned off, N2 is at a low voltage, T4 and T5 are both turned on, N1 is at a high voltage, T6 is turned on, N3 is at a low voltage, T7 is turned on, N4 is at a low voltage, T8 is turned off, T9 is turned on, T10 is turned off, and OUT provides a high voltage;
in a third stage T3, INPUT provides a high voltage, CB provides a high voltage, CK provides a low voltage, T1 and T3 are turned on, the potential of N2 is a low voltage, the potential of N1 is a high voltage, T2 is turned off, T4 is turned off, T6 is turned on, the potential of N3 is a high voltage, T7 is turned off, the potential of N4 is maintained as a low voltage, T9 is turned on, T10 is turned off, and OUT outputs a high voltage;
in a fourth stage T4, INPUT provides a low voltage, CB provides a low voltage, CK provides a high voltage, T1 and T3 are turned off, the potential of N2 is a low voltage, T4 and T5 are turned on, the potential of N1 becomes a high voltage, T8 is turned off, T6 is turned on, the potential of N3 is a low voltage, T7 is turned on, the potential of N4 is a low voltage, T9 is turned on, T10 is turned off, and OUT outputs a high voltage;
in a fifth stage T5, INPUT provides a low voltage, CB provides a high voltage, CK provides a low voltage, T1 and T3 are both turned on, the potential of N2 is a low voltage, the potential of N1 is a low voltage, T2 is turned on, T4 is turned off, T6 is turned on, the potential of N3 is a high voltage, T7 is turned off, T8 is turned on, the potential of N4 is a high voltage, T9 is turned off, T10 is turned on, and OUT outputs a low voltage;
in a sixth stage T6, INPUT provides a low voltage, CB provides a low voltage, CK provides a high voltage, T1 and T3 are both turned off, N1 is at a low voltage, T2 is turned on, N2 is at a high voltage, T4 is turned on, T5 is turned off, T6 is turned off, N3 is at a high voltage, T7 is turned on, N4 is at a high voltage, T9 is turned off, T10 is turned on, and OUT outputs a low voltage.

In FIG. 8, reference numeral OUT(K+1) is the light-emitting control signal terminal of the (K+1)-th stage, that is, the light-emitting control signal terminal of the shift register unit of the (K+1)-th stage.

FIG. 9 is a simulated operation timing diagram of the embodiment of the shift register unit shown in FIG. 7 of the present disclosure.
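The stage-by-stage operation described above can be reproduced with a small logic-level script. The following is a simplified sketch, not the disclosure's circuit: it treats every transistor as an ideal p-type switch that conducts when its gate is low, uses binary levels (0 = low, 1 = high), ignores bootstrapping and capacitive coupling, lets undriven capacitor nodes hold their previous level, and assumes a settling order within each stage:

    # Simplified logic-level sketch of the FIG. 7 operation (illustrative only).
    def step(state, INPUT, CB, CK):
        N1, N2, N3, N4, OUT = state
        if CK == 0:                # T3 on: INPUT written into N1
            N1 = INPUT
        if CK == 0 or N1 == 0:     # T1 (gate and source on CK) or T2 writes CK into N2
            N2 = CK
        if N2 == 0:                # T6 on: CB written into N3
            N3 = CB
        if CB == 0 and N2 == 0:    # T4 and T5 both on: VGH (1) written into N1
            N1 = 1
        if N1 == 0:                # T8 on: VGH written into N4
            N4 = 1
        elif CB == 0:              # T7 on: N3 connected to N4
            N4 = N3
        if N4 == 0:                # T9 on: OUT driven to VGH
            OUT = 1
        elif N1 == 0:              # T10 on: OUT driven to VGL
            OUT = 0
        return (N1, N2, N3, N4, OUT)

    # (INPUT, CB, CK) for stages T1..T6 as described above
    stages = [(1, 1, 0), (1, 0, 1), (1, 1, 0), (0, 0, 1), (0, 1, 0), (0, 0, 1)]
    state = (1, 1, 1, 1, 0)        # assumed pre-stage levels: N4 high, OUT low
    for n, (inp, cb, ck) in enumerate(stages, 1):
        state = step(state, inp, cb, ck)
        print(f"stage T{n}: N1-N4 = {state[:4]}, OUT = {state[4]}")

Run as written, the printed levels match the description: OUT holds low in stage T1, goes high in stages T2 through T4, and returns low in stages T5 and T6.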
As shown in FIG. 10, on the basis of at least one embodiment of the shift register unit shown in FIG. 4, the shift register unit according to at least one embodiment of the present disclosure further includes a first isolation circuit 101 and a second isolation circuit 102;
the first isolation circuit 101 is electrically connected to a first control voltage line Vc1, and is configured to control communication between the first isolation node N01 and the first output node N1 under the control of a first control voltage supplied by the first control voltage line Vc1;
the first output node control circuit 11 is electrically connected to the first output node N1 through the first isolation circuit 101, the first output node control circuit 11 is directly electrically connected to the first isolation node N01, and the first output node control circuit 11 is configured to, under the control of the first clock signal, write the input signal provided by the input terminal INPUT into the first isolation node N01 and, when the first isolation circuit 101 controls communication between the first isolation node N01 and the first output node N1, write the input signal into the first output node N1;
the second output node control circuit 13 is electrically connected to the second node N2 through the second isolation circuit 102; a connection node of the second isolation circuit 102 and the second output node control circuit 13 is a second isolation node N02;
the second isolation circuit 102 is further electrically connected to a second control voltage line Vc2, and is configured to control the communication between the second node N2 and the second output node control circuit 13 under the control of a second control voltage supplied by the second control voltage line Vc2.

In at least one embodiment of the shift register unit as shown in FIG. 10, the second node control circuit 12 is directly electrically connected to the first isolation node N01, and the second node control circuit 12 is electrically connected to the first output node N1 through the first isolation circuit 101; the second node control circuit 12 is configured to control the writing of the first clock signal into the second node N2 under the control of the potential of the first isolation node N01.

In at least one embodiment of the shift register unit shown in FIG. 10, when the first isolation transistor included in the first isolation circuit 101 is a p-type transistor, Vc1 may be a low voltage line, and when the second isolation transistor included in the second isolation circuit 102 is a p-type transistor, Vc2 may be a low voltage line.

In at least one embodiment of the shift register unit shown in FIG. 10, the first isolation circuit 101 and the second isolation circuit 102 are added to prevent an excessively low potential at N01 from affecting the potential of N1, and to prevent an excessively low potential at N02 from affecting the potential of N2, thereby improving the stability of the circuit.
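The isolating effect can be pictured through the cutoff condition of a p-type pass transistor whose gate is held at the first control voltage. The numeric illustration below is an assumption for explanation only; the voltage levels and threshold are not taken from the disclosure. The device conducts while the passed node stays above Vc1 + |Vth| and cuts off once the node falls below that level, so a deep negative excursion on one side of the isolation transistor no longer drags the other side down:

    # Cutoff model for a p-type isolation transistor, gate tied to Vc1.
    # All numeric values are assumed for illustration.
    VC1 = -7.0      # first control voltage (low voltage line), assumed
    VTH_P = -1.5    # p-type threshold voltage, assumed

    def conducts(v_node: float) -> bool:
        # With the node as source: V_GS = VC1 - v_node <= VTH_P
        # <=> v_node >= VC1 - VTH_P = VC1 + |VTH_P|
        return v_node >= VC1 - VTH_P

    for v in (5.0, -4.0, -7.0, -10.0):
        print(f"node at {v:+.1f} V: {'connected' if conducts(v) else 'isolated'}")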
At least one embodiment of the shift register unit shown in FIG. 11 differs from at least one embodiment of the shift register unit shown in FIG. 7 in that a first isolation transistor T12 and a second isolation transistor T11 are added;
the gate of T12 and the gate of T11 are electrically connected to a low voltage line VGL, the source of T12 is electrically connected to a first isolation node N01, and the drain of T12 is electrically connected to the first output node N1;
the source of T11 is electrically connected to the second node N2, and the drain of T11 is electrically connected to a second isolation node N02; N02 is electrically connected to the gate of T6;
the gate of T8 is electrically connected to the first isolation node N01;
the drain of T4 is electrically connected to the second clock signal line CB, and the second electrode plate of C1 is electrically connected to the source of T4.

In at least one embodiment of the shift register unit shown in FIG. 11, all the transistors are p-type thin film transistors, but are not limited thereto.

In operation of at least one embodiment of the shift register unit of the present disclosure as shown in FIG. 11:
in the first stage, the second stage, the third stage, and the fourth stage, when the potential of N1 is a high voltage, T4 is turned off, the potential of N2 is a low voltage, and T5 is turned on, the second electrode plate of C1 is connected to the high voltage VGH and is not electrically connected to the second clock signal terminal CB, so that the influence of the jump of the potential of the second clock signal provided by CB on the potential of N1 is prevented, the turn-off of T10 is ensured, the influence that turning on T10 would have on the potential of the signal output by OUT is prevented, and the output of a high voltage by OUT is ensured;
in the fifth stage and the sixth stage, the potential of N1 is a low voltage, T4 is turned on, and the second electrode plate of C1 is connected to the second clock signal terminal CB, so that when the potential of the second clock signal jumps from a high voltage to a low voltage, the potential of N1 can be pulled down further, which is favorable for OUT to output a low voltage.

As shown in FIG. 12A, reference numeral J1 denotes a display substrate, reference numeral A0 denotes a display region, reference numeral B1 denotes a first edge region, and reference numeral B2 denotes a second edge region.

A plurality of light emission control lines, a plurality of gate lines, a plurality of data lines, and a plurality of subpixels defined by the intersections of the plurality of gate lines and the plurality of data lines may be disposed in the display region A0 of the display substrate J1;
a scanning drive circuit including a plurality of shift register units according to at least one embodiment of the present disclosure may be disposed in the first edge region B1 and/or the second edge region B2;
the scanning drive circuit comprises a plurality of shift register units which are in one-to-one correspondence with the plurality of light-emitting control lines, and each shift register unit is coupled with the corresponding light-emitting control line and configured to provide a light-emitting control signal for the corresponding light-emitting control line.

In a specific implementation, each of the light-emitting control lines is coupled to the light-emitting control terminals of the corresponding row of pixel circuits.
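The "pulled down further" effect in the fifth and sixth stages is ordinary capacitive bootstrapping through C1 once T4 ties its second electrode plate to CB. As a rough numeric illustration (the capacitances and the clock swing below are assumed, not values from the disclosure), N1 follows the falling CB edge by the capacitive divider ratio:

    # Rough bootstrap estimate for N1 on a falling CB edge (values assumed).
    C1 = 50e-15      # bootstrap capacitor at N1, assumed
    C_PAR = 10e-15   # parasitic capacitance at N1, assumed
    DV_CB = -8.0     # CB swing from high to low, assumed

    dv_n1 = DV_CB * C1 / (C1 + C_PAR)
    print(f"N1 shifts by about {dv_n1:.2f} V")  # ~ -6.67 V extra pull-down

The extra pull-down keeps T10 firmly on while OUT must stay at the low voltage.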
Optionally, the display substrate further includes a plurality of rows of pixel circuits disposed on the base; each pixel circuit comprises a light-emitting control end;
the shift register units included in the scanning drive circuit correspond to the rows of pixel circuits one to one, and the signal output line of each shift register unit is coupled with the light-emitting control end of the corresponding row of pixel circuits and is configured to provide a light-emitting control signal for the light-emitting control end of the corresponding row of pixel circuits.

In at least one embodiment of the present disclosure, the pixel circuits may be disposed in an effective display area of the display substrate, and the scanning drive circuit may be disposed in an edge area of the display substrate.

As shown in FIG. 12B, reference numeral Y1 is a scanning drive circuit, reference numeral S11 is the first-stage shift register unit included in the scanning drive circuit Y1, reference numeral S12 is the second-stage shift register unit included in the scanning drive circuit Y1, reference numeral S1M−1 is the (M−1)-th-stage shift register unit included in the scanning drive circuit Y1, reference numeral S1M is the M-th-stage shift register unit included in the scanning drive circuit Y1, and M is an integer greater than 3;
in FIG. 12B, reference numeral R1 is a first row pixel circuit, reference numeral R2 is a second row pixel circuit, reference numeral RM−1 is an (M−1)-th row pixel circuit, and reference numeral RM is an M-th row pixel circuit;
S11 corresponds to R1, S12 corresponds to R2, S1M−1 corresponds to RM−1, and S1M corresponds to RM;
S11 provides a first row light control signal for R1, S12 provides a second row light control signal for R2, S1M−1 provides an (M−1)-th row light control signal for RM−1, and S1M provides an M-th row light control signal for RM.

As shown in FIG. 12B, in the edge region, the display substrate may further include a gate driving circuit, where the gate driving circuit includes multiple stages of gate driving units, and the gate driving units are in one-to-one correspondence with the pixel rows and configured to provide corresponding gate driving signals for the pixels in the corresponding rows;
in FIG. 12B, reference numeral Y2 denotes the gate driving circuit, reference numeral S21 denotes a first row of gate driving units included in the gate driving circuit, reference numeral S22 denotes a second row of gate driving units included in the gate driving circuit, reference numeral S2M−1 denotes an (M−1)-th row of gate driving units included in the gate driving circuit, and reference numeral S2M denotes an M-th row of gate driving units included in the gate driving circuit.
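The correspondences in FIG. 12B reduce to a simple per-row wiring rule, sketched below for a hypothetical small panel (the loop and row count are illustrative only): stage k of the scanning drive circuit Y1 drives the light-emitting control end of pixel-circuit row k, while unit k of the gate driving circuit Y2 supplies that row's gate driving signal.

    # Illustrative per-row wiring of FIG. 12B for a hypothetical M-row panel.
    M = 4  # number of rows, assumed for the example
    for k in range(1, M + 1):
        print(f"S1{k} (scanning drive stage) -> light-emitting control end of row R{k}")
        print(f"S2{k} (gate driving unit)    -> gate driving signal of row R{k}")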
As shown in FIG. 12C, on the basis of the embodiment of the shift register unit shown in FIG. 7, the electrodes of the respective transistors and the terminals of the respective capacitors are numbered;
in FIG. 12C, the gate labeled G1 is the gate of T1, the source labeled S1 is the source of T1, and the drain labeled D1 is the drain of T1; a gate labeled G2 is the gate of T2, a source labeled S2 is the source of T2, and a drain labeled D2 is the drain of T2; a gate labeled G3 is the gate of T3, a source labeled S3 is the source of T3, and a drain labeled D3 is the drain of T3; a gate labeled G4 is the gate of T4, a source labeled S4 is the source of T4, and a drain labeled D4 is the drain of T4; a gate labeled G5 is the gate of T5, a source labeled S5 is the source of T5, and a drain labeled D5 is the drain of T5; a gate labeled G6 is the gate of T6, a source labeled S6 is the source of T6, and a drain labeled D6 is the drain of T6; a gate labeled G7 is the gate of T7, a source labeled S7 is the source of T7, and a drain labeled D7 is the drain of T7; a gate labeled G8 is the gate of T8, a source labeled S8 is the source of T8, and a drain labeled D8 is the drain of T8; a gate labeled G9 is the gate of T9, a source labeled S9 is the source of T9, and a drain labeled D9 is the drain of T9; a gate labeled G10 is the gate of T10, a source labeled S10 is the source of T10, and a drain labeled D10 is the drain of T10;
the first electrode plate of C1 is labeled C1a and the second electrode plate of C1 is labeled C1b, the first electrode plate of C2 is labeled C2a and the second electrode plate of C2 is labeled C2b, and the first electrode plate of C3 is labeled C3a and the second electrode plate of C3 is labeled C3b.

FIG. 18 shows a schematic layout diagram of a shift register unit according to an embodiment of the disclosure. FIG. 13 is a schematic view of an active layer in FIG. 18, FIG. 14 is a schematic view of a first gate metal layer in FIG. 18, and FIG. 15 is a schematic view of a second gate metal layer in FIG. 18; FIG. 16 is a schematic diagram of via holes in FIG. 18, and FIG. 17 is a schematic diagram of a source-drain metal layer in FIG. 18.

In specific implementation, the active layer, the first gate metal layer, the second gate metal layer, and the source-drain metal layer may be sequentially disposed on the substrate to form the display substrate.

In at least one embodiment of the present disclosure, the at least one shift register unit may include a plurality of transistors; the conductive portions at both sides of the channel portion of each transistor may correspond to the first electrode and the second electrode of the transistor, respectively, or may be coupled to the first electrode of the transistor and the second electrode of the transistor, respectively.

In at least one embodiment shown in FIG. 12C to FIG. 18, the first voltage lines are high voltage lines VGH, and the second voltage lines are low voltage lines VGL.

As shown in FIG. 17, the first clock signal line CK, the second clock signal line CB, the high voltage line VGH, and the low voltage line VGL are all formed on the source-drain metal layer, and the first clock signal line CK, the second clock signal line CB, the high voltage line VGH, and the low voltage line VGL all extend along a first direction (in at least one embodiment shown in FIG. 12C to FIG. 18, the first direction may be a vertical direction, but is not limited thereto).
As shown in FIG. 12C to FIG. 18, CK and CB are both located on a side of VGL away from the display area, CK and CB are disposed side by side and in close proximity, and CK is disposed on a side of CB away from VGL; at least one embodiment of the shift register unit is located between VGL and CB, and an orthographic projection of the shift register unit on the substrate at least partially overlaps an orthographic projection of VGH on the substrate. In at least one embodiment shown in FIG. 12C to FIG. 18, the positions of CK and CB may be interchanged.

In at least one embodiment shown in FIG. 12C to FIG. 18, the ninth transistor T9 and the tenth transistor T10 included in the output circuit may be positioned between the high voltage line VGH and the low voltage line VGL.

In at least one embodiment shown in FIG. 12C to FIG. 18, since T9 is electrically connected to the high voltage line VGH and T10 is electrically connected to the low voltage line VGL, T9 and T10 are disposed between VGH and VGL, and the output end OUT is arranged in the space between the tenth transistors of shift register units adjacent in the longitudinal direction. With T9 and T10 disposed between VGH and VGL, no other signal lines or components of other transistors are disposed between the high voltage line VGH and the output circuit (which includes T9 and T10), and no other signal lines or components of other transistors are disposed between the low voltage line VGL and the output circuit; the distance from VGH to T9 and T10 is narrowed, and the distance from VGL to T9 and T10 is narrowed, so that the lateral width of the shift register unit is reduced.

As shown in FIG. 12C to FIG. 18, the source S1 of T1 and the gate G1 of T1 are both electrically connected to the first clock signal line CK. As shown in FIGS. 13 to 18, the source S1 of the first transistor T1 is electrically connected to the first conductive connection portion L1 through the first via hole H1, and the gate G1 of T1 is electrically connected to the conductive connection portion L0;
the conductive connection portion L0 is electrically connected to the first clock signal line CK through the third via hole H3 and the fourth via hole H4;
L0 is electrically connected to L1 through the second via hole H2, so that S1 is electrically connected to the first clock signal line CK.

In at least one embodiment shown in FIG. 12C to FIG. 18, the conductive connection portion L0 and the gate G1 are formed on the first gate metal layer, the first conductive connection portion L1, the first clock signal line CK, and the second clock signal line CB are formed on the source-drain metal layer, and S1 is formed on the active layer. By adopting the layout of the shift register unit shown in FIGS. 13 to 18, S1 is electrically connected to the first clock signal line CK, so that the use of a low voltage line can be reduced, the wiring is facilitated, and space is saved.

In FIG. 13, reference numeral A1 denotes a first active pattern, reference numeral S1 denotes the source of T1, and reference numeral D1 denotes the drain of T1; a source labeled S2 is the source of T2, and a drain labeled D2 is the drain of T2; a source labeled S3 is the source of T3, and a drain labeled D3 is the drain of T3; a source labeled S4 is the source of T4; a source labeled S5 is the source of T5; a source labeled S6 is the source of T6, and a drain labeled D6 is the drain of T6; a source labeled S7 is the source of T7, and a drain labeled D7 is the drain of T7; the label S8 is the source of T8.
In the embodiments corresponding to FIG. 13 to FIG. 18, D7 is multiplexed as the drain of T8, D3 is multiplexed as the drain of T4, S4 is multiplexed as the drain of T5, and T2 is a double-gate transistor, but these embodiments are not limited thereto.

In FIG. 14, the gate denoted by G1 is the gate of T1, the gate denoted by G21 is a first gate pattern of the gate of T2, and the gate denoted by G22 is a second gate pattern of the gate of T2; a gate designated G3 is the gate of T3, a gate designated G4 is the gate of T4, a gate designated G5 is the gate of T5, a gate designated G6 is the gate of T6, a gate designated G7 is the gate of T7, a gate designated G8 is the gate of T8, a gate designated G9 is the gate of T9, and a gate designated G10 is the gate of T10; the first electrode plates of C1, C2, and C3 are respectively labeled C1a, C2a, and C3a; reference numeral L0 denotes a conductive connection portion.

In FIG. 15, reference numeral INPUT is the input terminal, reference numeral OUT is the output end, reference numeral C1b is the second electrode plate of C1, reference numeral C2b is the second electrode plate of C2, and reference numeral C3b is the second electrode plate of C3.

In FIG. 16, reference numeral H1 is the first via hole, reference numeral H2 is the second via hole, reference numeral H3 is the third via hole, and reference numeral H4 is the fourth via hole.

In FIG. 17, reference numeral STV is a start signal line, reference numeral CK is the first clock signal line, reference numeral CB is the second clock signal line, reference numeral L1 is the first conductive connection portion, reference numeral L2 is a second conductive connection portion, reference numeral VGH is the high voltage line, reference numeral VGL is the low voltage line, reference numeral D91 is a first electrode pattern included in the drain of T9, reference numeral D92 is a second electrode pattern included in the drain of T9, reference numeral D10 is the drain of T10, reference numeral S9 is the source of T9, and reference numeral S10 is the source of T10.

In FIG. 18, reference numeral STV is a start signal line, reference numeral CK is the first clock signal line, reference numeral CB is the second clock signal line, reference numeral L1 is the first conductive connection portion, reference numeral VGH is the high voltage line, reference numeral VGL is the low voltage line, reference numeral G1 is the gate of T1, reference numeral S1 is the source of T1, and reference numeral D1 is the drain of T1.

Moreover, in at least one embodiment of the present disclosure, the first electrode plate C1a of C1 may be provided in an L shape, and, where the longitudinal space is sufficient, the electrode plate of C1 may be expanded longitudinally so as to reduce the lateral space, which is beneficial to reducing the frame.

The embodiment of the shift register unit shown in FIG. 19 differs from the embodiment of the shift register unit shown in FIG. 7 in the following way:
the source electrode of T1 is electrically connected to the (K+5)-th stage light-emitting control signal terminal OUT(K+5); the (K+5)-th stage light-emitting control signal end is the light-emitting control signal end of the (K+5)-th stage shift register unit;
the light-emitting control signal terminal OUT is the K-th stage light-emitting control signal terminal, and K is a positive integer.
As shown in FIG. 20, in operation of the embodiment of the shift register unit of the present disclosure shown in FIG. 19:
in a first stage T1, INPUT provides a high voltage, CB provides a high voltage, CK provides a low voltage, OUT(K+5) outputs a low voltage, T3 is turned on, T1 is turned on, the potential of N2 is a low voltage, the potential of N1 is a high voltage, T6 is turned on, the potential of N3 is a high voltage, T7 is turned off, the potential of N4 is maintained at a high voltage, T2 is turned off, T4 is turned off, T5 is turned on, T9 and T10 are both turned off, and the potential of the light emission control signal output by OUT is maintained at a low voltage;
in a second stage T2, INPUT provides a high voltage, CB provides a low voltage, CK provides a high voltage, OUT(K+5) outputs a low voltage, T1 and T3 are turned off, the potential of N2 is a low voltage, T4 and T5 are both turned on, the potential of N1 is a high voltage, T6 is turned on, the potential of N3 is a low voltage, T7 is turned on, the potential of N4 is a low voltage, T8 is turned off, T9 is turned on, T10 is turned off, and OUT provides a high voltage;
in a third stage T3, INPUT provides a high voltage, CB provides a high voltage, CK provides a low voltage, OUT(K+5) outputs a low voltage, T1 and T3 are turned on, the potential of N2 is a low voltage, the potential of N1 is a high voltage, T2 is turned off, T4 is turned off, T6 is turned on, T7 is turned off, the potential of N4 is maintained as a low voltage, T9 is turned on, T10 is turned off, and OUT outputs a high voltage;
in a fourth stage T4, INPUT provides a low voltage, CB provides a low voltage, CK provides a high voltage, OUT(K+5) outputs a low voltage, T1 and T3 are turned off, the potential of N2 is a low voltage, T4 and T5 are turned on, the potential of N1 becomes a high voltage, T8 is turned off, T6 is turned on, the potential of N3 is a low voltage, T7 is turned on, the potential of N4 is a low voltage, T9 is turned on, T10 is turned off, and OUT outputs a high voltage;
in a fifth stage T5, INPUT provides a low voltage, CB provides a high voltage, CK provides a low voltage, OUT(K+5) outputs a low voltage, T1 and T3 are both turned on, the potential of N2 is a low voltage, the potential of N1 is a low voltage, T2 is turned on, T4 is turned off, T6 is turned on, the potential of N3 is a high voltage, T7 is turned off, T8 is turned on, the potential of N4 is a high voltage, T9 is turned off, T10 is turned on, and OUT outputs a low voltage;
in a sixth stage T6, INPUT provides a low voltage, CB provides a low voltage, CK provides a high voltage, OUT(K+5) outputs a low voltage, T1 and T3 are both turned off, the potential of N1 is a low voltage, T2 is turned on, the potential of N2 is a high voltage, T4 is turned on, T5 is turned off, T6 is turned off, the potential of N3 is a high voltage, T7 is turned on, the potential of N4 is a high voltage, T9 is turned off, T10 is turned on, and OUT outputs a low voltage.

FIG. 21 is a simulated operation timing diagram of the embodiment of the shift register unit shown in FIG. 19 of the present disclosure.
The display substrate comprises a scanning drive circuit and a display area, wherein the scanning drive circuit and the display area are arranged on a substrate; the scanning drive circuit comprises a plurality of shift register units, and further comprises a first voltage line, a second voltage line, and a clock signal line, the clock signal line comprising a first clock signal line and a second clock signal line; the first voltage line, the second voltage line, the first clock signal line, and the second clock signal line extend along a first direction, and the display region includes at least one driving transistor configured to drive a light emitting element to display;
the first clock signal line and the second clock signal line are positioned on a side of the second voltage line far away from the display area, the shift register unit is positioned between the second voltage line and the clock signal line, and an orthographic projection of the shift register unit on the substrate at least partially overlaps an orthographic projection of the first voltage line on the substrate.

In the display substrate according to the embodiment of the disclosure, the shift register unit is disposed between the clock signal line and the second voltage line, and an orthographic projection of the shift register unit on the substrate at least partially overlaps an orthographic projection of the first voltage line on the substrate, so that the shift register unit is electrically connected to the clock signal line, the second voltage line, and the first voltage line.

Optionally, the second node control circuit included in the shift register unit is located between the clock signal line and the first voltage line.

Alternatively, the first clock signal line and the second clock signal line may be arranged side by side and next to each other.

In particular implementations, the second node control circuit includes a first transistor; the gate of the first transistor is electrically connected to the conductive connection portion, and the gate of the first transistor and the conductive connection portion are both formed on the first gate metal layer; the conductive connection portion is connected to the first clock signal line through a corresponding via hole, so that the gate of the first transistor is electrically connected to the first clock signal line;
the source electrode of the first transistor is electrically connected to the first conductive connection portion through a corresponding via hole; the conductive connection portion is electrically connected to the first conductive connection portion through a corresponding via hole, so that the source electrode of the first transistor is electrically connected to the first clock signal line;
the first conductive connection portion and the first clock signal line are formed on the source-drain metal layer, and the source electrode of the first transistor is formed on the active layer.

In at least one embodiment of the present disclosure, the gate of the first transistor and the source of the first transistor are electrically connected to the first clock signal line, so that the number of voltage lines used can be reduced, and the first transistor can be disposed closer to the first clock signal line to facilitate the electrical connection of the first transistor and the first clock signal line.

The scanning drive circuit comprises a plurality of stages of the shift register units.
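Because the gate and the source of the first transistor are both tied to the first clock signal line, T1 acts as a diode-connected device: a p-type T1 conducts only while CK sits below the second node by more than |Vth|, discharging the second node toward the CK low level and then pinching off, and it blocks while CK is high so the node holds. A minimal numeric sketch of that behavior (threshold and voltage levels assumed, not from the disclosure):

    # Diode-connected p-type T1: gate = source terminal = CK, other terminal = N2.
    # Threshold and voltage levels are assumed for illustration.
    VTH_P = -1.5

    def n2_after_t1(v_ck: float, v_n2: float) -> float:
        if v_ck <= v_n2 + VTH_P:      # conduction condition, N2 side acting as source
            return v_ck - VTH_P       # discharges until pinch-off, about |Vth| above CK
        return v_n2                   # otherwise T1 blocks and N2 holds

    print(n2_after_t1(-7.0, 5.0))   # CK low:  N2 pulled down to about -5.5 V
    print(n2_after_t1(7.0, -5.5))   # CK high: no conduction, N2 holds -5.5 V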
In a specific implementation, the shift register unit may include an input end;
except for the first stage of shift register unit, the input end of each stage of shift register unit is electrically connected to the output end of the adjacent previous stage of shift register unit.

As shown in FIG. 22, the scanning drive circuit according to the embodiment of the disclosure includes a plurality of stages of the shift register units;
in FIG. 22, reference numeral E1 denotes the shift register unit of the first stage, reference numeral E2 denotes the shift register unit of the second stage, reference numeral E3 denotes the shift register unit of the third stage, reference numeral EK denotes the shift register unit of the K-th stage, and reference numeral EK+1 denotes the shift register unit of the (K+1)-th stage; K is a positive integer;
the input end of E1 is electrically connected to the start signal line STV;
the input end of E2 is electrically connected to the output end of E1; the input end of E3 is electrically connected to the output end of E2, and the input end of EK+1 is electrically connected to the output end of EK.

Optionally, the K-th stage shift register unit may include a K-th stage node control end and a K-th stage input end;
the K-th stage node control end is electrically connected to the (K+N)-th stage output end;
K and N are both positive integers;
the (K+N)-th stage light-emitting control signal end is configured to provide an effective voltage signal when the K-th stage input end provides an ineffective voltage signal.

The display device according to the embodiment of the present disclosure includes the scanning drive circuit. The display device comprises the display substrate. The display device provided by the embodiment of the disclosure can be any product or component with a display function, such as a mobile phone, a tablet computer, a television, a display, a notebook computer, a digital photo frame, a navigator, and the like.

While the foregoing is directed to embodiments of the present disclosure, it will be appreciated by those skilled in the art that various changes and modifications may be made without departing from the principles of the disclosure, and it is intended that such changes and modifications be considered as within the scope of the disclosure. | 54,067 |
11862109 | DETAILED DESCRIPTION Hereinafter, aspects of the present disclosure are described in detail with reference to the accompanying drawings. The same or substantially the same reference denotations are used to refer to the same or substantially the same elements throughout the specification and the drawings. Where a detailed description of known art or functions would obscure the subject matter of the present disclosure, it may be skipped.

The terms "comprises" and/or "comprising," "has" and/or "having," or "includes" and/or "including," when used in this specification, specify the presence of stated features, regions, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, regions, integers, steps, operations, elements, components, and/or groups thereof. As used herein, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.

Such denotations as "first," "second," "A," "B," "(a)," and "(b)" may be used in describing the components of the disclosure. These denotations are provided merely to distinguish a component from another, and the essence of the components is not limited by the denotations in light of order or sequence. In describing the positional relationship between components, when two or more components are described as "connected", "coupled", or "linked", the two or more components may be directly "connected", "coupled", or "linked", or another component may intervene. Here, the other component may be included in one or more of the two or more components that are "connected", "coupled", or "linked" to each other.

In relation to components, operational methods, or manufacturing methods, when A is referred to as being "after," "subsequent to," "next," or "before" B, A and B may be discontinuous from each other unless mentioned with the term "immediately" or "directly." When a component is designated with a value or its corresponding information (e.g., level), the value or the corresponding information may be interpreted as including a tolerance that may arise due to various factors (e.g., process factors, internal or external impacts, or noise).

Hereinafter, various aspects of the present disclosure are described in detail with reference to the accompanying drawings.

FIG. 1 is a view illustrating a display device according to the present disclosure.

Referring to FIG. 1, a display device 100 according to the present disclosure may include a display panel 110, a data driving circuit 120 and a gate driving circuit 130 for driving the display panel 110, and a controller 140 for controlling the data driving circuit 120 and the gate driving circuit 130.

In the display panel 110, signal lines, such as a plurality of data lines DL and a plurality of gate lines GL, may be disposed on a substrate. In the display panel 110, a plurality of subpixels SP connected with the plurality of data lines DL and the gate lines GL may be disposed.

The display panel 110 may include a display area AA in which images are displayed and a non-display area NA in which no image is displayed. In the display panel 110, a plurality of subpixels SP for displaying an image may be disposed in the display area AA and, in the non-display area NA, the data driving circuit 120 and the gate driving circuit 130 may be mounted, or pad units connected with the data driving circuit 120 or the gate driving circuit 130 may be disposed.
The data driving circuit 120 is a circuit configured to drive the plurality of data lines DL, and may supply data signals to the plurality of data lines DL. The gate driving circuit 130 is a circuit configured to drive the plurality of gate lines GL, and may supply gate signals Vgate to the plurality of gate lines GL.

The controller 140 may supply a data driving timing control signal DCS to the data driving circuit 120 to control the operation timing of the data driving circuit 120. The controller 140 may supply a gate driving timing control signal GCS for controlling the operation timing of the gate driving circuit 130 to the gate driving circuit 130.

The controller 140 may start scanning according to a timing implemented in each frame, convert input image data input from the outside into image data Data suited for the data signal format used in the data driving circuit 120, supply the image data Data to the data driving circuit 120, and control data driving at an appropriate time suited for scanning.

The controller 140 receives, from the outside (e.g., a host system), various timing signals including a vertical synchronization signal Vsync, a horizontal synchronization signal Hsync, an input data enable signal DE, and a clock signal, along with the input image data.

To control the data driving circuit 120 and the gate driving circuit 130, the controller 140 receives timing signals, such as the vertical synchronization signal Vsync, the horizontal synchronization signal Hsync, the input data enable signal DE, and the clock signal CLK, generates various control signals DCS and GCS, and outputs them to the data driving circuit 120 and the gate driving circuit 130.

To control the gate driving circuit 130, the controller 140 outputs various gate driving timing control signals GCS including a gate start pulse GSP, a gate shift clock GSC, and a gate output enable signal GOE.

To control the data driving circuit 120, the controller 140 outputs various data driving timing control signals DCS including, e.g., a source start pulse SSP and a source sampling clock.

The data driving circuit 120 receives the image data Data from the controller 140 and drives the plurality of data lines DL. The data driving circuit 120 may include one or more source driving integrated circuits (SDICs).

Each source driving integrated circuit (SDIC) may be connected with the display panel 110 by a tape automated bonding (TAB) method, connected to a bonding pad of the display panel 110 by a chip on glass (COG) method, or implemented by a chip on film (COF) method and connected with the display panel 110.

The gate driving circuit 130 may output a gate signal of a turn-on level voltage or a gate signal of a turn-off level voltage according to the control of the controller 140. The gate driving circuit 130 may drive the plurality of gate lines GL by supplying gate signals of the turn-on level voltage to the plurality of gate lines GL.

The gate driving circuit 130 may be connected with the display panel 110 by a tape automated bonding (TAB) method, connected to a bonding pad of the display panel 110 by a COG or chip on panel (COP) method, or connected with the display panel 110 according to a COF method. The gate driving circuit 130 may be formed in a gate in panel (GIP) type in the non-display area NA of the display panel 110. The gate driving circuit 130 may be disposed on the substrate of the display panel 110 or may be connected to the substrate of the display panel 110.
The gate driving circuit 130 that is of a GIP type may be disposed in the non-display area NA of the substrate. The gate driving circuit 130 that is of a chip-on-glass (COG) type or chip-on-film (COF) type may be connected to the substrate of the display panel 110.

When a specific gate line GL is opened by the gate driving circuit 130, the data driving circuit 120 may convert the image data Data received from the controller 140 into an analog data signal and supply it to the plurality of data lines DL.

The data driving circuit 120 may be connected with one side (e.g., an upper or lower side) of the display panel 110. Depending on the driving scheme or the panel design scheme, the data driving circuit 120 may be connected with both sides (e.g., upper and lower sides) of the display panel 110, or with two or more of the four sides of the display panel 110.

The gate driving circuit 130 may be connected with one side (e.g., a left or right side) of the display panel 110. Depending on the driving scheme or the panel design scheme, the gate driving circuit 130 may be connected with both sides (e.g., left and right sides) of the display panel 110, or with two or more of the four sides of the display panel 110.

The controller 140 may be a timing controller used in typical display technology, a control device that may perform other control functions as well as the functions of the timing controller, a control device other than the timing controller, or a circuit in such a control device. The controller 140 may be implemented as various circuits or electronic components, such as an integrated circuit (IC), a field programmable gate array (FPGA), an application specific integrated circuit (ASIC), or a processor.

The controller 140 may be mounted on a printed circuit board or a flexible printed circuit and may be electrically connected with the data driving circuit 120 and the gate driving circuit 130 through the printed circuit board or the flexible printed circuit.

The controller 140 may transmit/receive signals to/from the data driving circuit 120 according to one or more predetermined interfaces. The interface may include, e.g., a low voltage differential signaling (LVDS) interface, an EPI interface, and a serial peripheral interface (SPI).

The controller 140 may include a storage medium, such as one or more registers.

The display device 100 according to the present disclosure may be a display including a backlight unit, such as a liquid crystal display, or may be a self-emission display, such as an organic light emitting display, a quantum dot display, or a micro light emitting diode (LED) display.

According to an aspect, when the display device 100 is an organic light emitting display, each subpixel SP may include an organic light emitting diode (OLED), which is self-emissive, as a light emitting element. According to an aspect, when the display device 100 is a quantum dot display, each subpixel SP may include a light emitting element formed of a quantum dot, which is a self-emissive semiconductor crystal. According to an aspect, when the display device 100 is a micro LED display, each subpixel SP may include a micro light emitting diode, which is self-emissive and formed of an inorganic material, as a light emitting element.

FIG. 2 is a view schematically illustrating an equivalent circuit of a subpixel SP and a configuration for compensating for characteristic values of the subpixel SP according to the present disclosure.
Referring to FIG. 2, each of a plurality of subpixels SP may include a light emitting element ED, a driving transistor DRT, a scan transistor SCT, and a storage capacitor Cst.

The light emitting element ED may include a pixel electrode PE and a common electrode CE and may include a light emitting layer EL positioned between the pixel electrode PE and the common electrode CE.

The pixel electrode PE of the light emitting element ED may be an electrode disposed in each subpixel SP, and the common electrode CE may be an electrode commonly disposed in all the subpixels SP. Here, the pixel electrode PE may be an anode electrode, and the common electrode CE may be a cathode electrode. Conversely, the pixel electrode PE may be a cathode electrode, and the common electrode CE may be an anode electrode. The common electrode CE of the light emitting element ED may receive a base voltage EVSS.

For example, the light emitting element ED may be an organic light emitting diode (OLED), a light emitting diode (LED), or a quantum dot light emitting element.

The driving transistor DRT is a transistor for driving the light emitting element ED, and may include a first node N1, a second node N2, and a third node N3.

The first node N1 of the driving transistor DRT may be a gate node of the driving transistor DRT, and may be electrically connected with a source node or a drain node of the scan transistor SCT. The second node N2 of the driving transistor DRT may be a source node or a drain node of the driving transistor DRT, may be electrically connected with a source node or a drain node of the sensing transistor SENT, and may also be electrically connected with the pixel electrode PE of the light emitting element ED. The third node N3 of the driving transistor DRT may be electrically connected with a driving voltage line DVL supplying a driving voltage EVDD.

The scan transistor SCT may be controlled by a scan pulse SCAN, which is a type of gate signal, and may be electrically connected to the first node N1 of the driving transistor DRT and the data line DL. In other words, the scan transistor SCT may be turned on or off according to the scan pulse SCAN supplied from the scan line SCL, which is a type of the gate line GL, controlling the connection between the data line DL and the first node N1 of the driving transistor DRT.

The scan transistor SCT may be turned on by the scan pulse SCAN having a turn-on level voltage and transfer the data signal Vdata supplied from the data line DL to the first node N1 of the driving transistor DRT. If the scan transistor SCT is an n-type transistor, the turn-on level voltage of the scan pulse SCAN may be a high level voltage. If the scan transistor SCT is a p-type transistor, the turn-on level voltage of the scan pulse SCAN may be a low level voltage.

The storage capacitor Cst may be electrically connected to the first node N1 and the second node N2 of the driving transistor DRT. The storage capacitor Cst is charged with the quantity of electric charge corresponding to the voltage difference between both ends thereof and serves to maintain the voltage difference between both ends for a predetermined frame time. Accordingly, during the predetermined frame time, the corresponding subpixel SP may emit light.

Referring to FIG. 2, each of the plurality of subpixels SP disposed on the display panel 110 of the display device 100 may further include a sensing transistor SENT.
The sensing transistor SENT may be controlled by a sense pulse SENSE, which is a type of gate signal, and may be electrically connected to the second node N2 of the driving transistor DRT and a reference voltage line RVL. In other words, the sensing transistor SENT may be turned on or off according to the sense pulse SENSE supplied from the sense line SENL, which is another type of the gate line GL, controlling the connection between the reference voltage line RVL and the second node N2 of the driving transistor DRT. The second node N2 of the driving transistor DRT is also referred to as a sensing node.

The sensing transistor SENT may be turned on by the sense pulse SENSE having a turn-on level voltage and transfer a reference voltage supplied from the reference voltage line RVL to the second node N2 of the driving transistor DRT. The reference voltage line RVL is also referred to as a sensing line.

The reference voltage may include a sensing reference voltage VpreS and/or a driving reference voltage VpreR. The driving reference voltage VpreR and the sensing reference voltage VpreS may be common voltages input to the plurality of subpixels SP electrically connected to the reference voltage line RVL.

The driving reference voltage VpreR may be a voltage input to the second node N2 of the driving transistor DRT during an active period when the data signal Vdata for image display is input to the plurality of data lines DL. In the active period, the voltage of the second node N2 of the driving transistor DRT may be initialized to the driving reference voltage VpreR. According to the voltage difference Vgs between the first node N1 and the second node N2 of the driving transistor DRT and the threshold voltage Vth of the driving transistor DRT, the light emitting element ED emits light in different brightness levels.

The sensing reference voltage VpreS may be a voltage input to the second node N2 of the driving transistor DRT during a blank period between two different active periods. In the blank period, the voltage of the second node N2 of the driving transistor DRT may be initialized to the sensing reference voltage VpreS.

The display device according to the present disclosure may further include an initialization switch configured to input the driving reference voltage VpreR or the sensing reference voltage VpreS to the reference voltage line RVL according to timing. The initialization switch may include a first initialization switch RPRE and a second initialization switch SPRE. The first initialization switch RPRE may switch the electrical connection between the driving reference voltage input node NpreR and the reference voltage line RVL. The second initialization switch SPRE may switch the electrical connection between the sensing reference voltage input node NpreS and the reference voltage line RVL.

The sensing transistor SENT may be turned on by the sense pulse SENSE having a turn-on level voltage, transferring the voltage of the second node N2 of the driving transistor DRT to the reference voltage line RVL. If the sensing transistor SENT is an n-type transistor, the turn-on level voltage of the sense pulse SENSE may be a high level voltage. If the sensing transistor SENT is a p-type transistor, the turn-on level voltage of the sense pulse SENSE may be a low level voltage.

The function in which the sensing transistor SENT transfers the voltage of the second node N2 of the driving transistor DRT to the reference voltage line RVL may be used upon driving to sense the characteristic value of the subpixel SP.
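As noted above, the light emitting element's brightness follows the driving transistor's gate-source voltage Vgs and threshold voltage Vth. One common way to picture this, used here purely as an illustrative assumption rather than a formula given in the disclosure, is the square-law saturation model, in which the drive current grows with the square of the overdrive voltage; this is why per-subpixel drift in Vth or mobility visibly changes brightness and must be sensed and compensated:

    # Square-law saturation model for the driving current (illustrative only).
    K = 1.2e-4  # conduction factor, proportional to mobility; value assumed

    def drive_current(v_gs: float, v_th: float) -> float:
        overdrive = max(v_gs - v_th, 0.0)   # device off below threshold
        return 0.5 * K * overdrive ** 2     # I = (K/2) * (Vgs - Vth)^2

    print(drive_current(v_gs=4.0, v_th=1.0))  # nominal subpixel
    print(drive_current(v_gs=4.0, v_th=1.4))  # drifted Vth: noticeably dimmer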
In this case, the voltage transferred to the reference voltage line RVL may be a voltage for calculating the characteristic value of the subpixel SP or a voltage reflecting the characteristic value of the subpixel SP.

Each of the driving transistor DRT, the scan transistor SCT, and the sensing transistor SENT may be an n-type transistor or a p-type transistor. In aspects of the disclosure, for convenience of description, each of the driving transistor DRT, the scan transistor SCT, and the sensing transistor SENT is assumed to be an n-type transistor.

The storage capacitor Cst is not a parasitic capacitor (e.g., Cgs or Cgd), i.e., an internal capacitor existing between the gate node and the source node (or drain node) of the driving transistor DRT, but may be an external capacitor intentionally designed outside the driving transistor DRT.

The scan line SCL and the sense line SENL may be different gate lines GL. In this case, the scan pulse SCAN and the sense pulse SENSE may be separate gate signals, and the on-off timings of the scan transistor SCT and the on-off timings of the sensing transistor SENT in one subpixel SP may be independent. In other words, the on-off timings of the scan transistor SCT and the on-off timings of the sensing transistor SENT in one subpixel SP may be the same or different.

Alternatively, the scan line SCL and the sense line SENL may be the same gate line GL. In other words, the gate node of the scan transistor SCT and the gate node of the sensing transistor SENT in one subpixel SP may be connected with one gate line GL. In this case, the scan pulse SCAN and the sense pulse SENSE may be the same gate signal, and the on-off timings of the scan transistor SCT and the on-off timings of the sensing transistor SENT in one subpixel SP may be identical.

The structure of the subpixel SP shown in FIG. 2 is merely an example, and various changes may be made thereto, such as including one or more additional transistors or one or more additional capacitors. Although the structure of the subpixel SP is described with reference to FIG. 2 under the assumption that the display device 100 is a self-emission display device, if the display device 100 is a liquid crystal display, each subpixel SP may include a transistor and a pixel electrode.

Referring to FIG. 2, the display device 100 according to the present disclosure may include a line capacitor Crvl. The line capacitor Crvl may be a capacitor element having one end electrically connected to the reference voltage line RVL and the other end connected to the ground GND, or may be a parasitic capacitor formed on the reference voltage line RVL.

Referring to FIG. 2, the source driving integrated circuit SDIC may further include an analog-to-digital converter ADC and a sampling switch SAM. The reference voltage line RVL may be electrically connected to the analog-to-digital converter ADC. The analog-to-digital converter ADC may sense the voltage of the reference voltage line RVL. The sensing voltage sensed by the analog-to-digital converter ADC may be a voltage reflecting the characteristic value of the subpixel SP.

In the disclosure, the characteristic value of the subpixel SP may be a characteristic value of the driving transistor DRT or the light emitting element ED. The characteristic value of the driving transistor DRT may include the threshold voltage and mobility of the driving transistor DRT. The characteristic value of the light emitting element ED may include the threshold voltage of the light emitting element ED.
The analog-to-digital converter ADC may receive a sensed analog voltage, convert it into a digital value, and output it to the controller 140. The display device according to the present disclosure may further include a sampling switch SAM configured to switch an electrical connection between the reference voltage line RVL and the analog-to-digital converter ADC. The controller 140 may include a memory 210 configured to store characteristic value information about the subpixel SP and a compensation circuit 220 configured to perform calculation for compensating for a change in the characteristic value of the subpixel SP based on the information stored in the memory 210. The memory 210 may store information for compensating for the characteristic value of the subpixel SP. For example, the memory 210 may store information about the threshold voltage and mobility of the driving transistor DRT of each of the plurality of subpixels SP and information about the threshold voltage of the light emitting element ED included in each subpixel SP. Information about the threshold voltage of the light emitting element ED may be stored in a lookup table LUT. The compensation circuit 220 calculates the degree of change in the characteristic value of the corresponding subpixel SP based on the characteristic value information about the subpixel SP stored in the memory 210 and the digital value received from the analog-to-digital converter ADC. The compensation circuit 220 may update the characteristic value of the subpixel SP stored in the memory 210 based on the calculated value. The controller 140 compensates for image data by applying the change in the characteristic value of the subpixel SP, calculated by the compensation circuit 220, thereby driving the data driving circuit 120. The data signal Vdata reflecting the change in the characteristic value of the subpixel SP may be output to the data line DL through the digital-to-analog converter DAC. The process of sensing the change in the characteristic value of the subpixel SP and compensating for it is referred to as a “subpixel characteristic value compensation process.” FIG. 3 is a view illustrating a threshold voltage sensing (i.e., Vth sensing) driving scheme in a display device according to the present disclosure. The threshold voltage sensing driving operation for the driving transistor DRT may be performed through a sensing process including an initialization step, a tracking step, and a sampling step. The initialization step is the step of initializing the first node N1 and the second node N2 of the driving transistor DRT. In the initialization step, the scan transistor SCT and the sensing transistor SENT are turned on, and the second initialization switch SPRE is turned on. Accordingly, the first node N1 and the second node N2 of the driving transistor DRT are initialized to a threshold voltage sensing driving data signal Vdata and a sensing reference voltage VpreS, respectively (V1=Vdata, V2=VpreS). The tracking step is a step that changes the voltage V2 of the second node N2 of the driving transistor DRT until the second node N2 of the driving transistor DRT reaches a voltage state reflecting the threshold voltage or its change. In other words, the tracking step is the step of tracking the voltage of the second node N2 of the driving transistor DRT that may reflect the threshold voltage or a change thereof. In the tracking step, the second initialization switch SPRE is turned off or the sensing transistor SENT is turned off, so that the second node N2 of the driving transistor DRT is floated.
Accordingly, the voltage of the second node N2 of the driving transistor DRT rises. The rise of the voltage V2 of the second node N2 of the driving transistor DRT gradually slows down, and the voltage V2 is then saturated. The saturated voltage of the second node N2 of the driving transistor DRT may correspond to the difference between the data signal Vdata and the threshold voltage Vth or the difference between the data signal Vdata and the threshold voltage deviation ΔVth. If the voltage V2 of the second node N2 of the driving transistor DRT is saturated, the sampling step may be performed. The sampling step is the step of measuring the voltage reflecting the threshold voltage or its change, and the analog-to-digital converter ADC senses the voltage of the reference voltage line RVL, i.e., the voltage V2 of the second node N2 of the driving transistor DRT. The voltage Vsen sensed by the analog-to-digital converter ADC may be the voltage Vdata-Vth, which is the data signal Vdata minus the threshold voltage Vth, or the voltage Vdata-ΔVth, which is the data signal Vdata minus the threshold voltage deviation ΔVth. The threshold voltage Vth may be positive (Positive Vth) or negative (Negative Vth). FIG. 4 is a view illustrating a mobility sensing driving scheme for a driving transistor DRT in a display device according to the present disclosure. The mobility sensing driving operation for the driving transistor DRT may be performed through a sensing process including an initialization step, a tracking step, and a sampling step. The initialization step is the step of initializing the first node N1 and the second node N2 of the driving transistor DRT. In the initialization step, the scan transistor SCT and the sensing transistor SENT are turned on, and the second initialization switch SPRE is turned on. Accordingly, the first node N1 and the second node N2 of the driving transistor DRT are initialized to a mobility sensing driving data signal Vdata and a sensing reference voltage VpreS, respectively (V1=Vdata, V2=VpreS). The tracking step is a step that changes the voltage V2 of the second node N2 of the driving transistor DRT until the voltage of the second node N2 of the driving transistor DRT reaches a voltage state reflecting the mobility or its change. In other words, the tracking step is the step of tracking the voltage of the second node N2 of the driving transistor DRT that may reflect the mobility or its change. In the tracking step, the second initialization switch SPRE is turned off or the sensing transistor SENT is turned off, so that the second node N2 of the driving transistor DRT is floated. In this case, the scan transistor SCT may be turned off, so that the first node N1 of the driving transistor DRT may also be floated. Accordingly, the voltage V2 of the second node N2 of the driving transistor DRT starts to rise. The rising rate of the voltage V2 of the second node N2 of the driving transistor DRT varies depending on the current capability (i.e., mobility) of the driving transistor DRT. As the current capability (mobility) of the driving transistor DRT increases, the voltage V2 of the second node N2 of the driving transistor DRT rises more sharply. After the tracking period proceeds during a predetermined time Δt, i.e., after the voltage V2 of the second node N2 of the driving transistor DRT rises during the preset tracking time Δt, the sampling period may proceed.
During the tracking step, the rising rate of the voltage of the second node N2 of the driving transistor DRT corresponds to a voltage variation ΔV over the predetermined time Δt. In the sampling step, the sampling switch SAM is turned on, so that the analog-to-digital converter ADC and the reference voltage line RVL are electrically connected. Accordingly, the analog-to-digital converter ADC senses the voltage of the reference voltage line RVL, i.e., the voltage V2 of the second node N2 of the driving transistor DRT. The voltage Vsen sensed by the analog-to-digital converter ADC may be the sensing reference voltage VpreS plus the voltage variation ΔV accumulated during the preset tracking time Δt. According to the sensing driving operation for threshold voltage or mobility as described above in connection with FIGS. 3 and 4, the analog-to-digital converter ADC converts the voltage Vsen sensed for threshold voltage sensing or mobility sensing into a digital value and generates and outputs sensing data including the digital value (sensing value). The sensing data output from the analog-to-digital converter ADC may be provided to the compensation circuit 220. In some cases, the sensing data may be provided to the compensation circuit 220 through the memory 210. The compensation circuit 220 may grasp the characteristic value (e.g., threshold voltage or mobility) of the driving transistor DRT in the corresponding subpixel, or a change in the characteristic value of the driving transistor DRT (e.g., a change in threshold voltage or a change in mobility), based on the sensing data provided from the analog-to-digital converter ADC and perform a characteristic value compensation process. The change in the characteristic value of the driving transistor DRT may mean a change in the current sensing data from previous sensing data or a change in the current sensing data from initial compensation data. Accordingly, it is possible to grasp the characteristic value deviation between driving transistors DRT by comparing characteristic values or changes in characteristic value between the driving transistors DRT. When the change in the characteristic value of the driving transistor DRT means a change in the current sensing data from the initial compensation data, it is possible to grasp the characteristic value deviation (i.e., subpixel luminance deviation) between driving transistors DRT from the change in the characteristic value of the driving transistor DRT. The initial compensation data may be initial setting data that is set and stored when the display device is manufactured. The characteristic value compensation process may include threshold voltage compensation processing for compensating for the threshold voltage of the driving transistor DRT and mobility compensation processing for compensating for the mobility of the driving transistor DRT. The threshold voltage compensation processing may include the processing of calculating compensation data for compensating for the threshold voltage or threshold voltage deviation (change in threshold voltage), storing the calculated compensation data in the memory 210, or changing the image data Data into the calculated compensation data. The mobility compensation processing may include the processing of calculating compensation data for compensating for the mobility or mobility deviation (change in mobility), storing the calculated compensation data in the memory 210, or changing the image data Data into the calculated compensation data.
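To make the two sampled quantities concrete, here is a minimal sketch (not the patent's implementation) of recovering the threshold voltage and a mobility-proportional quantity from the voltages sampled in the schemes of FIGS. 3 and 4; the function names and numeric values are illustrative assumptions.

```python
# Sketch under the relationships stated above: in Vth sensing the sampled
# voltage saturates at Vdata - Vth (or Vdata - dVth), and in mobility
# sensing it equals VpreS plus the rise dV accumulated over the preset
# tracking time dt. All names and numbers are illustrative.

def threshold_voltage(vdata: float, vsen: float) -> float:
    """Vth sensing: Vsen = Vdata - Vth, so Vth = Vdata - Vsen."""
    return vdata - vsen

def mobility_metric(vpre_s: float, vsen: float, dt: float) -> float:
    """Mobility sensing: Vsen = VpreS + dV, and the rising rate dV/dt
    tracks the current capability (mobility) of the driving transistor."""
    return (vsen - vpre_s) / dt

# Example: Vdata = 5.0 V and a sampled 3.8 V imply Vth of about 1.2 V.
print(threshold_voltage(5.0, 3.8))
# Example: VpreS = 0.5 V rising to 1.3 V in 10 us gives dV/dt = 8.0e4 V/s.
print(mobility_metric(0.5, 1.3, 10e-6))
```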
The compensation circuit 220 may change the image data Data through the threshold voltage compensation processing or mobility compensation processing and supply the changed data to the corresponding source driving integrated circuit SDIC in the data driving circuit 120. Accordingly, the source driving integrated circuit SDIC converts the data changed by the compensation circuit 220 into a data signal through a digital-to-analog converter (DAC) and supplies it to the corresponding subpixel. By so doing, compensation for the subpixel characteristic value (threshold voltage compensation or mobility compensation) can indeed be achieved. When a power-on signal is generated, the display device according to aspects of the present disclosure may perform any one of the above-described compensation processes. Such a sensing process is referred to as an “on-sensing process.” When a power-off signal is generated, the display device according to the present disclosure may perform any one of the above-described compensation processes before an off-sequence, e.g., power-off, proceeds. Such a sensing process is referred to as an “off-sensing process.” FIG. 5 is a view schematically illustrating an input/output correspondence of an analog-to-digital converter ADC according to the present disclosure. Referring to FIG. 5, the analog-to-digital converter ADC may convert the sensing voltage Vsen, which is between the analog-to-digital converting reference voltage EVref and the analog-to-digital converting reference voltage EVref plus a predetermined voltage range (EVref+ADC Range), into a digital value Dsen corresponding to the sensing voltage Vsen. The range of the digital value output from the analog-to-digital converter ADC may be determined according to the resolution of the analog-to-digital converter ADC. For example, when the resolution of the analog-to-digital converter ADC is 10 bits, the analog-to-digital converter ADC may match the input sensing voltage Vsen to any one of digital values from 0 to 1023 and output the result. As another example, the analog-to-digital converter ADC may match the input sensing voltage Vsen to any one of digital values from 0 to 255 and output the result. The smallest value and the largest value among the digital values output from the analog-to-digital converter ADC may be defined as saturation values. In other words, the analog-to-digital converter ADC may not output digital values smaller than a first saturation value and may not output digital values larger than a second saturation value. If the level of the sensing voltage Vsen input to the analog-to-digital converter ADC is equal to or less than the analog-to-digital converting reference voltage EVref, the analog-to-digital converter ADC outputs the first saturation value. When the level of the sensing voltage Vsen input to the analog-to-digital converter ADC is larger than or equal to the analog-to-digital converting reference voltage plus the predetermined voltage range (EVref+ADC Range), the analog-to-digital converter ADC outputs the second saturation value.
According to the foregoing description, if the level of the sensing voltage Vsen is the analog-to-digital converting reference voltage EVref or less, or the level of the sensing voltage Vsen is not less than the analog-to-digital converting reference voltage plus the predetermined voltage range (EVref+ADC Range), the analog-to-digital converter ADC is unable to output a digital value exactly corresponding to the sensing voltage Vsen. FIG. 6 is a view exemplarily illustrating an analog-to-digital converting process of an analog-to-digital converter ADC according to the present disclosure. Referring to FIG. 6, if the level of the sensing voltage Vsen input to the analog-to-digital converter ADC is equal to or less than the analog-to-digital converting reference voltage EVref, the analog-to-digital converter ADC outputs the first saturation value. When the level of the sensing voltage Vsen input to the analog-to-digital converter ADC is larger than or equal to the analog-to-digital converting reference voltage plus the predetermined voltage range (EVref+ADC Range), the analog-to-digital converter ADC outputs the second saturation value. For example, when the resolution of the analog-to-digital converter ADC is 10 bits, the first saturation value may be 0, and the second saturation value may be 1023. When the level of the sensing voltage Vsen is included in an underflow area that is equal to or less than the analog-to-digital converting reference voltage EVref, the analog-to-digital converter ADC outputs the first saturation value regardless of the level of the sensing voltage Vsen. When the level of the sensing voltage Vsen is included in an overflow area that is larger than or equal to the analog-to-digital converting reference voltage plus the predetermined voltage range (EVref+ADC Range), the analog-to-digital converter ADC outputs the second saturation value regardless of the level of the sensing voltage Vsen. The controller 140 may receive the digital value output from the analog-to-digital converter ADC, calculate a subpixel degradation compensation value, and control the data driving circuit to input a data signal reflecting the calculated degradation compensation value to the corresponding subpixel. When the first saturation value or the second saturation value is input to the controller 140, an issue may arise in which the characteristic value of the corresponding subpixel cannot be properly compensated for. FIG. 7 is a view illustrating an example in which an analog-to-digital converting reference voltage EVref is changed depending on the level of a sensing voltage Vsen input to an analog-to-digital converter ADC. Referring to FIG. 7, if the level of the sensing voltage Vsen input to the analog-to-digital converter ADC is included in the underflow area, the voltage level of the analog-to-digital converting reference voltage EVref may be changed to decrease. Specifically, the voltage level of the analog-to-digital converting reference voltage EVref may be decreased until the level of the sensing voltage Vsen is not included in the underflow area. As another example, if the level of the sensing voltage Vsen input to the analog-to-digital converter ADC is included in the above-described overflow area, the voltage level of the analog-to-digital converting reference voltage EVref may be changed to increase.
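As a compact restatement of the transfer behavior described for FIGS. 5 and 6, the sketch below models an ADC that clamps to its saturation values outside the window from EVref to EVref+ADC Range and maps in-range inputs linearly onto digital codes; the 10-bit resolution follows the example above, and the function name and voltages are illustrative assumptions.

```python
# Minimal sketch of the described ADC behavior: inputs at or below EVref
# clamp to the first saturation value, inputs at or above EVref + ADC_RANGE
# clamp to the second, and in-range inputs map linearly onto the codes.

def adc_convert(vsen: float, evref: float, adc_range: float,
                resolution_bits: int = 10) -> int:
    second_saturation = (1 << resolution_bits) - 1  # 1023 for 10 bits
    if vsen <= evref:
        return 0                      # first saturation value (underflow area)
    if vsen >= evref + adc_range:
        return second_saturation      # second saturation value (overflow area)
    # Linear mapping of the in-range analog voltage to a digital code.
    return round((vsen - evref) / adc_range * second_saturation)

print(adc_convert(0.2, evref=0.5, adc_range=2.0))  # 0    (underflow)
print(adc_convert(1.5, evref=0.5, adc_range=2.0))  # 512  (in range)
print(adc_convert(3.0, evref=0.5, adc_range=2.0))  # 1023 (overflow)
```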
Specifically, the voltage level of the analog-to-digital converting reference voltage EVref may be increased until the level of the sensing voltage Vsen is not included in the overflow area. Referring to FIG. 7, after the voltage level of the analog-to-digital converting reference voltage EVref is changed, an analog-to-digital converting reference voltage EVref of the changed voltage level is applied to the analog-to-digital converter ADC. As the voltage level of the analog-to-digital converting reference voltage EVref is changed according to the level of the sensing voltage Vsen, the analog-to-digital converter ADC may output a digital value between the first saturation value and the second saturation value reflecting the characteristic value of the subpixel. FIG. 8 is a view illustrating a characteristic in which the voltage levels of an analog-to-digital converting reference voltage EVref and a driving reference voltage VpreR are varied depending on the level of a sensing voltage Vsen. Referring to FIG. 8, the analog-to-digital converter ADC receives a sensing voltage Vsen reflecting the characteristic value of at least one subpixel SP and outputs a digital value Dsen corresponding to the sensing voltage Vsen to the controller 140. When the input digital value Dsen is the first saturation value or the second saturation value, the controller 140 senses the above-described at least one subpixel SP again. The power management circuit 810 may change the voltage level of the analog-to-digital converting reference voltage EVref under the control of the controller 140. The controller 140 may control the power management circuit 810 to decrease the voltage level of the analog-to-digital converting reference voltage EVref when the input digital value Dsen is the first saturation value. The controller 140 may control the power management circuit 810 to increase the voltage level of the analog-to-digital converting reference voltage EVref when the input digital value Dsen is the second saturation value. The power management circuit 810 may perform at least one of a first driving operation for decreasing the voltage level of the analog-to-digital converting reference voltage EVref by a preset voltage level and a second driving operation for increasing the voltage level by a preset voltage level. The controller 140 may sense the characteristic values of the plurality of subpixels SP during an off-sensing process period after a turn-off signal is input to the display device. The controller 140 may control the power management circuit 810 to increase or decrease the voltage level of the analog-to-digital converting reference voltage EVref during the off-sensing process period. The controller 140 may perform a sensing driving operation to compensate for the characteristic values of the plurality of subpixels SP during the off-sensing process period. The plurality of subpixels SP may include normal subpixels NSP, for which the digital value Dsen reflecting the characteristic value of the corresponding subpixel SP is between the first saturation value and the second saturation value, and abnormal subpixels ASP, for which the digital value Dsen reflecting the characteristic value of the corresponding subpixel SP is the first saturation value or the second saturation value. The controller 140 may sense the characteristic values of the plurality of subpixels SP once each during the off-sensing process period and then sense the characteristic values of the abnormal subpixels ASP again.
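The re-sensing behavior described for FIGS. 7 and 8 can be summarized as a loop: step EVref down by a preset level on the first saturation value, up on the second, and sense again until an in-range code appears. The sketch below is a hedged illustration of that loop; `sense_subpixel`, the step size, and the retry bound are assumptions, not the patent's implementation.

```python
# Illustrative re-sensing loop for an abnormal subpixel. `sense_subpixel`
# stands in for one sensing driving operation at a given EVref and returns
# the ADC's digital code; 0 and 1023 are the 10-bit saturation values.
FIRST_SAT, SECOND_SAT = 0, 1023

def resense(sense_subpixel, evref: float, step: float = 0.1,
            max_tries: int = 10):
    """Returns (in-range digital value, final EVref, total EVref change)."""
    evref0 = evref
    for _ in range(max_tries):
        dsen = sense_subpixel(evref)
        if dsen == FIRST_SAT:        # underflow area: decrease EVref
            evref -= step
        elif dsen == SECOND_SAT:     # overflow area: increase EVref
            evref += step
        else:                        # code now reflects the subpixel
            return dsen, evref, evref - evref0
    raise RuntimeError("sensing voltage never entered the ADC window")
```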
The analog-to-digital converting reference voltage EVref is input to the analog-to-digital converting reference voltage input node N_EVref of the analog-to-digital converter ADC. The voltage level of the analog-to-digital converting reference voltage EVref may differ between the time when the characteristic values of the plurality of subpixels SP are sensed once each and the time when the characteristic values of the abnormal subpixels ASP are sensed again. The controller 140 may sense the characteristic value of the at least one subpixel SP again after the voltage level of the analog-to-digital converting reference voltage EVref is changed. The at least one subpixel may be an abnormal subpixel ASP. The analog-to-digital converter ADC converts the sensing voltage Vsen into a digital value Dsen according to the changed analog-to-digital converting reference voltage EVref and outputs it to the controller 140. The controller 140 may calculate a compensation value for the above-described at least one subpixel SP when the input digital value Dsen is a value between the first saturation value and the second saturation value and may store the calculated compensation value in the memory. The controller 140 may store, in the memory, the degree of the change in the voltage level of the analog-to-digital converting reference voltage EVref. If the display device is turned back on after the off-sensing process period is terminated, the controller 140 may control the power management circuit 810 to change the voltage level of the driving reference voltage VpreR. Specifically, the controller 140 may control the power management circuit 810 to change the voltage level of the driving reference voltage VpreR based on the degree of change in the analog-to-digital converting reference voltage EVref. When the voltage level of the analog-to-digital converting reference voltage EVref is decreased by a first voltage level during the off-sensing process period, the power management circuit 810 may decrease the voltage level of the driving reference voltage VpreR by the above-described first voltage level and input it to the driving reference voltage input node NpreR. When the voltage level of the analog-to-digital converting reference voltage EVref is increased by the first voltage level during the off-sensing process period, the power management circuit 810 may increase the voltage level of the driving reference voltage VpreR by the above-described first voltage level and input it to the driving reference voltage input node NpreR. The driving reference voltage VpreR is a voltage input to the reference voltage line RVL and is a voltage commonly applied to a plurality of subpixels SP electrically connected to the reference voltage line RVL. Specifically, the driving reference voltage VpreR is a common voltage jointly input to the plurality of subpixels SP during an active period when the data signal Vdata for image display is input to the plurality of data lines DL. The controller 140 controls the voltage level of the data signal Vdata input to the normal subpixel NSP based on the above-described degree of change in the voltage level of the analog-to-digital converting reference voltage EVref. For example, when the voltage level of the analog-to-digital converting reference voltage EVref is decreased or increased by the first voltage level, the controller 140 may perform additional compensation for decreasing or increasing the voltage level of the data signal Vdata input to the normal subpixel NSP by the first voltage level.
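Putting the two shifts together, the following is a minimal sketch of the bookkeeping just described, assuming a single shared "first voltage level": VpreR follows the EVref change at the next power-on, and the data voltages of the normal subpixels are shifted by the same level so their gate-source voltage is unchanged. The function and variable names are illustrative.

```python
# Sketch: after the off-sensing process changed EVref by delta (the "first
# voltage level", negative for a decrease), shift VpreR by the same amount
# and additionally shift each normal subpixel's data voltage by delta, so
# that Vgs = Vdata - VpreR for a normal subpixel stays exactly as before.

def power_on_shifts(vpre_r: float, vdata_normal: list, delta: float):
    vpre_r_new = vpre_r + delta
    vdata_new = [v + delta for v in vdata_normal]   # additional compensation
    return vpre_r_new, vdata_new

# Example: EVref was lowered by 0.2 V during the off-sensing process.
print(power_on_shifts(1.0, [3.2, 4.1], -0.2))  # -> (0.8, [3.0, 3.9...])
```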
Accordingly, even when the voltage level of the driving reference voltage VpreR, which is the common voltage, is changed, the change in the voltage level of the driving reference voltage VpreR is not reflected in the voltage difference Vgs between the gate node and source node of the driving transistor included in the normal subpixel NSP. The controller 140 may control the data driving circuit to output a data signal Vdata reflecting the degradation compensation value of the corresponding subpixel SP as the data signal Vdata input to the abnormal subpixel ASP. Accordingly, the change in the voltage level of the driving reference voltage VpreR may be reflected in the voltage difference between the gate node and source node of the driving transistor included in the abnormal subpixel ASP. In the display device according to the disclosure, the voltage level of the driving reference voltage VpreR and the voltage level of the data signal Vdata reflect the degree of change in the voltage level of the analog-to-digital converting reference voltage EVref, allowing additional compensation for the degradation of the abnormal subpixel ASP, for which it was conventionally hard to properly compensate for a change in characteristic value. This leads to substantially the same effect as increasing the lifespan of the display device. FIG. 9 is a flow chart schematically illustrating a method for driving a display device according to the present disclosure. Referring to FIG. 9, the display device according to the present disclosure may perform a sensing driving operation to compensate for a change in characteristic values of a plurality of subpixels. The analog-to-digital converter ADC receives a sensing voltage Vsen reflecting the characteristic value of at least one subpixel SP and outputs a digital value Dsen corresponding to the sensing voltage Vsen (S910). The analog-to-digital converter ADC may output a first saturation value as the digital value Dsen if the input sensing voltage Vsen is smaller than the analog-to-digital converting reference voltage EVref. The analog-to-digital converter ADC may output a second saturation value if the input sensing voltage Vsen is equal to or larger than the analog-to-digital converting reference voltage plus a predetermined voltage range (EVref+ADC Range). If the input sensing voltage Vsen is more than the analog-to-digital converting reference voltage EVref and is less than the analog-to-digital converting reference voltage plus the predetermined voltage range (EVref+ADC Range), the analog-to-digital converter ADC outputs a digital value Dsen between the first saturation value and the second saturation value. The controller 140 may determine whether the input digital values Dsen include the first saturation value or the second saturation value (S920). When all of the digital values output from the analog-to-digital converter ADC are digital values between the first saturation value and the second saturation value, the controller 140 calculates a sensing compensation value according to a first sensing driving step (S930) and stores the calculated sensing compensation value in the memory (S990), and the sensing driving operation is terminated. When at least one digital value among all the digital values output from the analog-to-digital converter ADC is the first saturation value or the second saturation value, the controller 140 controls the power management circuit to change the voltage level of the analog-to-digital converting reference voltage EVref (S940).
If the voltage level of the analog-to-digital converting reference voltage EVref is changed, the controller 140 performs a repeated sensing driving operation on the at least one subpixel SP for which the first saturation value or second saturation value was calculated (S950). The repeated sensing driving operation may be performed, e.g., at a time after the first sensing driving operation on the plurality of subpixels SP is terminated. Accordingly, during the first sensing driving operation, the voltage level of the analog-to-digital converting reference voltage EVref may remain constant. The controller 140 may determine whether the digital value Dsen input according to the repeated sensing driving operation is a digital value between the first saturation value and the second saturation value (S960). When the digital value input according to the repeated sensing driving operation is the first saturation value or the second saturation value, the controller 140 changes the voltage level of the analog-to-digital converting reference voltage EVref again (S940) and performs the repeated sensing driving operation again (S950). In other words, the controller 140 may perform the repeated sensing driving operation until the level of the sensing voltage Vsen is larger than the level of the analog-to-digital converting reference voltage EVref and is smaller than the level of the analog-to-digital converting reference voltage plus the predetermined voltage range (i.e., EVref<Vsen<EVref+ADC Range). Such a repeated sensing driving operation may be performed two times or more. When all the digital values input according to the repeated sensing driving operation are values between the first saturation value and the second saturation value, the controller 140 calculates a sensing compensation value for all the subpixels based on the digital values input according to the first sensing driving operation and the repeated sensing driving operation (S970). The controller 140 calculates an additional compensation value based on the changed voltage level of the analog-to-digital converting reference voltage EVref (S970). During the image display period after the display device is turned back on, the data signal Vdata reflecting the additional compensation value may be input to the normal subpixels NSP for which the digital value input to the controller 140 during the period when the first sensing driving operation is performed is between the first saturation value and the second saturation value. The controller 140 may calculate the degree of change in the voltage level of the driving reference voltage VpreR based on the degree of change in the voltage level of the analog-to-digital converting reference voltage EVref. The power management circuit may change the voltage level of the driving reference voltage VpreR under the control of the controller 140 (S980). Unlike what is shown in FIG. 9, the step S980 of changing the voltage level of the driving reference voltage VpreR may be performed along with the step S940 of changing the voltage level of the analog-to-digital converting reference voltage EVref. As an example, the power management circuit may change the voltage level of the driving reference voltage VpreR by a preset voltage level in the step S940 of changing the voltage level of the analog-to-digital converting reference voltage EVref by the preset voltage level.
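For orientation, the flow of steps S910 through S990 can be condensed into one routine. The following is a hedged sketch, not a real driver API: `sense(sp, evref)` stands in for one hardware sensing driving operation, compensation values are kept abstract as the in-range digital codes, and the step size is an assumption.

```python
FIRST_SAT, SECOND_SAT = 0, 1023   # 10-bit saturation values

def off_sensing_flow(subpixels, sense, evref, step=0.1):
    values, abnormal = {}, []
    for sp in subpixels:                      # S910: first sensing driving
        d = sense(sp, evref)
        values[sp] = d
        if d in (FIRST_SAT, SECOND_SAT):      # S920: saturation check
            abnormal.append(sp)
    delta_evref = 0.0
    for sp in abnormal:                       # S950/S960: repeated sensing
        d = values[sp]
        while d in (FIRST_SAT, SECOND_SAT):
            change = -step if d == FIRST_SAT else step
            evref += change                   # S940: change EVref
            delta_evref += change
            d = sense(sp, evref)
        values[sp] = d
    # S930/S970: sensing compensation values come from the in-range codes,
    # plus an additional compensation value derived from the EVref change.
    # S980: VpreR is shifted by the same amount as EVref.
    # S990: everything is stored for use after the next power-on.
    return {"codes": values, "delta_evref": delta_evref,
            "vpre_r_shift": delta_evref}
```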
The voltage level of the driving reference voltage VpreR input to the driving reference voltage input node NpreR may be changed based on the degree of change in the voltage level of the analog-to-digital converting reference voltage EVref. The timing at which the voltage-level-changed driving reference voltage VpreR is input to the reference voltage line RVL may be an active period after the display device is powered on. The controller 140 may store sensing compensation values for the plurality of subpixels SP (S990). The controller 140 may store the additional compensation value of the normal subpixel NSP in the memory. Accordingly, if the display device is turned back on in the period after the off-sensing process, the data driving circuit may output, to the data line, the data signal Vdata reflecting the sensing compensation value and/or additional compensation value of the plurality of subpixels SP, calculated during the off-sensing process period. Accordingly, it is possible to compensate for characteristic values even for abnormal subpixels ASP for which it was conventionally difficult to compensate for changes in characteristic values. Therefore, it is possible to substantially increase the lifespan of the display device. The foregoing aspects are briefly described below. According to aspects of the disclosure, there may be provided a display device 100, comprising a reference voltage line RVL electrically connected with a first node NpreR and receiving a sensing voltage Vsen reflecting a characteristic value of at least one subpixel SP; and an analog-to-digital converter ADC including a second node N_EVref, receiving the sensing voltage Vsen, and outputting a digital value Dsen corresponding to the sensing voltage Vsen, wherein a voltage level of a driving reference voltage VpreR applied to the first node NpreR and a voltage level of an analog-to-digital converting reference voltage EVref applied to the second node N_EVref are changed depending on a level of the sensing voltage Vsen. According to aspects of the disclosure, there may be provided the display device, wherein a degree of the change in the voltage level of the driving reference voltage VpreR is identical to a degree of the change in the voltage level of the analog-to-digital converting reference voltage EVref. According to aspects of the disclosure, there may be provided the display device, further comprising a power management circuit 810 controlling the voltage level of the driving reference voltage VpreR and the voltage level of the analog-to-digital converting reference voltage EVref. According to aspects of the disclosure, there may be provided the display device, wherein the analog-to-digital converter converts an analog voltage in a predetermined voltage range from the analog-to-digital converting reference voltage into a digital value corresponding to the analog voltage and outputs the digital value.
According to aspects of the disclosure, there may be provided the display device, wherein the power management circuit 810 performs at least one of a first driving operation for decreasing the voltage levels of the analog-to-digital converting reference voltage EVref and the driving reference voltage VpreR if the level of the sensing voltage Vsen is not more than the analog-to-digital converting reference voltage EVref and a second driving operation for increasing the voltage levels of the analog-to-digital converting reference voltage EVref and the driving reference voltage VpreR if the level of the sensing voltage Vsen is not less than the analog-to-digital converting reference voltage plus the predetermined voltage range (EVref+ADC Range). According to aspects of the disclosure, there may be provided the display device, further comprising a controller receiving the digital value and performing a repeated sensing driving operation which repeatedly senses the characteristic value of the at least one subpixel if a first saturation value or a second saturation value is input to the controller. According to aspects of the disclosure, there may be provided the display device further comprising a controller 140 receiving the digital value Dsen, wherein the analog-to-digital converter ADC outputs a first saturation value if the level of the sensing voltage Vsen of the at least one subpixel SP is not more than the analog-to-digital converting reference voltage EVref and outputs a second saturation value if the level of the sensing voltage Vsen of the at least one subpixel SP is not less than the voltage level of the analog-to-digital converting reference voltage plus the predetermined voltage range (EVref+ADC Range), and wherein the controller 140 performs a repeated sensing driving operation which again senses the characteristic value of the at least one subpixel SP if the first saturation value or the second saturation value is input to the controller 140. According to aspects of the disclosure, there may be provided the display device, wherein during a period when the repeated sensing driving operation is performed, the power management circuit 810 changes the voltage level of the analog-to-digital converting reference voltage EVref by a preset voltage level and inputs the changed analog-to-digital converting reference voltage EVref to the second node N_EVref. According to aspects of the disclosure, there may be provided the display device, wherein the controller 140 performs the repeated sensing driving operation two or more times until the level of the sensing voltage Vsen is larger than the analog-to-digital converting reference voltage EVref and smaller than the voltage level of the analog-to-digital converting reference voltage EVref plus the predetermined voltage range (EVref+ADC Range). According to aspects of the disclosure, there may be provided the display device, wherein the controller 140 performs the repeated sensing driving operation for again sensing the characteristic value of the at least one subpixel ASP after performing a first sensing driving operation for sensing a characteristic value of a plurality of subpixels SP.
According to aspects of the disclosure, there may be provided the display device, further comprising a data driving circuit 120 configured to control a data signal Vdata input to a plurality of data lines DL, wherein the controller 140 controls the data driving circuit 120 to input a data signal Vdata reflecting the degree of change in the voltage levels of the analog-to-digital converting reference voltage EVref and the driving reference voltage VpreR to the remaining subpixels NSP, except for the at least one subpixel ASP, among the plurality of subpixels SP. According to aspects of the disclosure, there may be provided the display device, wherein a driving period of the display device 100 includes an active period during which a data signal Vdata for image display is input to a plurality of data lines DL and a blank period between two different active periods, and wherein the driving reference voltage VpreR is a voltage input to the at least one subpixel SP electrically connected with the reference voltage line RVL during the active period. According to aspects of the disclosure, there may be provided a method for driving a display device 100, comprising: receiving a sensing voltage Vsen reflecting a characteristic value of at least one subpixel SP from a reference voltage line RVL and outputting a digital value Dsen corresponding to the received sensing voltage Vsen to a controller 140, by an analog-to-digital converter ADC (S910); changing a voltage level of an analog-to-digital converting reference voltage EVref input to the analog-to-digital converter ADC based on the sensing voltage Vsen (S940); and changing a voltage level of a driving reference voltage VpreR input to a first node NpreR based on a degree of the change in the voltage level of the analog-to-digital converting reference voltage EVref (S980), wherein the first node NpreR is a node electrically connected with the reference voltage line RVL. According to aspects of the disclosure, there may be provided the method further comprising determining whether the input digital value Dsen corresponds to a first saturation value or a second saturation value of the analog-to-digital converter ADC, by the controller 140 (S920); and performing a repeated sensing driving operation on a subpixel, among the at least one subpixel SP, for which the voltage reflecting the characteristic value of the subpixel is changed to the first saturation value or the second saturation value, by the controller 140 (S950). According to aspects of the disclosure, there may be provided the method further comprising storing, in a memory, an additional compensation value calculated according to the repeated sensing driving operation by the controller 140 (S970).
According to aspects of the disclosure, there may be provided a display device, comprising: an analog-to-digital converter configured to receive a sensing voltage reflecting a characteristic value of at least one subpixel from a reference voltage line and to output a digital value corresponding to the received sensing voltage; a controller configured to receive the digital value from the analog-to-digital converter and to sense the characteristic value of the at least one subpixel again when the received digital value is a first saturation value or a second saturation value; and a power management circuit configured to change an analog-to-digital converting reference voltage under control of the controller, wherein the controller is configured to control the analog-to-digital converter to change a voltage level of the analog-to-digital converting reference voltage during an off-sensing process period and to perform a sensing driving operation to compensate for a characteristic value of the at least one subpixel during the off-sensing process period. According to aspects of the disclosure, there may be provided the display device, wherein the at least one subpixel includes normal subpixels, for which the digital value reflecting the characteristic value of the corresponding subpixel is between the first saturation value and the second saturation value, and abnormal subpixels, for which the digital value reflecting the characteristic value of the corresponding subpixel is the first saturation value or the second saturation value. According to aspects of the disclosure, there may be provided the display device, wherein the controller performs a repeated sensing driving operation which repeatedly senses the characteristic value of the at least one subpixel if the first saturation value or the second saturation value is input to the controller. According to aspects of the disclosure, there may be provided the display device, wherein the power management circuit is configured to decrease the analog-to-digital converting reference voltage when the input digital value is the first saturation value and to increase the analog-to-digital converting reference voltage when the input digital value is the second saturation value. According to aspects of the disclosure, there may be provided the display device, wherein the analog-to-digital converter is configured to output the first saturation value when the level of the sensing voltage of the at least one subpixel is not greater than the analog-to-digital converting reference voltage, and is configured to output the second saturation value when the level of the sensing voltage of the at least one subpixel is not less than the voltage level of the analog-to-digital converting reference voltage plus the predetermined voltage range. The above-described aspects are merely examples, and it will be appreciated by one of ordinary skill in the art that various changes may be made thereto without departing from the scope of the disclosure. Accordingly, the aspects set forth herein are provided for illustrative purposes rather than to limit the scope of the disclosure, and it should be appreciated that the scope of the present disclosure is not limited by these aspects. The scope of the present disclosure should be construed by the following claims, and all technical spirits within equivalents thereof should be interpreted to belong to the scope of the present disclosure. | 64,413 |
11862110 | DETAILED DESCRIPTION OF THE EMBODIMENTS Advantages and features of the present disclosure, and implementation methods thereof, will be clarified through the following embodiments described with reference to the accompanying drawings. The present disclosure may, however, be embodied in different forms and should not be construed as limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the present disclosure to those skilled in the art. Furthermore, the present disclosure is only defined by the scope of the claims. The shapes, sizes, ratios, angles, numbers, and the like disclosed in the drawings to describe various embodiments of the present disclosure are merely exemplary, and the present disclosure is not limited thereto. Throughout this specification, like reference numerals refer to like elements. As used herein, the terms “comprise,” “having,” “including,” and the like suggest that other parts can be added unless the term “only” is used. As used herein, the singular forms “a,” “an,” and “the” are intended to include the plural forms as well, unless context clearly indicates otherwise. Elements in various embodiments of the present disclosure are to be interpreted as including margins of error even without explicit statements. In describing a position relationship, for example, when a position relation between two parts is described as “on˜,” “over˜,” “under˜,” “above˜,” and “next˜,” one or more other parts may be disposed between the two parts unless “just” or “direct” is used. It will be understood that, although the terms “first,” “second,” etc. may be used herein to describe various elements, these elements should not be limited by these terms. These terms are only used to distinguish one element from another. For example, a first element could be termed a second element, and, similarly, a second element could be termed a first element, without departing from the scope of the present disclosure. Hereinafter, embodiments of the present disclosure will be described in detail with reference to the accompanying drawings. In the following embodiments, as an example of an electroluminescent display apparatus, an organic light emitting display apparatus including an organic light emitting material will be mainly described. However, the inventive concept is not limited to an organic light emitting display apparatus and may be applied to an inorganic light emitting display apparatus including an inorganic light emitting material. FIG. 1 is a diagram illustrating an electroluminescent display apparatus according to an embodiment of the present disclosure. Referring to FIG. 1, the electroluminescent display apparatus can include a display panel 10, a timing controller 11, a data driver 12, a gate driver 13, and a power circuit. A plurality of pixels PXL included in the display panel 10 can be arranged as a matrix to configure a pixel array. In the pixel array, each of the pixels PXL can be connected to a data line 14, a gate line 15, an initialization power line, a high level power line, and a low level power line. Here, the gate line 15 connected to each pixel PXL can include two scan lines and two emission lines.
Each pixel PXL can be supplied with a data voltage through the data line 14, scan signals through two scan lines, emission signals through two emission lines, an initialization voltage Vinit through the initialization power line, a high level driving voltage VDDEL through the high level power line, and a low level driving voltage VSSEL through the low level power line. The initialization power line, the high level power line, and the low level power line can be connected to the power circuit. The power circuit can generate the initialization voltage Vinit, the high level driving voltage VDDEL, and the low level driving voltage VSSEL. Each pixel PXL can perform a programming operation and an emission operation based on a driving waveform formed by the scan signals and the emission signals to implement luminance corresponding to image data DATA. To this end, each pixel PXL can include a driving element which generates a driving current corresponding to the image data DATA and a light emitting device which emits light having brightness proportional to a level of the driving current. The driving element included in each pixel PXL can be implemented with an oxide transistor, which has a good leakage current characteristic, but is not limited thereto. Each pixel PXL can perform a programming operation for setting the driving current prior to an emission operation at every refresh frame. The programming operation according to the present embodiment can include an initialization operation of applying the initialization voltage Vinit to an anode electrode of a light emitting device to turn off the light emitting device, a sampling operation of sampling a threshold voltage of a driving element to reflect the sampled threshold voltage in a gate-source voltage of the driving element, and a driving current setting operation of reflecting image data DATA in the gate-source voltage of the driving element. In performing such a programming operation, the light emitting device can maintain a non-emission state. The initialization voltage Vinit can be for preventing the light emitting device from emitting undesired light in the programming operation and can be selected within a voltage range which is sufficiently lower than an operation point voltage of the light emitting device; for example, it can be selected as a voltage at or near the low level driving voltage VSSEL. Each pixel PXL can be driven based on variable refresh rate (VRR) technology. In order to implement the VRR technology, one or more anode reset frames can be disposed between adjacent refresh frames. In the anode reset frame, a data refresh operation may not be performed on pixels PXL, and luminance of a previous refresh frame can be maintained during the anode reset frame. However, the anode reset voltage can be applied to the pixels PXL during an anode reset interval in the anode reset frame. The anode reset interval of the anode reset frame can correspond to a programming operation interval of a refresh frame, and the anode reset voltage can be a voltage for stopping the emission of light from the light emitting device during the anode reset interval. As a result, an emission maintenance time between the refresh frame and the anode reset frame can be equal. This will be described below with reference to FIGS. 5 and 6. The timing controller 11 can receive digital video data DATA and refresh rate variation information from a host system.
The timing controller 11 can adjust the number of anode reset frames disposed between adjacent refresh frames based on the refresh rate variation information, and thus can vary a refresh period of the digital video data DATA. The timing controller 11 can apply a weight to image data DATA corresponding to the refresh frame to modulate the image data DATA, in order to decrease flickers occurring at a time at which the anode reset frame is changed to the refresh frame. The timing controller 11 can adjust the overall level of the weight to be applied to the image data DATA based on the number of anode reset frames. The timing controller 11 can further adjust the overall level of the weight to be applied to the image data DATA based on an average transition amount of the image data DATA between adjacent refresh frames. The timing controller 11 can further adjust, in pixel row units, the level of the weight to be applied to the image data DATA, based on a distance between pixel rows and output terminals of the data driver 12. Here, a pixel row can denote, rather than a physical signal line, a set of pixels PXL which share the same gate line 15 and are adjacent to one another in a horizontal direction. The timing controller 11 can supply the data driver 12 with the weight-applied image data DATA. Also, the timing controller 11 can generate a data control signal DDC for controlling an operation timing of the data driver 12 and a gate control signal GDC for controlling an operation timing of the gate driver 13, based on timing signals such as a vertical synchronization signal Vsync, a horizontal synchronization signal Hsync, a dot clock signal DCLK, and a data enable signal DE. A detailed configuration and operation of the timing controller 11 will be described below with reference to FIG. 17. The data driver 12 can digital-to-analog convert the image data DATA input from the timing controller 11 based on the data control signal DDC in the refresh frame to generate a compensation data voltage. The compensation data voltage can be a flicker compensation data voltage which is generated by reflecting a weight in a data voltage of an input gray level. The data driver 12 can output the flicker compensation data voltage to the data lines 14 of the display panel 10 through output terminals in the programming operation interval of the refresh frame. The data driver 12 can generate an anode reset voltage based on the data control signal DDC in the anode reset frame. The anode reset voltage can be a voltage irrelevant to the image data DATA. The data driver 12 can output the anode reset voltage to the data lines 14 of the display panel 10 through the output terminals in the anode reset interval of the anode reset frame. The gate driver 13 can generate first gate signals based on the gate control signal GDC in the refresh frame. The first gate signals can include scan signals and emission signals. The gate driver 13 can output the first gate signals to the gate lines 15 of the display panel 10 in the refresh frame. The gate driver 13 can generate second gate signals based on the gate control signal GDC in the anode reset frame. The second gate signals can include the scan signals and the emission signals. The gate driver 13 can output the second gate signals to the gate lines 15 of the display panel 10 in the anode reset frame. The gate driver 13 can be directly provided in a bezel area of the display panel 10 in a gate driver in panel (GIP) type. Here, the bezel area can correspond to a non-display area outside a screen area including the pixel array.
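The patent names the inputs to this weighting (the number of anode reset frames, the average data transition between adjacent refresh frames, and each pixel row's distance from the data driver outputs) but, at least here, no formula. The sketch below is purely illustrative of combining those three inputs into a per-row weight; the linear form and every coefficient are invented for the example.

```python
# Purely illustrative per-row flicker-compensation weighting: a global
# term from the number of anode reset frames and the average transition,
# plus a per-row term from the row's distance to the data driver outputs.
# The linear combination and coefficients are assumptions, not the patent's.

def weighted_row(data_row, n_anode_reset_frames, avg_transition,
                 row_distance, k_f=0.001, k_t=0.0005, k_d=0.0001):
    weight = 1.0 + (k_f * n_anode_reset_frames
                    + k_t * avg_transition
                    + k_d * row_distance)
    return [v * weight for v in data_row]

print(weighted_row([100, 120], n_anode_reset_frames=4,
                   avg_transition=8.0, row_distance=500))
```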
The bezel area may not display an image. FIG. 2 is a diagram illustrating a circuit configuration of a pixel provided in the display panel of FIG. 1. The pixel circuit of FIG. 2 is merely an embodiment, and the technical spirit of the present disclosure is not limited to a configuration of the pixel circuit. Referring to FIG. 2, a first pixel PXL of a plurality of pixels arranged in an nth (where n is a natural number) pixel row is illustrated. The first pixel PXL can include a light emitting device EL, a driving element DT, first to fifth switch elements T1 to T5, and a storage capacitor Cst. The light emitting device EL can be implemented with an organic light emitting diode (OLED) which emits light with a driving current supplied through the driving element DT. A multi-layer organic compound layer can be disposed between an anode electrode and a cathode electrode of the light emitting device EL. The organic compound layer can include a hole injection layer (HIL), a hole transport layer (HTL), an emission layer (EML), an electron transport layer (ETL), and an electron injection layer (EIL). The anode electrode of the light emitting device EL can be connected to a node C, and the cathode electrode of the light emitting device EL can be connected to an input terminal for the low level driving voltage VSSEL. The driving element DT can generate the driving current applied to the light emitting device EL, based on a gate-source voltage thereof. A gate electrode of the driving element DT can be connected to a node A, a drain electrode thereof can be connected to a node B, and a source electrode thereof can be connected to a node D. The driving element DT can be implemented with a MOSFET including an oxide semiconductor layer, but is not limited thereto. The first switch element T1 can be connected between the node A and the node B and can be turned on/off based on a first scan signal SCAN1 from the first scan line 151. A gate electrode of the first switch element T1 can be connected to the first scan line 151. The second switch element T2 can be connected between an input terminal for the initialization voltage Vinit and the node C and can be turned on/off based on the first scan signal SCAN1 from the first scan line 151. A gate electrode of the second switch element T2 can be connected to the first scan line 151. The third switch element T3 can be connected between the first data line 14 and the node D and can be turned on/off based on a second scan signal SCAN2 from the second scan line 152. A gate electrode of the third switch element T3 can be connected to the second scan line 152. The fourth switch element T4 can be connected between an input terminal for the high level driving voltage VDDEL and the node B and can be turned on/off based on the second emission signal EM2 from the second emission line 154. A gate electrode of the fourth switch element T4 can be connected to the second emission line 154. The fifth switch element T5 can be connected between the node D and the node C and can be turned on/off based on the first emission signal EM1 from the first emission line 153. A gate electrode of the fifth switch element T5 can be connected to the first emission line 153. The storage capacitor Cst can be connected between the node A and the node C. The first pixel PXL can automatically compensate for a threshold voltage deviation of the driving element DT (hereinafter referred to as internal compensation) through a pixel operation based on the connection configuration.
An internal compensation operation can denote that a threshold voltage of the driving element DT is reflected in the gate-source voltage of the driving element DT in a pixel programming operation, so that compensation is performed and the driving current generated by the driving element DT is not affected by a threshold voltage variation of the driving element DT.

FIGS. 3 and 4 are diagrams illustrating variable frequency technology for determining a refresh rate based on the number of skip frames disposed between refresh frames, in a comparative example of the present disclosure.

Referring to FIGS. 3 and 4, a data refresh period implemented in the pixels of the display panel can vary based on refresh rate variation information input from the host system. The data refresh period can become longer as the refresh rate is lowered, and the number of skip frames can increase as the refresh rate is lowered. For example, the data refresh period can be 1/120 sec at 120 Hz, 1/60 sec at 60 Hz, 1/24 sec at 24 Hz, and 1 sec at 1 Hz. The number of skip frames disposed between two adjacent refresh frames can be zero at 120 Hz, one at 60 Hz, four at 24 Hz, and 119 at 1 Hz. The refresh frame can include a programming operation interval PP and an emission operation interval EP. As described above, the light emitting device can emit light in only the emission operation interval EP and may not emit light in the programming operation interval PP. The skip frame can include only the emission operation interval EP (e.g., no programming operation interval PP during a skip frame). In the emission operation interval EP, emission luminance of an immediately previous refresh frame can be maintained intact. A length of the emission operation interval EP of the skip frame can be longer than the emission operation interval EP of the refresh frame. Therefore, when comparing a luminance integral amount for a certain time, as the refresh rate is lowered (e.g., as the number of skip frames increases), the luminance integral amount can increase. For example, a luminance integral amount for a certain time can be higher at 60 Hz than at 120 Hz, higher at 24 Hz than at 60 Hz, and higher at 1 Hz than at 24 Hz. According to the comparative example described above, due to the luminance integral amount difference based on the refresh rate, flickers can be recognized at a time at which the refresh rate varies (e.g., such as when a display switches from a 120 Hz refresh rate to a 24 Hz refresh rate).

FIGS. 5 and 6 are diagrams illustrating variable frequency technology for determining a refresh rate based on the number of anode reset frames disposed between refresh frames, in an embodiment of the present disclosure.

Referring to FIGS. 5 and 6, a data refresh period implemented in the pixels of the display panel can vary based on the refresh rate variation information input from the host system. The data refresh period can become longer as the refresh rate is lowered, and the number of anode reset frames can increase as the refresh rate is lowered. For example, the data refresh period can be 1/120 sec at 120 Hz, 1/60 sec at 60 Hz, 1/24 sec at 24 Hz, and 1 sec at 1 Hz. The number of anode reset frames disposed between two adjacent refresh frames can be zero at 120 Hz, one at 60 Hz, four at 24 Hz, and 119 at 1 Hz.
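The relation between the refresh rate, the data refresh period, and the number of inserted frames is the same simple arithmetic in the comparative example (skip frames) and in the embodiment (anode reset frames). A minimal sketch follows, assuming the panel always runs frames at the 120 Hz base rate implied by the figures; the function name is illustrative.

```python
def refresh_schedule(refresh_rate_hz: float, base_rate_hz: float = 120.0):
    """Return (data refresh period in seconds, number of inserted frames,
    i.e. skip or anode reset frames, between two adjacent refresh frames),
    assuming every frame runs at the base rate."""
    period_s = 1.0 / refresh_rate_hz
    inserted_frames = round(base_rate_hz / refresh_rate_hz) - 1
    return period_s, inserted_frames

# Matches the figures: 0 at 120 Hz, 1 at 60 Hz, 4 at 24 Hz, 119 at 1 Hz.
for rate_hz in (120, 60, 24, 1):
    print(rate_hz, refresh_schedule(rate_hz))
```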
The refresh frame can include a programming operation interval PP and an emission operation interval EP. As described above, the light emitting device can emit light in only the emission operation interval EP and may not emit light in the programming operation interval PP. The anode reset frame can include an anode reset interval PP′ arranged before the emission operation interval EP. In the emission operation interval EP, emission luminance of an immediately previous refresh frame can be maintained intact. The anode reset interval PP′ can correspond to the programming operation interval PP of a refresh frame, and the light emitting device may not emit light when an anode reset voltage Vrst is applied to the anode electrode of the light emitting device in the anode reset interval PP′. A length of the anode reset interval PP′ may be equal to or approximately equal to that of the programming operation interval PP, so that a luminance integral amount of the anode reset frame is equal to or approximately equal to a luminance integral amount of the refresh frame. According to the embodiment described above, a luminance integral amount difference based on the refresh rate may not occur, and thus flickers caused by the luminance integral amount difference may be prevented.

FIG. 7 is a diagram illustrating a refresh operation performed on the pixel of FIG. 2. The refresh operation of FIG. 7 can be performed in a programming operation interval PP of a refresh frame.

Referring to FIG. 7, in the programming operation interval PP of the refresh frame, when the first scan signal SCAN1 and the second scan signal SCAN2 are input at an on level, the first to third switch elements T1 to T3 can be turned on. When the first to third switch elements T1 to T3 are turned on, the node A, the node B, and the node D can be refreshed to a new data voltage Vdata, and the node C can be initialized to the initialization voltage Vinit. Here, the new data voltage Vdata can be a flicker compensation data voltage. Such a refresh operation can be performed in a state where the fourth and fifth switch elements T4 and T5 are turned off. In performing the refresh operation, the light emitting device EL may not emit light.

FIG. 8 is a diagram illustrating a holding operation performed on the pixel of FIG. 2. The holding operation of FIG. 8 can be performed in an emission operation interval EP of a refresh frame.

Referring to FIG. 8, in the emission operation interval EP of the refresh frame, the first scan signal SCAN1 and the second scan signal SCAN2 are input at an off level, and the first emission signal EM1 and the second emission signal EM2 can be input at an on level. When the first to third switch elements T1 to T3 are turned off and the fourth and fifth switch elements T4 and T5 are turned on, the light emitting device EL can emit light with a driving current Iel. The driving current Iel can be determined based on the gate-source voltage of the driving element DT set in the programming operation interval PP.

FIG. 9 is a diagram illustrating an anode reset operation performed on the pixel of FIG. 2, which is performed between adjacent refresh frames. FIG. 10 is a diagram illustrating an embodiment of an anode reset voltage supplied to a pixel in the anode reset operation process of FIG. 9. The anode reset operation of FIG. 9 can be performed in an anode reset interval PP′ of an anode reset frame.
Referring to FIG. 9, when the second scan signal SCAN2 and the first emission signal EM1 are input at an on level in a state where the first scan signal SCAN1 and the second emission signal EM2 are input at an off level, an anode reset voltage Vrst can be supplied to the node C in the anode reset interval PP′ through the data line 14 and the third and fifth switch elements T3 and T5. The anode reset voltage Vrst supplied to the node C can be applied to the anode electrode of the light emitting device EL to turn off the light emitting device EL. To this end, the anode reset voltage Vrst can be a voltage which is lower than a turn-on threshold voltage of the light emitting device EL, as in FIG. 10. Furthermore, in performing the anode reset operation, the first, second, and fourth switch elements T1, T2, and T4 maintain an off state.

In the anode reset frame, an emission operation interval can be arranged next to or adjacent to the anode reset interval PP′. The emission operation interval of the anode reset frame can be equal to or substantially the same as the emission operation interval EP of the refresh frame. A luminance waveform of the anode reset frame can be equal to a luminance waveform of the refresh frame through the anode reset operation. That is, a luminance integral amount of the anode reset frame can be equal to a luminance integral amount of the refresh frame through the anode reset operation, and thus flickers caused by a luminance integral amount difference can be prevented.

According to the VRR technology of the present disclosure, as described above, flickers caused by a luminance integral amount difference can be prevented; however, a grayscale response delay can occur when the anode reset frame is changed to or transitions to the refresh frame, and a further problem can occur in that the degree of the grayscale response delay varies based on the data refresh period. A grayscale response delay amount difference based on the data refresh period can be another cause of a flicker. Hereinafter, a method for compensating for the grayscale response delay phenomenon and the flickers caused thereby is proposed.

FIGS. 11 and 12 are diagrams illustrating a grayscale response delay phenomenon occurring at a time Tt at which an anode reset frame of a first gray level (for example, a black gray level) is changed to or transitions to a refresh frame of a second gray level (for example, a white gray level). For example, a pixel transitions from black to a bright color, and the delayed step response corresponding to ΔLa and L1 can be noticed.

Referring to FIG. 11, in a refresh rate mode of 60 Hz, the black gray level can first increase up to a first gray level L1 that is lower than the white gray level at the change time Tt, and then can further increase up to the white gray level from the first gray level L1 in the refresh frame, based on a step-to-step scheme. That is, the black gray level can increase via a plurality of steps as shown in FIG. 11.

Referring to FIG. 12, in a refresh rate mode of 1 Hz, the black gray level can increase up to a second gray level L2 that is lower than the white gray level at the change time Tt, and then can further increase up to the white gray level from the second gray level L2 in the refresh frame, based on the step-to-step scheme. That is, the black gray level can increase via a plurality of steps as shown in FIG. 12. Here, the second gray level L2 can have a value which is less than that of the first gray level L1 shown in FIG. 11.
Therefore, a grayscale response delay amount can increase and become worse as a refresh rate is lowered. For example, the grayscale response delay amount can be ΔLa in the refresh rate mode of 60 Hz shown in FIG. 11, and ΔLb in the refresh rate mode of 1 Hz shown in FIG. 12. Here, ΔLb can be greater than ΔLa. This is because the data refresh period is 1/60 sec and the time for maintaining the black gray level through the anode reset frame is 1/120 sec in the refresh rate mode of 60 Hz, whereas the data refresh period is 1 sec and the time for maintaining the black gray level through the anode reset frame is 59/60 sec in the refresh rate mode of 1 Hz. Compared with the refresh rate mode of 60 Hz, in the refresh rate mode of 1 Hz, because the time for maintaining the black gray level through the anode reset frame is relatively longer, the grayscale response delay amount can be relatively greater. In other words, compared with the refresh rate mode of 60 Hz, in the refresh rate mode of 1 Hz, because the number of anode reset frames is relatively larger, the grayscale response delay amount can be relatively greater. Such a grayscale response delay amount difference can cause a luminance deviation in the refresh frame. Also, flickers can be recognized in the refresh frame due to this type of luminance deviation.

FIGS. 13 and 14 are diagrams illustrating a grayscale response delay phenomenon occurring at a time Tt at which an anode reset frame of a first gray level (for example, a white gray level) is changed to or transitions to a refresh frame of a second gray level (for example, a black gray level). For example, this is a situation where the pixel transitions from a bright color to black.

Referring to FIG. 13, in a refresh rate mode of 60 Hz, the white gray level can decrease down to a third gray level L3 that is higher than the black gray level at the change time Tt, and then can decrease further to finally arrive at the black gray level from the third gray level L3 in the refresh frame, based on the step-to-step scheme. That is, the white gray level can decrease via a plurality of steps as shown in FIG. 13.

Referring to FIG. 14, in a refresh rate mode of 1 Hz, the white gray level can decrease down to a fourth gray level L4 that is much higher than the black gray level at the change time Tt, and then can decrease further to the black gray level from the fourth gray level L4 in the refresh frame, based on the step-to-step scheme. That is, the white gray level can decrease via a plurality of steps as shown in FIG. 14. Here, the fourth gray level L4 can have a value which is greater than that of the third gray level L3 shown in FIG. 13.

Therefore, the grayscale response delay amount can increase as the refresh rate is lowered or the data refresh period is made longer. For example, the grayscale response delay amount can be ΔLc in the refresh rate mode of 60 Hz shown in FIG. 13, and ΔLd in the refresh rate mode of 1 Hz shown in FIG. 14. Here, ΔLd can be greater than ΔLc. This is because the data refresh period is 1/60 sec and the time for maintaining the white gray level through the anode reset frame is 1/120 sec in the refresh rate mode of 60 Hz, whereas the data refresh period is 1 sec and the time for maintaining the white gray level through the anode reset frame is 59/60 sec in the refresh rate mode of 1 Hz.
Compared with the refresh rate mode of 60 Hz, in the refresh rate mode of 1 Hz, because the time for maintaining the white gray level through the anode reset frame is relatively longer, the grayscale response delay amount can be relatively greater. In other words, compared with the refresh rate mode of 60 Hz, in the refresh rate mode of 1 Hz, because the number of anode reset frames is relatively larger, the grayscale response delay amount can be relatively greater. That is, slower refresh rates can cause poorer response times both when transitioning from black to a bright color and when transitioning from a bright color to black. Such a grayscale response delay amount difference can cause a luminance deviation in the refresh frame. Also, flickers can be recognized in the refresh frame due to the luminance deviation.

FIGS. 15A to 15C are diagrams illustrating an example where a level of a flicker compensation data voltage is differently set based on a refresh rate (or the number of anode reset frames), to compensate for flickers caused by a grayscale response delay amount difference. For example, the data voltage can be adjusted to have a type of underdamped waveform, in order to provide a pixel with an optimum transition when going from black to a bright color. FIGS. 15A to 15C show a concept which compensates for a grayscale response delay phenomenon occurring at a time Tt at which an anode reset frame of a first gray level (a black gray level) is changed to a refresh frame of a second gray level (a white gray level).

Referring to FIG. 15A, under a low refresh rate mode having a grayscale response delay amount of ΔLx, the data driver can output a flicker compensation data voltage having a first level to pixels in a refresh frame. The flicker compensation data voltage having the first level can be a compensation voltage for minimizing the grayscale response delay amount of ΔLx and can be a data voltage of a third gray level obtained by reflecting a first weight WT1 in a data voltage of the second gray level.

Referring to FIG. 15B, under a middle refresh rate mode having a grayscale response delay amount of ΔLy, the data driver can output a flicker compensation data voltage having a second level to the pixels in the refresh frame. The flicker compensation data voltage having the second level can be a compensation voltage for minimizing the grayscale response delay amount of ΔLy and can be a data voltage of a fourth gray level obtained by reflecting a second weight WT2 in the data voltage of the second gray level.

Referring to FIG. 15C, under a high refresh rate mode having a grayscale response delay amount of ΔLz, the data driver can output a flicker compensation data voltage having a third level to the pixels in the refresh frame. The flicker compensation data voltage having the third level can be a compensation voltage for minimizing the grayscale response delay amount of ΔLz and can be a data voltage of a fifth gray level obtained by reflecting a third weight WT3 in the data voltage of the second gray level.

In FIGS. 15A to 15C, ΔLx > ΔLy > ΔLz, and WT1 > WT2 > WT3. Also, each of the first to third weights WT1 to WT3 can be a rising weight, and each of the data voltages of the third to fifth gray levels can be higher than the data voltage of the second gray level, which is the target. Among the data voltages of the third to fifth gray levels, the data voltage of the third gray level can be the highest, the data voltage of the fourth gray level can be the second highest, and the data voltage of the fifth gray level can be the lowest.
For example, greater weights can be applied for compensation as the refresh rate becomes slower or the data refresh period becomes longer, and faster refresh rates use less compensation.

Furthermore, the compensation concept illustrated in FIGS. 15A to 15C can be applied to a concept which compensates for a grayscale response delay phenomenon occurring at a time Tt at which an anode reset frame of a white gray level is changed to a refresh frame of a black gray level. In this situation, ΔLx > ΔLy > ΔLz, and WT1 > WT2 > WT3. Also, each of the first to third weights WT1 to WT3 can be a falling weight, and each of the data voltages of the third to fifth gray levels can be lower than the data voltage of the second gray level, which is the target. Among the data voltages of the third to fifth gray levels, the data voltage of the third gray level can be the lowest, the data voltage of the fourth gray level can be the second lowest, and the data voltage of the fifth gray level can be the highest.

Furthermore, each of the first to third weights WT1 to WT3 described above can be a first global weight which is applied overall (e.g., by screen units) to data voltages of one screen in a corresponding refresh rate. The first global weight can be further adjusted by screen units based on a second global weight proportional to an average transition amount of data voltages of one screen which are to be applied to the display panel during the refresh frame, and thus a grayscale response delay phenomenon occurring at a time Tt at which the anode reset frame is changed to the refresh frame can be more effectively reduced. Moreover, the first global weight can be further adjusted by pixel row units based on a line weight proportional to a distance between a pixel row and the output terminals of the data driver, and thus the grayscale response delay phenomenon occurring at the time Tt at which the anode reset frame is changed to the refresh frame can be more effectively reduced.

FIG. 16 is a table illustrating a detailed example where weights reflected in a data voltage of an input gray level are differently set based on a refresh rate to adjust a level of a flicker compensation data voltage.

Referring to FIG. 16, a low refresh rate mode can correspond to a refresh rate of less than 30 Hz, a middle refresh rate mode can correspond to a refresh rate of 30 Hz to less than 60 Hz, and a high refresh rate mode can correspond to a refresh rate of 60 Hz to less than 120 Hz. In FIG. 16, a situation where an anode reset frame of a black gray level is changed to or transitions to a refresh frame of a white gray level can be expressed as gray rising or a gray rising type of situation, and a situation where an anode reset frame of the white gray level is changed to or transitions to a refresh frame of the black gray level can be expressed as gray falling or a gray falling type of situation. In FIG. 16, an output can be a flicker compensation data voltage, and an input can be a target data voltage of the refresh frame. The target data voltage can be the data voltage of the white gray level in gray rising and the data voltage of the black gray level in gray falling. In FIG. 16, a digit multiplied by, added to, or subtracted from an input can represent a rising weight or a falling weight.
In the low refresh rate mode, a flicker compensation data voltage in gray rising can be "(data voltage of white gray level * 1.3) + 10," and a flicker compensation data voltage in gray falling can be "(data voltage of black gray level * 0.9) − 5." In the middle refresh rate mode, a flicker compensation data voltage in gray rising can be "(data voltage of white gray level * 1.2) + 7," and a flicker compensation data voltage in gray falling can be "(data voltage of black gray level * 0.95) − 3." In the high refresh rate mode, a flicker compensation data voltage in gray rising can be "(data voltage of white gray level * 1.1) + 5," and a flicker compensation data voltage in gray falling can be "(data voltage of black gray level * 1.0) − 1."

As described above, the flicker compensation data voltage in gray rising can be the highest in the low refresh rate mode, the second highest in the middle refresh rate mode, and the lowest in the high refresh rate mode. Also, the flicker compensation data voltage in gray falling can be the lowest in the low refresh rate mode, the second lowest in the middle refresh rate mode, and the highest in the high refresh rate mode. The arithmetic operation of generating the flicker compensation data voltage described above can be performed in a digital mode by the timing controller 11.
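To make the FIG. 16 arithmetic concrete, the sketch below applies the mode-dependent gain and offset to a target data voltage. The (gain, offset) pairs and mode boundaries come from the example values above; the function and table names are illustrative, not part of the disclosure.

```python
# Example values from the FIG. 16 table; names are illustrative.
FLICKER_COMP = {
    "low":    {"rising": (1.3, 10), "falling": (0.90, -5)},   # < 30 Hz
    "middle": {"rising": (1.2, 7),  "falling": (0.95, -3)},   # 30 to < 60 Hz
    "high":   {"rising": (1.1, 5),  "falling": (1.00, -1)},   # 60 to < 120 Hz
}

def refresh_mode(rate_hz: float) -> str:
    if rate_hz < 30:
        return "low"
    return "middle" if rate_hz < 60 else "high"

def flicker_compensated_voltage(target_v: float, rate_hz: float,
                                rising: bool) -> float:
    """Reflect the mode-dependent weight in the target data voltage."""
    key = "rising" if rising else "falling"
    gain, offset = FLICKER_COMP[refresh_mode(rate_hz)][key]
    return target_v * gain + offset

# Gray rising at 24 Hz (low mode): (white-level voltage * 1.3) + 10.
print(flicker_compensated_voltage(100.0, 24, rising=True))  # 140.0
```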
FIG. 17 is a diagram illustrating a configuration of the timing controller 11 for the data adjustment operation of FIG. 16.

Referring to FIG. 17, the timing controller 11 can include a data receiver 111, a refresh rate setting unit 112 (e.g., a refresh rate setting circuit), a refresh frame buffer 113, a data transition extractor 114, an anode reset frame counter 115, a pixel row position extractor 116, a weight generator 117, a data modulator 118, a data transferor 119, and a control signal generator 120.

The data receiver 111 can be connected to the host system through an internal interface and can receive video data DATA and timing signals DE, Vsync, and Hsync from the host system. Also, the data receiver 111 can receive the refresh rate variation information from the host system. The refresh rate setting unit 112 can adjust the number of anode reset frames disposed between adjacent refresh frames based on the refresh rate variation information to vary a refresh period of the digital video data DATA. As the refresh rate is lowered or becomes slower, the number of anode reset frames disposed between adjacent refresh frames can be increased, and the data refresh period can become longer. The refresh frame buffer 113 can store image data (previous frame data) of one screen at every refresh frame. The data transition extractor 114 can calculate an average transition amount of image data (current frame data) of one screen which is to be applied to the display panel during the refresh frame. The data transition extractor 114 can compare the current frame data with the previous frame data stored in the refresh frame buffer 113 to calculate the average transition amount of the current frame data. The anode reset frame counter 115 can count the number of anode reset frames based on the refresh rate. The pixel row position extractor 116 can analyze the position of the pixel row to which corresponding image data is applied in the current frame data, and thus can extract a pixel row position of the image data.

The weight generator 117 can generate, by screen units, a weight corresponding to the current frame data based on the number of anode reset frames. The weight can increase as the number of anode reset frames increases. The weight generator 117 can further adjust the generated weight by screen units, based on a global weight proportional to the average transition amount of the current frame data. The global weight can increase as the average transition amount of the current frame data increases. The weight generator 117 can further adjust the generated weight by pixel row units, based on a line weight based on the pixel row position of the image data. The line weight can increase as the pixel row position of the image data is located farther away from the data driver (e.g., to compensate for a voltage drop due to a long wire length). The data modulator 118 can reflect the weight, generated by the weight generator 117, in the current frame data to modulate the current frame data. The data transferor 119 can transfer the modulated current frame data DATAm to the data driver based on the refresh frame. The control signal generator 120 can generate and output the control signals DDC and GDC for controlling an operation timing of the data driver and an operation timing of the gate driver based on the timing signals DE, Vsync, and Hsync.
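Putting the FIG. 17 pipeline together, the following sketch modulates one refresh frame of data using the three weight sources described above: the anode reset frame count, the average data transition of the screen, and the row distance from the data driver outputs. The proportionality constants and the multiplicative composition of the weights are illustrative assumptions, not values from the disclosure.

```python
import numpy as np

def modulate_refresh_frame(curr: np.ndarray, prev: np.ndarray,
                           n_anode_reset_frames: int,
                           k_frames: float = 0.002,
                           k_transition: float = 0.0005,
                           k_line: float = 0.0001) -> np.ndarray:
    """curr, prev: (rows, cols) arrays holding the current and previous
    refresh frame data. Returns weight-modulated current frame data."""
    # Per-screen weight: grows with the number of anode reset frames.
    w_screen = 1.0 + k_frames * n_anode_reset_frames
    # Per-screen global weight: grows with the average data transition
    # between the previous and current refresh frames.
    avg_transition = float(np.mean(np.abs(curr.astype(float) - prev.astype(float))))
    w_global = 1.0 + k_transition * avg_transition
    # Per-row line weight: grows with the row's distance from the data
    # driver outputs (e.g., to offset the voltage drop on far rows).
    row_dist = np.arange(curr.shape[0], dtype=float)[:, None]
    w_line = 1.0 + k_line * row_dist
    return np.clip(curr * (w_screen * w_global * w_line), 0, 255)
```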
FIG. 18 is a flowchart illustrating an example where a data refresh period varies based on a peak luminance of a whole image. FIGS. 19A and 19B are diagrams illustrating an example where a temporal length of a holding period arranged between adjacent refresh frames varies based on a peak luminance of a whole image. FIGS. 20A and 20B are flowcharts illustrating another example where a temporal length of a holding period arranged between adjacent refresh frames varies based on a peak luminance of a whole image.

As illustrated in FIG. 18, when image data of one frame is input, a timing controller according to the present embodiment can calculate a peak luminance of the image data (S181 and S182). The peak luminance can be the luminance value of the brightest image data within the image data of one frame. The timing controller can adjust a refresh rate at which the image data of one frame is to be displayed, based on the calculated peak luminance (S183). Because the eyes of a user more easily perceive flickers in a dark image than in a bright image, the timing controller can increase the refresh rate for a dark image (i.e., peak luminance < reference value) to decrease flickers (S185). Also, the timing controller can reduce the refresh rate for a bright image (i.e., peak luminance ≥ reference value) to decrease power consumption (S184). The timing controller can output the image data to a data driver, based on the adjusted refresh rate (S186).

Referring to FIGS. 19A to 20B, a holding period of an image can vary based on the adjusted refresh rate. As in FIGS. 19A and 20A, when a bright image A is being displayed, where the peak luminance of the whole image maintained in the display panel is greater than or equal to the reference value, a holding period between a first refresh frame and a second refresh frame can have a first temporal length. On the other hand, as in FIGS. 19B and 20B, when a dark image B is being displayed, where the peak luminance of the whole image maintained in the display panel is less than the reference value, the holding period between the first refresh frame and the second refresh frame can have a second temporal length which is less than the first temporal length. As the temporal length of the holding period increases, the number of skip frames or anode reset frames included in the holding period may increase. During the holding period, the screen of the display panel is not updated with a new image.

The timing controller and the data driver can transfer image data therebetween through an interface circuit. The interface circuit can include a TX circuit included in the timing controller and an RX circuit included in the data driver. The TX circuit can transfer first image data, corresponding to a data voltage having a first gray level, to the RX circuit in the first refresh frame and can transfer second image data, corresponding to a data voltage having a second gray level, to the RX circuit in the second refresh frame. As in FIGS. 19A to 20B, because the interface circuit is turned off during the holding period, the effect of reducing power consumption can be greater in the bright image A, where the holding period is relatively long. Also, in the dark image B, because the interface circuit is turned off during the holding period, power consumption can be reduced.

FIG. 21 is a flowchart illustrating an example where a data refresh period varies based on a size of a continuous low grayscale area in a whole area.

As in FIG. 21, when image data of one frame is input, the timing controller according to the present embodiment can calculate the size of the continuous low grayscale area in the image data (S211 and S212). The continuous low grayscale area may denote an area which is less than or equal to a predetermined brightness (for example, a black gray level). The timing controller can adjust a refresh rate at which the image data of one frame is to be displayed, based on the calculated size of the low grayscale area (S213). Because the eyes of a user more easily perceive flickers in a dark image than in a bright image, the timing controller can increase the refresh rate for a dark image (i.e., size of low grayscale area ≥ reference value) to decrease flickers (S215). Also, the timing controller can reduce the refresh rate for a bright image (i.e., size of low grayscale area < reference value) to decrease power consumption (S214). The timing controller can output the image data to the data driver, based on the adjusted refresh rate (S216).

As in FIGS. 19A and 20A, when a bright image A is being displayed, where the size of the low grayscale area of the whole image maintained in the display panel is less than the reference value, a holding period between a first refresh frame and a second refresh frame can have a first temporal length. On the other hand, as in FIGS. 19B and 20B, when a dark image B is being displayed, where the size of the low grayscale area of the whole image maintained in the display panel is greater than or equal to the reference value, the holding period between the first refresh frame and the second refresh frame can have a second temporal length which is less than the first temporal length. As the temporal length of the holding period increases, the number of skip frames or anode reset frames included in the holding period may increase. During the holding period, the screen of the display panel is not updated with a new image.

The timing controller and the data driver can transfer image data therebetween through an interface circuit. The interface circuit can include a TX circuit included in the timing controller and an RX circuit included in the data driver. The TX circuit can transfer first image data, corresponding to a data voltage having a first gray level, to the RX circuit in the first refresh frame and can transfer second image data, corresponding to a data voltage having a second gray level, to the RX circuit in the second refresh frame.
As in FIGS. 19A to 20B, because the interface circuit is turned off during the holding period, the effect of reducing power consumption can be greater in the bright image A, where the holding period is relatively long. Also, in the dark image B, because the interface circuit is turned off during the holding period, power consumption can be reduced.

FIG. 22 is a flowchart illustrating an example where a data refresh period varies based on a viewing distance to a user watching a display panel.

As in FIG. 22, when image data of one frame is input, a timing controller according to the present embodiment can calculate a viewing distance to a user watching the display panel, based on a user image obtained from a camera (S221 and S222). The camera can be previously installed in the display panel. The timing controller can adjust a refresh rate at which the image data of one frame is to be displayed, based on the calculated viewing distance to the user (S223). Because the eyes of the user perceive flickers more easily when the user watches an image at a close position than when the user watches an image at a remote position, the timing controller can increase the refresh rate at a close viewing distance (i.e., viewing distance < reference value) to decrease flickers (S225). Also, the timing controller can reduce the refresh rate at a remote viewing distance (i.e., viewing distance ≥ reference value) to decrease power consumption (S224). The timing controller can output the image data to the data driver, based on the adjusted refresh rate (S226).

As in FIGS. 19A and 20A, when the viewing distance to a user watching an image A is greater than or equal to the reference value, a holding period between a first refresh frame and a second refresh frame can have a first temporal length. On the other hand, as in FIGS. 19B and 20B, when the viewing distance to a user watching an image B is less than the reference value, the holding period between the first refresh frame and the second refresh frame can have a second temporal length which is less than the first temporal length. As the temporal length of the holding period increases, the number of skip frames or anode reset frames included in the holding period may increase. During the holding period, the screen of the display panel is not updated with a new image.

The timing controller and the data driver can transfer image data therebetween through an interface circuit. The interface circuit can include a TX circuit included in the timing controller and an RX circuit included in the data driver. The TX circuit can transfer first image data, corresponding to a data voltage having a first gray level, to the RX circuit in the first refresh frame and can transfer second image data, corresponding to a data voltage having a second gray level, to the RX circuit in the second refresh frame. As in FIGS. 19A to 20B, because the interface circuit is turned off during the holding period, the effect of reducing power consumption can be greater at a remote viewing distance, where the holding period is relatively long. Also, because the interface circuit is turned off during the holding period at a close viewing distance, power consumption can be reduced.

The electroluminescent display apparatus according to the present disclosure can be based on the VRR technology where a data refresh period varies based on an attribute of an input image.
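The embodiments of FIGS. 18, 21, and 22 share one decision shape: compare an attribute of the input image (peak luminance, size of the continuous low grayscale area, or viewing distance) against a reference value, raise the refresh rate when flicker would be most visible, and lower it otherwise to save power. A minimal sketch of that shared shape follows; the rate values, thresholds, and names are illustrative assumptions.

```python
# Shared decision shape of FIGS. 18, 21, and 22; all numbers are
# illustrative assumptions.
def choose_refresh_rate(attribute: float, reference: float,
                        flicker_prone_when_low: bool,
                        high_rate_hz: float = 120.0,
                        low_rate_hz: float = 24.0) -> float:
    """Return a higher refresh rate when flicker would be most visible,
    and a lower one otherwise to decrease power consumption."""
    flicker_prone = (attribute < reference if flicker_prone_when_low
                     else attribute >= reference)
    return high_rate_hz if flicker_prone else low_rate_hz

rate_fig18 = choose_refresh_rate(40.0, 128.0, True)   # dark image: low peak luminance
rate_fig21 = choose_refresh_rate(5e5, 1e5, False)     # large low-grayscale area
rate_fig22 = choose_refresh_rate(0.4, 1.0, True)      # close viewing distance
```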
The electroluminescent display apparatus according to the present disclosure can adjust the level of a flicker compensation data voltage based on the data refresh period to prevent flickers from occurring at a time at which the refresh rate varies in an anode reset frame, thereby increasing display quality. The electroluminescent display apparatus according to the present disclosure may vary the data refresh period on the basis of a peak luminance of one screen, a size of a low grayscale area, or a viewing distance to a user to decrease flickers, and may turn off the interface circuit between the timing controller and the data driver in a holding period between adjacent refresh frames, thereby reducing power consumption.

The effects according to the present disclosure are not limited to the above examples, and other various effects can be included in the specification. While the present disclosure has been particularly shown and described with reference to exemplary embodiments thereof, it will be understood by those of ordinary skill in the art that various changes in form and details may be made therein without departing from the spirit and scope of the present disclosure as defined by the following claims. | 49,546 |
11862111 | DETAILED DESCRIPTION

Example embodiments will be described more fully hereinafter with reference to the accompanying drawings. As those skilled in the art would realize, the described embodiments may be modified in various different ways, without departing from the spirit or scope of the present invention. Accordingly, the drawings and description are to be regarded as illustrative in nature and not restrictive. Like reference numerals designate like elements throughout the specification. In the flowcharts described with reference to the drawings in this specification, the operation order may be changed, various operations may be merged, certain operations may be divided, and certain operations may not be performed. In addition, a singular form may be intended to include a plural form as well, unless an explicit expression such as "one" or "single" is used. Terms including ordinal numbers such as first, second, and the like will be used only to describe various constituent elements, and are not to be interpreted as limiting these constituent elements. These terms may be used for the purpose of distinguishing one constituent element from other constituent elements.

FIG. 1 illustrates a block diagram of a display device according to an example embodiment.

Referring to FIG. 1, a display device 100 according to an example embodiment may include a display driving circuit 110 and a display panel 120. In some example embodiments, the display device 100 may further include a power supply circuit, such as a DC/DC converter, that provides a driving voltage to the display driving circuit 110 and the display panel 120.

The display panel 120 may include a plurality of pixels PX for displaying an image. Each pixel PX may be connected to a corresponding source line SL among a plurality of source lines and a corresponding gate line GL among a plurality of gate lines. Each pixel PX may receive a data signal from the source line SL when a gate signal is supplied to the gate line GL. Each pixel PX may emit light corresponding to an inputted data signal. The plurality of pixels PX may display an image in units of one frame.

When the display device 100 is an organic light emitting display device, each of the pixels PX may include a plurality of transistors, including a driving transistor, and an organic light emitting diode. The driving transistor included in the pixel PX may supply a current corresponding to the data signal to the organic light emitting diode, so that the organic light emitting diode may emit light with a luminance that corresponds to the inputted data signal. When the display device 100 is a liquid crystal display device, each of the pixels PX may include a switching transistor and a liquid crystal capacitor. The pixel PX may control transmittance of a liquid crystal in response to the data signal so that light of a luminance that corresponds to the inputted data signal may be supplied to the outside.

Although one pixel PX is illustrated as being connected to one source line SL and one gate line GL in FIG. 1, the connection structure of the signal lines of the pixel PX of the display device according to example embodiments is not limited thereto. For example, various signal lines may be additionally connected to correspond to the circuit structure of the pixel PX. In example embodiments, the pixel PX may be implemented in various forms.

The display driving circuit 110 may include a gate driver 111, a source driver 112, a gamma voltage generator 113, and a driving controller 114.
Some or all of the gate driver 111, the source driver 112, the gamma voltage generator 113, and the driving controller 114 may be implemented on the same semiconductor die, chip, or module, or each of them may be implemented with a separate semiconductor die, chip, or module. In some example embodiments, the gate driver 111 and/or the source driver 112 may be implemented on the same substrate as the display panel 120. In this case, the gate driver 111 and/or the source driver 112 may be disposed on the periphery of the display panel 120.

The gate driver 111 may provide a plurality of gate signals (G1, G2, . . . , Gh) to the display panel 120. The plurality of gate signals (G1, G2, . . . , Gh) may be pulse signals having an enable level and a disable level. The plurality of gate signals (G1, G2, . . . , Gh) may be applied to the plurality of gate lines GL. When a gate signal of the enable level is applied to the gate line GL connected to the pixel PX, the data signal applied to the source line SL connected to the pixel PX may be transmitted to the pixel PX.

The source driver 112 may receive data DATA in the form of a digital signal from the driving controller 114, and may convert the data DATA into data signals (S1, S2, . . . , Sk) in the form of an analog signal. Here, the data DATA may include grayscale information corresponding to each pixel PX for displaying image data IS on the display panel 120. The source driver 112 may transmit the plurality of data signals (S1, S2, . . . , Sk) to the display panel 120 according to a source driver control signal CONT2 provided from the driving controller 114. The source driver 112 may be referred to as a data driver.

The gamma voltage generator 113 may generate a plurality of gamma voltages (VG1, VG2, . . . , VGi) to provide them to the source driver 112. The plurality of gamma voltages (VG1, VG2, . . . , VGi) may have i different voltage levels. The plurality of gamma voltages (VG1, VG2, . . . , VGi) may be used by the source driver 112 to generate an analog signal corresponding to the data DATA. In example embodiments, the source driver 112 may generate a data signal through a method of interpolating the plurality of gamma voltages (VG1, VG2, . . . , VGi) (hereinafter referred to as an interpolation scheme). For example, the gamma voltage generator 113 may provide 64 gamma voltages to the source driver 112. In order to convert the data DATA for expressing 1024 (2^10) grayscales into a data signal, the source driver 112 may use high-order bit (MSB 6-bit) data of the data DATA to select two gamma voltages among the 64 (2^6) gamma voltages, and may use low-order bit (LSB 4-bit) data to divide the voltage range between the two gamma voltages selected by using the high-order bit data into 16 (2^4) steps to output them. In this interpolation scheme, a voltage difference may occur between the actual voltage outputted according to each low-order bit data value and the ideal voltage to be outputted, due to integral nonlinearity (INL).
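A minimal sketch of the interpolation scheme just described, assuming, as in the example, 10-bit input data, 64 gamma voltages, and a linear division of the selected interval into 16 steps; a real driver's output divider deviates from this ideal line, which is the INL discussed above.

```python
# MSB 6 bits select two adjacent gamma voltages; LSB 4 bits divide the
# selected interval into 16 ideal, linear steps.
def interpolated_data_voltage(code: int, gamma_v: list[float]) -> float:
    """code: 10-bit input (0..1023); gamma_v: the 64 gamma voltages."""
    msb = code >> 4                              # upper 6 bits: interval
    lsb = code & 0xF                             # lower 4 bits: step
    v_lo = gamma_v[msb]
    v_hi = gamma_v[min(msb + 1, len(gamma_v) - 1)]
    return v_lo + (v_hi - v_lo) * (lsb / 16.0)
```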
The driving controller 114 may receive the image data IS and a driving control signal CTRL from a host device, and may control the gate driver 111, the source driver 112, and the gamma voltage generator 113. Here, the host device may be a computing device or system that controls the display device 100 to display an image desired by a user on the display panel 120. The driving control signal CTRL provided from the host device may include control instructions and predetermined data for controlling the gate driver 111, the source driver 112, and the gamma voltage generator 113.

For example, the driving control signal CTRL may include an instruction (hereinafter referred to as a 'brightness control instruction') for controlling brightness of the display device 100, an instruction (hereinafter referred to as an 'operation mode control instruction') for instructing an operation mode of the display device 100, and data (hereinafter referred to as 'temperature data') for indicating a temperature of the display device 100 or a temperature around the display device 100. For example, the display device 100 or an external device may include a thermometer to obtain the temperature data.

The driving controller 114 may display the same image data IS with different luminance on the display panel 120 according to the brightness control instruction. For example, when the brightness control instruction indicates a first brightness, the driving controller 114 may display 243 grayscale image data IS with a first luminance, and when the brightness control instruction indicates a second brightness, the driving controller 114 may display the 243 grayscale image data IS with a second luminance higher than the first luminance.

The driving controller 114 may control the gate driver 111, the source driver 112, and the gamma voltage generator 113 based on the driving control signal CTRL. For example, the driving control signal CTRL may include a horizontal synchronization signal Hsync, a vertical synchronization signal Vsync, a main clock signal MCLK, and a data enable signal DE. The driving controller 114 may divide the image data IS in units of one frame based on the vertical synchronization signal Vsync, and may divide the image data IS in units of the gate lines GL based on the horizontal synchronization signal Hsync to generate the data DATA. The driving controller 114 may transmit a gate driver control signal CONT1 and the source driver control signal CONT2 to the gate driver 111 and the source driver 112 to perform, for example, control to synchronize operations of the source driver 112 and the gate driver 111. The driving controller 114 may transmit a gamma voltage generation control signal CONT3 to the gamma voltage generator 113 to control an operation of the gamma voltage generator 113. The driving controller 114 may also control the gate driver 111, the source driver 112, and the gamma voltage generator 113 based on a control instruction that is self-generated, independently from or in addition to the driving control signal CTRL received from the host device.

The driving controller 114 may include an offset compensation circuit 115 for compensating for an offset of the source driver 112. The offset compensation circuit 115 may generate gamma image data by converting a gamma characteristic of the image data IS. In example embodiments, the offset compensation circuit 115 may compensate the image data IS, or the gamma image data in which the image data IS is gamma-corrected, by using a compensation value corresponding to the offset. For example, the offset compensation circuit 115 may add or subtract a compensation value to or from the image data IS or the gamma image data. Hereinafter, the offset compensation circuit 115 will be described as compensating the gamma image data.

In some example embodiments, the offset compensation circuit 115 may compensate the gamma image data by selecting at least one of the compensation values stored in a look-up table (LUT) format. The LUT may include a plurality of compensation values corresponding to a plurality of low-order bit data values of the gamma image data.
In example embodiments, the LUT may include a plurality of groups, each group may include a plurality of taps, and each tap may include a plurality of compensation values corresponding to a plurality of low-order bit data values.

Specifically, the LUT may include groups in which a plurality of compensation values corresponding to a plurality of low-order bit data values of the gamma image data are divided according to the brightness of the display device 100. For example, the LUT may include a first group including a plurality of compensation values corresponding to a first brightness and a second group including a plurality of compensation values corresponding to a second brightness, and in this case, the plurality of compensation values included in each group may correspond to a plurality of low-order bit data values of the gamma image data. The LUT may include groups in which a plurality of compensation values corresponding to a plurality of low-order bit data values of the gamma image data are divided for each operation mode of the display device 100. For example, the LUT may include a first group including a plurality of compensation values corresponding to a first operation mode and a second group including a plurality of compensation values corresponding to a second operation mode, and in this case, the plurality of compensation values included in each group may correspond to a plurality of low-order bit data values of the gamma image data. That is, within different groups divided by brightness or operation mode, the compensation values corresponding to the same low-order bit data values of the gamma image data may be different from each other.

The LUT may include a plurality of taps in which a plurality of compensation values corresponding to a plurality of low-order bit data values of the gamma image data are divided for each high-order bit data value. For example, the LUT may include a first tap including a plurality of compensation values corresponding to a first value and a second tap including a plurality of compensation values corresponding to a second value, and in this case, the plurality of compensation values included in each tap may correspond to a plurality of low-order bit data values of the gamma image data. That is, in the case of different taps, the compensation values corresponding to the same low-order bit data value may be different from each other.
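The nesting just described, with groups per brightness or operation mode, taps per high-order code, and one compensation value per low-order code, can be pictured as follows. All keys and values are illustrative placeholders, not values from the disclosure.

```python
# Illustrative layout only: group -> tap (high-order code) -> low-order
# code -> signed compensation value.
LUT = {
    "first_brightness": {
        0b001000: {0b0000: -2, 0b0001: -1, 0b1111: 3},   # TAP0
        0b100000: {0b0000: -1, 0b0001:  0, 0b1111: 2},   # TAP1
    },
    "second_brightness": {
        0b001000: {0b0000: -3, 0b0001: -2, 0b1111: 1},
        0b100000: {0b0000: -2, 0b0001: -1, 0b1111: 1},
    },
}
value = LUT["first_brightness"][0b001000][0b0001]   # group, tap, low bits
```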
The offset compensation circuit 115 may correct the plurality of compensation values. In example embodiments, the offset compensation circuit 115 may interpolate the compensation values stored in the LUT. The offset compensation circuit 115 may calculate a plurality of compensation values corresponding to a plurality of low-order bit data values in a third brightness, corresponding to an intermediate brightness between a first brightness and a second brightness of the display device 100 not included in the LUT, by interpolating the plurality of compensation values corresponding to the plurality of low-order bit data values in the first brightness of the display device 100 included in the LUT and the plurality of compensation values corresponding to the plurality of low-order bit data values in the second brightness of the display device 100 included in the LUT.

The offset compensation circuit 115 may calculate a plurality of compensation values corresponding to a plurality of low-order bit data values when a high-order bit data value not included in the LUT is a third value, corresponding to an intermediate value between a first value and a second value, by interpolating the plurality of compensation values corresponding to the plurality of low-order bit data values when the high-order bit data value included in the LUT is the first value and the plurality of compensation values corresponding to the plurality of low-order bit data values when the high-order bit data value included in the LUT is the second value.

In example embodiments, the offset compensation circuit 115 may compensate the compensation values included in the LUT by using the temperature data. The offset compensation circuit 115 may apply, to the compensation values included in the LUT, a gain value and an offset value according to a temperature value determined by using the temperature data. For example, the offset compensation circuit 115 may multiply the compensation values included in the LUT by a gain value corresponding to the temperature value, and may subtract or add an offset value corresponding to the temperature value. In some example embodiments, the offset compensation circuit 115 may compensate the compensation values by selecting one or more of a plurality of gain values and a plurality of offset values stored in the LUT.

In example embodiments, the LUT may include a plurality of gain values and a plurality of offset values corresponding to a plurality of temperature values. The LUT may include the plurality of gain values and the plurality of offset values corresponding to the temperature values by classifying them according to the brightness of the display device 100. For example, when the brightnesses of the display device 100 are different, the plurality of gain values and the plurality of offset values corresponding to the same temperature value may be different from each other. The LUT may include the plurality of gain values and the plurality of offset values corresponding to the temperature values by classifying them for each operation mode of the display device 100. For example, when the operation modes of the display device 100 are different, the plurality of gain values and the plurality of offset values corresponding to the same temperature value may be different from each other. The LUT may include the plurality of gain values and the plurality of offset values corresponding to the temperature values by classifying them for each high-order bit data value. For example, when the high-order bit data values of the gamma image data are different, the plurality of gain values and the plurality of offset values corresponding to the same temperature value may be different from each other.

The offset compensation circuit 115 may interpolate the plurality of gain values and the plurality of offset values stored in the LUT.
The offset compensation circuit 115 may calculate a plurality of gain values and a plurality of offset values corresponding to a plurality of temperature values in a third brightness, corresponding to an intermediate brightness between a first brightness and a second brightness of the display device 100 that are not included in the LUT, by interpolating the plurality of gain values and the plurality of offset values corresponding to the plurality of temperature values in the first brightness of the display device 100 included in the LUT and the plurality of gain values and the plurality of offset values corresponding to the plurality of temperature values in the second brightness of the display device 100 included in the LUT.

The offset compensation circuit 115 may calculate a plurality of gain values and a plurality of offset values corresponding to a plurality of temperature values when a high-order bit data value not included in the LUT is a third value, corresponding to an intermediate value between a first value and a second value, by interpolating the plurality of gain values and the plurality of offset values corresponding to the plurality of temperature values when the high-order bit data value included in the LUT is the first value and the plurality of gain values and the plurality of offset values corresponding to the plurality of temperature values when the high-order bit data value included in the LUT is the second value.

The offset compensation circuit 115 may calculate a plurality of gain values and a plurality of offset values at a third temperature value, corresponding to an intermediate temperature value between a first temperature value and a second temperature value not included in the LUT, by interpolating the plurality of gain values and the plurality of offset values at the first temperature value included in the LUT and the plurality of gain values and the plurality of offset values at the second temperature value included in the LUT.

The display driving circuit 110 of example embodiments may compensate the gamma image data, so that the INL of the source driver 112 may be reduced. In addition, because the display driving circuit 110 according to example embodiments may determine the degree of compensation of the gamma image data according to the operation mode and temperature of the display device 100, the INL of the source driver 112 may be further reduced. Therefore, according to the display driving circuit 110 according to example embodiments, a color coordinate error may be reduced. In addition, because the display driving circuit 110 according to example embodiments interpolates and uses the compensation values, the compensation values used for compensation may be stored even with a small storage capacity.
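The interpolations above all share one linear form: values known at two LUT anchor points (two brightness levels, two tap codes, or two temperature values) are blended for an intermediate query point. A minimal sketch, with illustrative anchor values:

```python
def lerp_list(x: float, x0: float, x1: float,
              v0: list[float], v1: list[float]) -> list[float]:
    """Blend two lists of LUT values known at anchors x0 and x1 for an
    intermediate query point x (x0 <= x <= x1)."""
    t = (x - x0) / (x1 - x0)
    return [a + (b - a) * t for a, b in zip(v0, v1)]

# E.g., illustrative (gain, offset) pairs at two anchor temperatures,
# queried at an intermediate temperature:
gain_offset_mid = lerp_list(52.0, 50.0, 62.0, [1.10, -3.0], [1.25, -6.0])
```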
FIG. 2 illustrates a block diagram of a semiconductor device according to an example embodiment.

Referring to FIG. 2, a semiconductor device 200 may include a gamma converter 201, a compensation circuit 202, a dithering circuit 203, and a storage circuit 204. The semiconductor device 200 may be the offset compensation circuit 115 of FIG. 1.

The gamma converter 201 may gamma-correct the image data IS. For example, the gamma converter 201 may convert the image data IS to fit a specific gamma curve. In example embodiments, the gamma converter 201 may receive an n-bit unit of image data IS (n is a natural number greater than or equal to 2, for example, n = 10), convert the gamma characteristic of the image data IS to fit a gamma 2.2 curve, and output an m-bit unit of gamma image data GI (m is a natural number greater than or equal to 2, m > n, for example, m = 14) in which the gamma characteristic is converted. The gamma converter 201 may perform the gamma conversion by using an LUT for gamma conversion or by using an equation for gamma conversion. For example, the LUT for gamma conversion may include data mapped for each grayscale. The gamma converter 201 may search the LUT for data corresponding to the input image data IS, and may output the searched data as the gamma image data GI. Here, the number of unit bits of the gamma image data GI may be larger than the number of unit bits of the image data IS, which may allow for increased precision of gamma conversion. For example, the gamma converter 201 may output the gamma image data GI of the m-bit unit with respect to the image data IS of the n-bit unit.
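A minimal sketch of the equation-based variant, assuming the gamma 2.2 curve and the example widths n = 10 and m = 14; the normalization and rounding choices are illustrative. The LUT-based variant simply precomputes the same mapping for every input grayscale.

```python
# Equation-based gamma 2.2 conversion from n-bit input codes to m-bit
# gamma image data (n = 10, m = 14 as in the example above).
def gamma_convert(code: int, n: int = 10, m: int = 14,
                  gamma: float = 2.2) -> int:
    max_in = (1 << n) - 1
    max_out = (1 << m) - 1
    return round(((code / max_in) ** gamma) * max_out)

# The LUT-based variant precomputes the same mapping per grayscale:
GAMMA_LUT = [gamma_convert(c) for c in range(1 << 10)]
```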
When the temperature value of the temperature data TEMP differs from the temperature values LOW, MID, and HIGH of the LUT, the compensation circuit 202 may interpolate the plurality of gain values and the plurality of offset values by using the two LUT temperature values between which the temperature value of the temperature data TEMP is positioned. For example, when the temperature value of the temperature data TEMP is 52 degrees, and the temperature values of LOW, MID, and HIGH of the LUT are 35 degrees, 50 degrees, and 62 degrees, respectively, the temperature value of the temperature data TEMP is positioned between MID and HIGH. The compensation circuit 202 may then calculate a plurality of gain values and a plurality of offset values corresponding to 52 degrees by interpolating between the plurality of gain values and the plurality of offset values corresponding to MID and those corresponding to HIGH.

The compensation circuit 202 may compensate the LUT by using the plurality of gain values and the plurality of offset values (operation S302). The compensation circuit 202 may compensate the LUT by applying the gain value and the offset value corresponding to the temperature value of the temperature data TEMP to the plurality of compensation values. As shown in FIG. 4, the LUT 400 may include a plurality of groups 401, 402, and 403 whose compensation values are divided by the modes MODE0, MODE1, and MODE2. The compensation values of the LUT 400 may be mapped to the low-order data values GI[7:4] (0000, . . . , 1111). The compensation values of the LUT 400 may be predetermined bit data that may be expressed with a sign. For example, the compensation values may be 5-bit data. In FIG. 4, the compensation values are depicted as integers for reference. The compensation circuit 202 may multiply the compensation values by the gain value GAIN corresponding to the temperature value, and may add the offset value OFFSET corresponding to the temperature value.

The compensation circuit 202 may determine a group corresponding to the operation mode in the compensated LUT (operation S304). When an operation mode control instruction EN is received, the compensation circuit 202 may determine to use the compensation values of the operation mode group MODE2 indicated by the operation mode control instruction EN among the plurality of compensation values included in the compensated LUT. When the operation mode control instruction EN is not received, the compensation circuit 202 may determine to use the compensation values of the group 401 or 402 associated with the mode MODE0 or MODE1 corresponding to the brightness (first brightness or second brightness) set according to a brightness control instruction BV among the plurality of compensation values included in the compensated LUT. When the brightness set according to the brightness control instruction is between the first brightness and the second brightness, the compensation circuit 202 may interpolate between the two groups 401 and 402, and determine to use the interpolated LUT. Here, the operation mode according to the operation mode control instruction EN may be a low-power display mode such as an AMOLED low power mode (ALPM) or a hybrid low power mode (HLPM), but is not limited thereto.

The compensation circuit 202 may determine a tap corresponding to a high-order bit data value (GI[13:8]) (operation S306).
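A minimal sketch of operations S300 and S302 follows, with hypothetical temperature breakpoints and compensation values (the interpolation rule is plain linear interpolation, which the text permits but does not mandate):

```python
# (degC, (GAIN, OFFSET)) entries, hypothetical; mirrors the 35/50/62 degC example.
TEMP_POINTS = [(35, (1.0, 0)), (50, (1.1, 1)), (62, (1.3, 2))]

def gain_offset_at(temp_c: float):
    """Operation S300: pick or interpolate the gain/offset for the sensed temperature."""
    pts = TEMP_POINTS
    if temp_c <= pts[0][0]:
        return pts[0][1]
    if temp_c >= pts[-1][0]:
        return pts[-1][1]
    for (t0, (g0, o0)), (t1, (g1, o1)) in zip(pts, pts[1:]):
        if t0 <= temp_c <= t1:
            w = (temp_c - t0) / (t1 - t0)
            return g0 + w * (g1 - g0), o0 + w * (o1 - o0)

def compensate_lut(comp_values, temp_c: float):
    """Operation S302: apply value * GAIN + OFFSET to every stored compensation value."""
    gain, offset = gain_offset_at(temp_c)
    return [v * gain + offset for v in comp_values]

print(compensate_lut([-4, 0, 3], temp_c=52))  # 52 degC falls between MID and HIGH
```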
The compensation circuit 202 may determine the tap TAP0, TAP1, or TAP2 corresponding to the high-order bit data value (GI[13:8]) of the gamma image data GI. As shown in FIG. 4, the plurality of groups 401, 402, and 403 may each include a plurality of taps TAP0, TAP1, and TAP2 divided according to the high-order bit data value (GI[13:8]). That is, each of the plurality of taps TAP0, TAP1, and TAP2 may correspond to one high-order bit data value (GI[13:8]). For example, when the high-order bits are 6 bits, each of the plurality of taps TAP0, TAP1, and TAP2 may correspond to a data value between 000000 and 111111. The respective compensation values of the plurality of taps TAP0, TAP1, and TAP2 may be mapped to the low-order data values GI[7:4] (0000, . . . , 1111). The LUT interpolated in operation S304 may also include the plurality of taps TAP0, TAP1, and TAP2.

When the high-order bit data value (GI[13:8]) of the gamma image data GI differs from the values of the plurality of taps TAP0, TAP1, and TAP2, the compensation circuit 202 may interpolate the plurality of compensation values by using the two tap values between which the high-order bit data value (GI[13:8]) of the gamma image data GI is positioned. For example, when the high-order bit data value (GI[13:8]) of the gamma image data GI is 001111, and the values of TAP0, TAP1, and TAP2 are 001000, 100000, and 111000, respectively, the high-order bit data value (GI[13:8]) is positioned between TAP0 and TAP1. The compensation circuit 202 may then interpolate between the plurality of compensation values included in TAP0 and those included in TAP1.

The compensation circuit 202 may select a compensation value corresponding to the first low-order bit data value (GI[7:4]) (operation S308). The compensation circuit 202 may select, from the plurality of compensation values of the determined tap, a compensation value corresponding to the first low-order bit data value (GI[7:4]) of the gamma image data GI. The compensation circuit 202 may also select one or more compensation values corresponding to values adjacent to the first low-order bit data value (GI[7:4]). For example, when the first low-order bit data value (GI[7:4]) of the gamma image data GI is ‘1000’, the compensation circuit 202 may select, from the plurality of compensation values of the determined tap, the compensation value corresponding to ‘1000’, the compensation value corresponding to ‘0111’, and the compensation value corresponding to ‘1001’. When the first low-order bit data value (GI[7:4]) of the gamma image data GI is ‘1111’, the compensation circuit 202 may select the compensation value corresponding to ‘1110’ from the plurality of compensation values of the determined tap.

The compensation circuit 202 may determine a final compensation value corresponding to the second low-order bit data value (GI[3:0]) (operation S310). The compensation circuit 202 may determine the final compensation value by using the selected plurality of compensation values. For example, the compensation circuit 202 may interpolate the selected plurality of compensation values by using the second low-order bit data value (GI[3:0]).
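The tap determination of operation S306, including interpolation between two neighboring taps, can be sketched as follows; the tap positions and the (shortened) per-tap compensation arrays are hypothetical:

```python
# A minimal sketch of operation S306, mirroring the 001000/100000/111000 example.
TAPS = {0b001000: [0, 1, 2, 3], 0b100000: [4, 5, 6, 7], 0b111000: [0, 0, 0, 0]}

def tap_values(hi6: int):
    """Return the per-tap compensation array for GI[13:8], interpolating
    between the two neighboring taps when hi6 is not itself a tap value."""
    keys = sorted(TAPS)
    if hi6 <= keys[0]:
        return TAPS[keys[0]]
    if hi6 >= keys[-1]:
        return TAPS[keys[-1]]
    for k0, k1 in zip(keys, keys[1:]):
        if k0 <= hi6 <= k1:
            w = (hi6 - k0) / (k1 - k0)
            return [a + w * (b - a) for a, b in zip(TAPS[k0], TAPS[k1])]

print(tap_values(0b001111))  # 001111 lies between TAP0 (001000) and TAP1 (100000)
```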
For example, by using the compensation value corresponding to ‘1000’, the compensation value corresponding to ‘0111’, and the compensation value corresponding to ‘1001’ among the values selected when the first low-order bit data value (GI[7:4]) of the gamma image data GI is ‘1000’, it is possible to generate a linear interpolation function, and to input the second low-order bit data value (GI[3:0]) to the interpolation function to determine the final compensation value.

The compensation circuit 202 may generate the compensated gamma image data CI by applying the final compensation value to the gamma image data GI (operation S312). The compensation circuit 202 may generate the compensated gamma image data CI by adding the final compensation value to the gamma image data GI. When the compensated gamma image data CI is expressed by more bits than m bits (for example, m+1 bits), the compensation circuit 202 may clip the compensated gamma image data CI so that the clipped data is expressed as m-bit data.

In example embodiments, the compensation circuit 202 may perform only some of the operations S300, S302, . . . , S312. For example, the compensation circuit 202 may perform operations S306, S308, S310, and S312 on the gamma image data GI without performing operations S300, S302, and S304. In addition, the compensation circuit 202 may perform operations S304, S306, S308, S310, and S312 on the gamma image data GI without performing operations S300 and S302.

The dithering circuit 203 may perform temporal and/or spatial dithering on the compensated gamma image data CI. The dithering circuit 203 may output n-bit data DATA by performing a dithering process on the m-bit compensated gamma image data CI.

FIG. 5 illustrates a block diagram of a compensation circuit of a semiconductor device according to an example embodiment. Referring to FIG. 5, a compensation circuit 500 according to an example embodiment may include a first compensator 501, a gain and offset calculator 502, a first interpolator 503, a second interpolator 504, and a second compensator 505.

The first compensator 501 may read the LUT from the storage circuit 204 of FIG. 2. The first compensator 501 may generate a compensation LUT (LUT_C) by compensating a plurality of compensation values of the LUT according to a temperature. The first compensator 501 may compensate the LUT by using the gain value GAIN and the offset value OFFSET transmitted from the gain and offset calculator 502. The first compensator 501 may receive a plurality of gain values GAIN and a plurality of offset values OFFSET corresponding to the plurality of compensation values. To obtain the compensation LUT (LUT_C), the first compensator 501 may compensate each of the plurality of compensation values by multiplying it by the corresponding gain value GAIN of the plurality of gain values GAIN and adding the corresponding offset value OFFSET of the plurality of offset values OFFSET.

The gain and offset calculator 502 may read the LUT from the storage circuit 204 of FIG. 2. The gain and offset calculator 502 may receive the temperature data TEMP, which may indicate a temperature sensed by a thermometer. The gain and offset calculator 502 may determine the plurality of gain values GAIN and the plurality of offset values OFFSET corresponding to the temperature value of the temperature data TEMP by using the LUT.
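Before moving on to the per-block view of FIG. 5, the last flowchart operations (S308 to S312) can be tied together in a short sketch. The tap contents are hypothetical, and the ‘1111’ edge is handled here by simply clamping to the last entry, whereas the description above reuses the ‘1110’ entry:

```python
# A minimal sketch of operations S308-S312, with a hypothetical 16-entry tap.
M_BITS = 14

def final_compensation(tap, gi_7_4: int, gi_3_0: int) -> float:
    """S308-S310: interpolate between the entries for GI[7:4] and its neighbor,
    using GI[3:0] as the fractional position within the interval."""
    v0 = tap[gi_7_4]
    v1 = tap[gi_7_4 + 1] if gi_7_4 < 15 else tap[15]  # clamp at the '1111' edge
    return v0 + (gi_3_0 / 16) * (v1 - v0)

def compensate(gi: int, tap) -> int:
    """S312: add the final compensation value and clip the result to m bits."""
    comp = final_compensation(tap, (gi >> 4) & 0xF, gi & 0xF)
    ci = round(gi + comp)
    return max(0, min(ci, (1 << M_BITS) - 1))  # clip instead of wrapping

tap = [0, 1, 1, 2, 2, 3, 3, 4, 4, 3, 3, 2, 2, 1, 1, 0]
print(compensate(0b10_1010_1000_0110, tap))
```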
In example embodiments, the LUT may include a plurality of gain values GAIN and a plurality of offset values OFFSET corresponding to a plurality of temperature values. The gain and offset calculator 502 may interpolate the plurality of gain values GAIN and the plurality of offset values OFFSET stored in the LUT so as to determine the plurality of gain values GAIN and the plurality of offset values OFFSET corresponding to the temperature value of the temperature data TEMP. This will be described with reference to FIG. 6.

FIG. 6 illustrates a graph of an LUT compensation gain value according to a temperature of a semiconductor device according to an example embodiment. Referring to FIG. 6, the LUT may store the gain values G1, G2, G2, and G3 in association with the corresponding temperature values T0, T1, T2, and T3 (the gain remaining constant at G2 between T1 and T2). The gain and offset calculator 502 may determine the gain value GAIN as G1 when the temperature value of the temperature data TEMP is T0 or less. When the temperature value of the temperature data TEMP is greater than T0 and less than T1, the gain and offset calculator 502 may determine the gain value GAIN by interpolating between G1 and G2 according to the temperature value. The gain and offset calculator 502 may determine the gain value GAIN as G2 when the temperature value is greater than or equal to T1 and less than or equal to T2. When the temperature value is greater than T2 and less than T3, the gain and offset calculator 502 may determine the gain value GAIN by interpolating between G2 and G3 according to the temperature value. The gain and offset calculator 502 may determine the gain value GAIN as G3 when the temperature value of the temperature data TEMP is greater than or equal to T3. The method described herein is only one of several methods by which the gain and offset calculator 502 may determine the gain value GAIN according to the temperature value of the temperature data TEMP, and the gain and offset calculator 502 may use a different method.

The gain and offset calculator 502 may compensate the LUT by using the determined gain value GAIN and offset value OFFSET. The compensation LUT (LUT_C) may include a plurality of compensation values compensated by the gain value GAIN and the offset value OFFSET. The plurality of compensation values in the compensation LUT (LUT_C) may include groups divided by brightness and groups divided by operation mode, in the same manner as in the LUT before compensation.

In an example embodiment, the LUT may store a functional model of the plurality of gain values GAIN and/or the plurality of offset values OFFSET according to the temperature value. The gain and offset calculator 502 may then determine the plurality of gain values GAIN and the plurality of offset values OFFSET corresponding to the temperature value of the temperature data TEMP by using the functional model.

The first interpolator 503 may determine a group corresponding to the operation mode in the compensation LUT (LUT_C). The first interpolator 503 may determine to use the compensation values of the group corresponding to the operation mode control instruction EN and the brightness control instruction BV. The first interpolator 503 may generate a group corresponding to the brightness of the brightness control instruction BV by interpolating the compensation values stored in the compensation LUT (LUT_C). The group selection and interpolation operations of the first interpolator 503 will be described with reference to FIG. 7.
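The FIG. 6 gain curve described above (flat below T0, flat between T1 and T2, flat above T3, linear in between) can be expressed compactly as a piecewise-linear function. The temperatures and gains below are hypothetical:

```python
# A minimal sketch of the FIG. 6 gain-versus-temperature curve.
BREAKPOINTS = [(0, 1.3), (25, 1.0), (45, 1.0), (60, 1.2)]  # (T0,G1),(T1,G2),(T2,G2),(T3,G3)

def gain(temp_c: float) -> float:
    pts = BREAKPOINTS
    if temp_c <= pts[0][0]:
        return pts[0][1]                  # GAIN = G1 at or below T0
    if temp_c >= pts[-1][0]:
        return pts[-1][1]                 # GAIN = G3 at or above T3
    for (t0, g0), (t1, g1) in zip(pts, pts[1:]):
        if t0 <= temp_c <= t1:
            return g0 + (temp_c - t0) / (t1 - t0) * (g1 - g0)

for t in (-5, 10, 35, 50, 70):
    print(t, round(gain(t), 3))
```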
FIG. 7 illustrates a graph of a group selected according to brightness and an operation mode of a semiconductor device according to an example embodiment. The first interpolator 503 may determine to use the compensation values of the group MODE2 in the compensation LUT (LUT_C) when the operation mode control instruction EN is enabled (“EN=1”). When the operation mode control instruction EN is disabled (“EN=0”), the first interpolator 503 may use the compensation values of the group MODE0 or MODE1 corresponding to the brightness value of the brightness control instruction BV, or a value obtained by interpolating between the compensation values of the groups MODE0 and MODE1 according to that brightness value.

The compensation LUT (LUT_C) may include a group MODE0 corresponding to a brightness BV0 and a group MODE1 corresponding to a brightness BV1. Each of the group MODE0 and the group MODE1 may include a plurality of compensation values. When the brightness of the brightness control instruction BV is equal to or less than BV0, the first interpolator 503 may determine to use the compensation values of the group MODE0. When the brightness of the brightness control instruction BV is equal to or greater than BV1, the first interpolator 503 may determine to use the compensation values of the group MODE1. When the brightness of the brightness control instruction BV is greater than BV0 and less than BV1, the first interpolator 503 may determine a value between the compensation values of the group MODE0 and the compensation values of the group MODE1 by interpolating according to the brightness. The method described herein is only one of several methods by which the first interpolator 503 may determine a group according to the operation mode and the brightness, and the first interpolator 503 may determine a group according to the operation mode and the brightness in a different way. The first interpolator 503 may output an LUT (LUT_P) including the plurality of compensation values of the determined group or the plurality of interpolated compensation values.

The second interpolator 504 may determine a tap corresponding to the high-order bit data of the gamma image data GI by using the LUT (LUT_P). The second interpolator 504 may determine to use the compensation values of the tap corresponding to the high-order bit data value. The second interpolator 504 may generate the tap corresponding to the high-order bit data value by interpolating the compensation values stored in the LUT (LUT_P). The tap selection and interpolation operations of the second interpolator 504 will be described with reference to FIG. 8.

FIG. 8 illustrates a graph of compensation values according to high-order bits of image data of a semiconductor device according to an example embodiment. Referring to FIG. 8, the LUT (LUT_P) may store the compensation values corresponding to the high-order bit data values (GI[13:8]) D0, D1, D2, D3, and D4 as V0, V1, V2, V3, and 0, respectively. In this case, each compensation value is the compensation value corresponding to the same low-order bit data value. The second interpolator 504 may determine the compensation value as 0 when the high-order bit data value (GI[13:8]) is less than D0. The second interpolator 504 may determine the compensation value as V0 when the high-order bit data value (GI[13:8]) is D0. The second interpolator 504 may determine the compensation value by interpolating between V0 and V1 according to the high-order bit data value (GI[13:8]) when the high-order bit data value (GI[13:8]) is greater than D0 and less than D1.
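The group selection of FIG. 7 can be sketched as below; the brightness anchors and per-group values are hypothetical, and plain linear interpolation between the groups is an assumption:

```python
# A minimal sketch of the first interpolator's group selection (FIG. 7).
BV0, BV1 = 100, 400                      # brightnesses at which MODE0/MODE1 apply
MODE0 = [0, 2, 4, 6]                     # compensation values for brightness BV0
MODE1 = [1, 3, 5, 7]                     # compensation values for brightness BV1
MODE2 = [0, 0, 1, 1]                     # low-power operation mode group

def select_group(en: bool, bv: float):
    if en:                               # "EN=1": the operation mode group wins
        return MODE2
    if bv <= BV0:
        return MODE0
    if bv >= BV1:
        return MODE1
    w = (bv - BV0) / (BV1 - BV0)         # interpolate between the two groups
    return [a + w * (b - a) for a, b in zip(MODE0, MODE1)]

print(select_group(en=False, bv=250))    # halfway: [0.5, 2.5, 4.5, 6.5]
```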
The second interpolator 504 may determine the compensation value as V1 when the high-order bit data value (GI[13:8]) is D1. The second interpolator 504 may determine the compensation value by interpolating between V1 and V2 according to the high-order bit data value (GI[13:8]) when the high-order bit data value (GI[13:8]) is greater than D1 and less than D2. The second interpolator 504 may determine the compensation value as V2 when the high-order bit data value (GI[13:8]) is D2. The second interpolator 504 may determine the compensation value by interpolating between V2 and V3 according to the high-order bit data value (GI[13:8]) when the high-order bit data value (GI[13:8]) is greater than D2 and less than D3. The second interpolator 504 may determine the compensation value as V3 when the high-order bit data value (GI[13:8]) is D3. The second interpolator 504 may determine the compensation value by interpolating between V3 and 0 according to the high-order bit data value (GI[13:8]) when the high-order bit data value (GI[13:8]) is greater than D3 and less than D4. The second interpolator 504 may determine the compensation value as 0 when the high-order bit data value (GI[13:8]) is equal to or greater than D4. The method described herein is only one of several methods by which the second interpolator 504 may determine a tap according to the high-order bit data value, and the second interpolator 504 may determine a tap according to the high-order bit data value in a different way. The second interpolator 504 may output the plurality of compensation values of the determined tap or the plurality of interpolated compensation values as compensation data D_C.

The second compensator 505 may determine a final compensation value corresponding to the low-order bit data of the gamma image data GI by using the compensation data D_C, and may output the compensated gamma image data CI. In example embodiments, the second compensator 505 may determine the final compensation value by interpolating the compensation values corresponding to the values of the first low-order bit data (GI[7:4]) by using the value of the second low-order bit data (GI[3:0]). For example, the second compensator 505 may generate an interpolation function that takes the second low-order bit data (GI[3:0]) as an input value and outputs a compensation value, by using the compensation value corresponding to the first low-order bit data value (GI[7:4]) among the plurality of compensation values of the compensation data D_C and at least one compensation value corresponding to a value adjacent to the first low-order bit data value (GI[7:4]). The second compensator 505 may output, as the final compensation value, the output value of the generated interpolation function according to the second low-order bit data (GI[3:0]) of the gamma image data GI.

In example embodiments, the compensation circuit 500 may include only some of the first compensator 501, the gain and offset calculator 502, the first interpolator 503, the second interpolator 504, and the second compensator 505. For example, the compensation circuit 500 may include only the second interpolator 504 and the second compensator 505. Alternatively, the compensation circuit 500 may include only the first interpolator 503, the second interpolator 504, and the second compensator 505.

FIG. 9 illustrates a schematic block diagram of a source driver according to an example embodiment. Referring to FIG. 9, a source driver 900 may include a latch 901, a decoder 902, and a source amplifier 903. The latch 901 may temporarily store the received data (DATA[n−1:0]), may dispose it to fit a source line of the display panel (120 in FIG. 1), and may transmit the disposed data to the decoder 902. The decoder 902 may receive the high-order bit data (DATA[n−1:n−j]) of the data disposed by the latch 901, and may convert the high-order bit data (DATA[n−1:n−j]) into an analog signal.
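The FIG. 8 profile described above (zero below D0 and at or above D4, anchors V0 to V3 in between) reduces to a piecewise-linear function. The anchor positions and values below are hypothetical:

```python
# A minimal sketch of the FIG. 8 compensation-versus-GI[13:8] profile.
ANCHORS = [(8, 6.0), (16, 4.0), (32, 2.0), (48, 1.0), (56, 0.0)]  # (D0,V0)..(D4,0)

def tap_compensation(hi6: int) -> float:
    if hi6 < ANCHORS[0][0] or hi6 >= ANCHORS[-1][0]:
        return 0.0                        # below D0, or at/above D4
    for (d0, v0), (d1, v1) in zip(ANCHORS, ANCHORS[1:]):
        if d0 <= hi6 <= d1:
            return v0 + (hi6 - d0) / (d1 - d0) * (v1 - v0)

for code in (0, 8, 12, 16, 40, 56):
    print(code, tap_compensation(code))
```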
The decoder 902 may output two gamma voltages VH and VL corresponding to the j-bit high-order bit data (DATA[n−1:n−j]) among a plurality of gamma voltages (VG1, . . . , VGi) received from the gamma voltage generator (113 in FIG. 1). The two gamma voltages VH and VL may be input to the source amplifier 903. The source amplifier 903 may receive the low-order bit data (DATA[n−j−1:0]) of the data disposed by the latch 901, and may generate and output an interpolation voltage between the two gamma voltages VH and VL based on the low-order bit data (DATA[n−j−1:0]). The source amplifier 903 may use the n−j bits of low-order bit data (DATA[n−j−1:0]) and the two gamma voltages VH and VL to output one of 2^(n−j) interpolation voltages as an output signal VOUT. That is, the source amplifier 903 may output, as the output signal VOUT, the one of the 2^(n−j) interpolation voltages corresponding to the low-order bit data value (DATA[n−j−1:0]). The output signal VOUT may be transmitted to the display panel 120 in the form of an analog signal as the data signals (S1, S2, . . . , Sk of FIG. 1).

Due to the INL of the source amplifier 903, a difference may occur between the output signal VOUT that is output according to the low-order bit data value (DATA[n−j−1:0]) and the ideal signal that should be output according to that value. According to example embodiments, a compensation value capable of compensating for such a difference is stored in the LUT, and the compensation value stored in the LUT is compensated and/or interpolated in consideration of the temperature, brightness, and mode of the display device, so that an output signal VOUT having a reduced voltage difference may be output. The output signal VOUT output based on the plurality of gamma voltages (VG1, . . . , VGi) and the data (DATA[n−1:0]) received by the source driver 900 will be described with reference to FIG. 10.

FIG. 10 illustrates a graph of an INL improvement effect of a semiconductor device according to an example embodiment. A plurality of gamma voltages (VG0, VG1, VG2, VG3, . . . , VG63) may correspond to a plurality of grayscale values (0, 2^4, 2·2^4, 3·2^4, . . . , 2^6·2^4). In addition, the plurality of grayscale values (0, 2^4, 2·2^4, 3·2^4, . . . , 2^6·2^4) may correspond to a plurality of high-order bit data values (DATA[9:4]) (000000, 000001, 000010, 000011, . . . , 111111). When the high-order bit data value (DATA[9:4]) is 000010, the decoder 902 may output the two gamma voltages VG2 and VG3 corresponding to 000010. Ideally, the source amplifier 903 outputs an output signal VOUT1 that increases linearly as the low-order bit data value (DATA[3:0]) increases. In practice, however, the source amplifier 903 outputs an output signal VOUT2 due to INL, and the actual output signal VOUT2 has a voltage difference from the ideal output signal VOUT1. The semiconductor device according to example embodiments may output an output signal VOUT3 within a predetermined range (for example, 3 standard deviations) of the ideal output signal VOUT1, by compensating the data DATA transmitted to the source driver 900 with a compensation value capable of offsetting the voltage difference.

FIG. 11 illustrates a graph of an effect of reducing a color coordinate error of a semiconductor device according to an example embodiment, together with Comparative Example 1 and Comparative Example 2.
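A minimal digital model of the decoder plus interpolating source amplifier described above is sketched below, assuming n=10 total bits, j=6 decoded bits, and an arbitrary hypothetical gamma ramp (a real driver outputs analog voltages with INL error, which this idealized model omits):

```python
N, J = 10, 6
# Hypothetical gamma ramp: 65 anchor voltages so segment hi has endpoints hi, hi+1.
GAMMA_V = [((code / 64) ** (1 / 2.2)) * 5.0 for code in range(2**J + 1)]

def vout(data: int) -> float:
    hi = data >> (N - J)                  # DATA[9:4] selects the coarse segment
    lo = data & ((1 << (N - J)) - 1)      # DATA[3:0] selects one of 2^(n-j) steps
    vl, vh = GAMMA_V[hi], GAMMA_V[hi + 1] # the two gamma voltages VL and VH
    return vl + (vh - vl) * lo / (1 << (N - J))

print(vout(0b000010_0000), vout(0b000010_1000))  # start and midpoint of segment 2
```

Only 2^j + 1 anchor voltages need to be decoded, while the amplifier fills in the remaining 2^(n−j) steps, which is why the interpolation scheme shrinks the decoder area and power.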
In the graph of FIG. 11, the X-axis is a voltage applied to the gate of the driving transistor (in the case of a PMOS) included in the pixel PX; an increase in the X-axis direction represents an increase in grayscale, and a decrease in the X-axis direction represents a decrease in grayscale. The Y-axis represents a color coordinate error. Comparative Example 1 shows a color deviation according to grayscale for an interpolation scheme that uses 6 bits as high-order bits and 4 bits as low-order bits without performing the compensation operation of the semiconductor device according to example embodiments. Comparative Example 2 shows a color deviation according to grayscale for an interpolation scheme that uses 7 bits as high-order bits, but is otherwise identical to Comparative Example 1. Referring to the color deviation according to the grayscale of the example embodiment, it can be confirmed that the color deviation is significantly reduced compared to Comparative Example 1, and that the color deviation is also reduced compared to Comparative Example 2, which uses more bits. Therefore, according to the semiconductor device of example embodiments, the color coordinate error may be improved, and thus display quality may be improved. In addition, according to the semiconductor device of example embodiments, by using the interpolation scheme, the area occupied by the decoder 902 and the power consumed by the decoder 902 may be reduced.

FIG. 12 illustrates a drawing for explaining a semiconductor system according to an example embodiment. Referring to FIG. 12, a semiconductor system 1200 according to an example embodiment may include a processor 1210, a memory 1230, a display device 1220, and a peripheral device 1240 that are electrically connected to a system bus 1250. The processor 1210 may control the input and output of data of the memory 1230, the display device 1220, and the peripheral device 1240, and may perform image processing of image data transmitted between the corresponding devices. The display device 1220 may include a DDI 1221 and a display panel 1222, and it may store image data applied through the system bus 1250 in a frame memory included in the DDI 1221 and then display the image data on the display panel 1222. The DDI 1221 may be the semiconductor device according to example embodiments. The DDI 1221 may gamma-correct input image data, and may compensate the gamma-corrected gamma image data with a compensation value corresponding to the offset of the source driver of the DDI 1221. The compensation value may be stored in the LUT in the DDI 1221, and the DDI 1221 may compensate and/or interpolate the compensation value stored in the LUT in consideration of the temperature, brightness, and mode of the display device 1220.

The peripheral device 1240 may be a device that converts a moving image or a still image captured by a camera, a scanner, or a webcam into an electrical signal. The image data obtained through the peripheral device 1240 may be stored in the memory 1230, or may be displayed on the display panel 1222 in real time. The memory 1230 may include a volatile memory such as a dynamic random access memory (DRAM) and/or a non-volatile memory such as a flash memory.
The memory1230may be configured with a DRAM, a phase-change random access memory (PRAM), a magnetic random access memory (MRAM), a resistive random access memory (ReRAM), a ferroelectric random access memory (FRAM), a NOR flash memory, a NAND flash memory, and a fusion flash memory (for example, a memory in which a static random access memory (SRAM) buffer, a NAND flash memory, and a NOR interface logic are combined). The memory1230may store image data obtained from the peripheral device1240or an image signal processed by the processor1210. The semiconductor system1200may be provided in a mobile electronic product such as a smart phone, but is not limited thereto, and may be provided in various electronic products that display images. FIG.13illustrates a drawing for explaining a semiconductor system according to an example embodiment. Referring toFIG.13, a semiconductor system1300according to an example embodiment may include a host1310, a DDI1320, a display panel1330, a touch panel driver1340, and a touch panel1350. The host1310may receive data or instruction from a user, and control the DDI1320and the touch panel driver1340based on the received data or instruction. The DDI1320may drive the display panel1330under the control of the host1310. The DDI1320may include the semiconductor device according to example embodiments. The DDI1320may gamma-correct input image data, and compensate the gamma-corrected gamma image data with a compensation value corresponding to the offset of the source driver of the DDI1320. The compensation value may be stored in the LUT in the DDI1320, and the DDI1320may compensate and/or interpolate the compensation value stored in the LUT in consideration of the temperature, brightness, and mode of the semiconductor system1300. The touch panel1350may be provided to overlap the display panel1330. The touch panel driver1340may receive data sensed by the touch panel1350and transmit the data to the host1310. In some example embodiments, each constituent element or a combination of two or more constituent elements described with reference toFIG.1toFIG.13may be implemented as a digital circuit, a programmable or non-programmable logic device or array, an application specific integrated circuit (ASIC), or the like. While aspects of example embodiments have been particularly shown and described, it will be understood that various changes in form and details may be made therein without departing from the spirit and scope of the following claims. | 53,071 |
11862112 | DETAILED DESCRIPTION OF ILLUSTRATIVE EMBODIMENTS Like features have been designated by like references in the various figures. In particular, the structural and/or functional features that are common among the various embodiments may have the same references and may have identical structural, dimensional and material properties. For the sake of clarity, only the operations and elements that are useful for an understanding of the described embodiments herein have been illustrated and described in detail. Unless indicated otherwise, when reference is made to two elements that are connected together, this means a direct connection without any intermediate elements other than conductors, and when reference is made to two elements that are linked or coupled together, this means that these two elements can be connected or be linked or coupled by way of one or more other elements. In the following disclosure, unless indicated otherwise, when reference is made to absolute positional qualifiers, such as the terms “front”, “back”, “top”, “bottom”, “left”, “right”, etc., or to relative positional qualifiers, such as the terms “above”, “below”, “higher”, “lower”, etc., or to qualifiers of orientation, such as “horizontal”, “vertical”, etc., reference is made to the orientation shown in the figures. Unless specified otherwise, the expressions “around”, “approximately”, “substantially” and “in the order of” signify within 10%, and preferably within 5%. In the following disclosure, electronic systems are considered in which the screen operates by alternating phases in which the screen emits light and phases in which the screen is turned off, i.e. the screen emits no light. In such systems, the average light power emitted by the screen and perceived by a user is adapted by modifying the duration of the phases of light emission and/or the duration of the phases in which no light is emitted. The screen is thus controlled by a binary signal, a first binary state of which, corresponding for example to a level of high potential, controls a phase of light emission by the screen, and a second binary state of which, corresponding for example to a level of low potential such as the ground, controls a phase in which no light is emitted by the screen. With adequate switching frequencies between the phases in which the screen emits light and the phases in which the screen is turned off, the user of the screen does not perceive the transitions between these phases, due to the persistence of vision of the human eye. For instance, the binary control signal generally undergoes pulse-width modulation (PWM). The type of screen, for example LCD (Liquid Crystal Display) or OLED (Organic Light Emitting Diode), to which such control modes apply and the manner of implementation of these control modes have not been described in detail. The described embodiments are compatible with these known control modes and the known screens to which these control modes apply. FIGS.1A and1Billustrate two views A and B of an embodiment of an electronic device2000, in this example a mobile telephone2000, the view A being a front view of the telephone2000and the view B being a sectional view along the plane BB indicated in view A. The device2000comprises an electronic system1000. The electronic circuit1000comprises a screen100intended to display images and/or information destined for a user. The display screen, or panel,100comprises a matrix of pixels (not illustrated) emitting light. 
The system 1000 further comprises various electronic circuits including an ambient light sensor 104. In the example shown in FIG. 1B, in the view B, two further electronic circuits, namely a processing unit 106 and a driver, or control, circuit 108 of the screen 100, are illustrated. The various electronic circuits of the system 1000 are, for example, mounted on a printed circuit board (PCB) 110, preferably a flexible printed circuit board, in order to be electrically coupled with one another via the board 110. Although a sole board 110 is illustrated in the view B shown in FIG. 1B, the system 1000 can comprise a plurality of boards 110 possibly electrically coupled with one another via ribbon cables.

For instance, the display screen 100 can be of the OLED type (Organic Light Emitting Diode). The screen 100 is thus, for example, controlled by a binary control signal, for example generated by the driver circuit 108. This control signal is, for example, provided selectively to each diode of the screen, so as to alternate phases in which at least certain diodes of the screen 100 emit light and phases in which no diode of the screen 100 emits light. The selection of the diodes of the screen 100 receiving or not receiving the control signal is, for example, implemented by the driver circuit 108. In certain cases, the driver circuit 108 can further adapt, for each diode, the voltage level of the binary signal corresponding to a phase of light emission so as to adapt the light power emitted by the diode. Each pixel of the screen can be constituted by one or more diodes, possibly covered by an RGB (Red, Green, and Blue) color filter.

For instance, the display screen 100 can also be of the LCD type (Liquid Crystal Display). The screen 100 thus comprises, for example, a matrix of pixels each comprising polarizing liquid crystal filters, and an illuminating panel disposed under the matrix of pixels. The panel is, for example, controlled by a binary control signal, for example generated by the driver circuit 108, so that the panel operates by alternating phases of light emission and phases in which the panel does not emit any light. In certain cases, the driver circuit can further adapt the voltage level of the binary signal corresponding to a phase of light emission so as to adapt the light power emitted by the panel. The polarizing filters of each pixel are controlled, for example by the driver circuit 108 of the screen 100, to let through or to not let through the light emitted by the panel toward a user. Each pixel of the screen can be covered by one or more RGB color filters.

In the illustrated example, the system 1000 further comprises, above the display screen 100, a touch screen 112. The touch screen, or touch plate, 112 entirely covers the display screen 100, the screens 100 and 112 having substantially the same surface areas, preferably the same surface areas. Typically, the device 2000 comprises a protective glass pane 114 covering the screen 100, and, more specifically in this example, the assembly constituted by the two screens 100 and 112. The glass pane 114 entirely covers the screen 100, the surface area of the glass pane 114 being substantially equal to that of the screen 100, preferably equal to that of the screen 100. The device 2000 comprises a housing, or shell, 116, in which the system 1000 is disposed, i.e. in which the electronic circuits 104, 106 and 108 and the one or more boards 110 are disposed.
The assembly of the screen100, the possible touch screen112and the glass pane114closes the housing116on the side of a face of the system, the upper face in the view B ofFIG.1B, and the face that is visible in the view A ofFIG.1A. In this embodiment, the telephone2000is called “borderless”, i.e. the screen100, and more specifically the assembly of the screen100, the possible touch screen112and the glass pane114occupies substantially the entire face, preferably the entire face, of the device intended to be viewed by the user of the system, i.e. the upper face of the device2000in the view B ofFIG.1B. The ambient light sensor104is thus disposed under the screen100, i.e. on the side of the screen100opposite the face of the screen100visible to the user. The display screen100, the touch screen112and the glass pane114are thus at least partially transparent to the ambient light, the ambient light corresponding here to the visible light and possibly to infra-red and/or ultra-violet light. Thus, ambient light can pass through the assembly of the glass pane114, the possible touch screen112and the display screen100, and reach the sensor104. FIGS.2A and2Billustrate two views A and B of a further embodiment of an electronic device3000, in this example a mobile telephone3000, the view A being a front view of the telephone and the view B being a sectional view along the plane BB indicated in view A. The device3000ofFIGS.2A and2Bdiffers from the device2000ofFIGS.1A and1Bin that the display screen100and the possible touch screen112are interrupted above the sensor104in order to allow the ambient light to reach the sensor104. More specifically, a window118is provided in the screen100and the possible screen112, above the sensor104. The glass pane114covers the window118so as to protect the electronic circuits disposed in the housing116, and in particular the sensor104. It should be noted that the devices2000and3000are illustrated in a schematic fashion, and that not all details of these devices have been illustrated. The embodiments that will be described in the following are not limited to the example devices shown inFIGS.1and2, but apply to all electronic devices comprising an electronic system1000, for example tablets, connected watches, computer screens, mobile telephones, multimedia apparatus equipped with a, for example flexible or pliable, screen, etc. More specifically, the described embodiments apply to electronic systems1000comprising a display screen100and an ambient light sensor104disposed under the screen100as illustrated inFIG.1B, or under a window, or opening,118of the screen100as illustrated inFIG.2B, in which the screen100operates by alternating phases of light emission and phases in which no light is emitted. FIG.3illustrates, in a schematic fashion and in the form of blocks, the electronic system1000shown inFIG.1A,1B,2A or2B. 
The system 1000 comprises:
- the processing unit 106 (PU), for example a state machine, a microprocessor, a microcontroller, a programmable logic circuit, etc.;
- one or more storage zones 120 (MEM), each storage zone, or memory, potentially being volatile, for example of the RAM memory type or registers, for temporarily storing information (instructions, addresses, data) during processing, or non-volatile, for example of the flash type, for storing information in a permanent manner and in particular when the system 1000 is not supplied with power;
- one or more data, address and/or control buses between the internal electronic circuits of the system 1000, illustrated here in the form of a sole bus 122;
- an input-output communication interface 124 (I/O), for example of the serial bus type, for communicating with the outside of the system 1000;
- the screen (SCREEN) 100;
- the driver circuit 108 (SCREEN DRIVER) of the screen 100, illustrated here as part of the screen 100; and
- the ambient light sensor (ALS) 104.

Furthermore, the system 1000 can integrate other functions, represented by a block 126 (FCT), for example a crypto-processor, further interfaces, further memories, a camera, an image sensor, etc. The one or more electrical supplies of the various elements of the system 1000, in particular the electrical supply of the circuit 108, are not illustrated in FIG. 3.

In the system 1000, the processing unit 106 is configured to provide a setpoint signal to the circuit 108. This setpoint signal is representative of a setpoint value of the average light power that the screen 100 needs to emit. For instance, this setpoint value is determined on the basis of a measurement signal provided by the sensor 104, the measurement signal being representative of the quantity of light, or more precisely the quantity of photons, received by the sensor 104 during a phase of measurement of the ambient light. The dependence of the setpoint value on the measured level of ambient light allows an automatic adjustment of the light power emitted by the screen as a function of the level of ambient light. This setpoint value can also depend on a luminosity setpoint of the screen 100 provided manually to the system 1000 by the user.

As a function of this setpoint signal, the circuit 108 adapts the binary control signal that it provides to the screen 100, and more specifically adapts the duration during which the signal is in the first binary state and/or the duration during which the signal is in the second binary state. This amounts to adapting the duration of the phases of operation in which the screen 100 emits light and/or the duration of the phases of operation in which the screen does not emit any light, so that the average luminosity emitted by the screen over a large number of phases of operation, for example more than 100, corresponds to the average setpoint luminosity. It should be noted that it is the average luminosity emitted by the screen that is perceived by the user, since the transitions between the phases of light emission by the screen and non-emission of light by the screen are imperceptible to the human eye due to persistence of vision. For instance, when the control signal of the screen undergoes pulse-width modulation, the circuit 108 increases or decreases the duty cycle of the signal, and, optionally, the circuit 108 can also modify the frequency of the signal.
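The duty-cycle arithmetic behind this control mode is straightforward; a minimal sketch follows, with a hypothetical PWM frequency and peak luminance (the 250 Hz / 10% figures echo the example durations given later for FIG. 8):

```python
# A minimal sketch of mapping a luminance setpoint to PWM on/off durations.
PWM_FREQ_HZ = 250.0          # one emission/off alternation every 4 ms
PEAK_NITS = 600.0            # luminance while the screen is in an emission phase

def pwm_phases(setpoint_nits: float):
    """Return (t_on, t_off) in seconds so the average luminance matches the setpoint."""
    duty = min(max(setpoint_nits / PEAK_NITS, 0.0), 1.0)
    period = 1.0 / PWM_FREQ_HZ
    return duty * period, (1.0 - duty) * period

t_on, t_off = pwm_phases(60.0)           # 10% duty: 0.4 ms on, 3.6 ms off
print(round(t_on * 1e3, 2), round(t_off * 1e3, 2))
```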
A drawback of the systems 1000 in which the sensor 104 is disposed under the screen 100, or close to the screen 100, for example along an edge of the screen 100 or under a window or opening 118 of the screen 100, is that the light emitted by the screen can reach the sensor 104, and thus distort the measurement of the level of ambient light. In the more specific case where the sensor 104 is disposed under the screen 100, the glass pane 114 and the possible touch screen 112, the rate of transmission of ambient light to the sensor 104 can be low, for example lower than 5%, or even 1%. In this case, the light emitted by the screen 100 that reaches the sensor 104 can have a light power comparable to that of the ambient light transmitted up to the sensor 104, which poses a problem.

In order to address these drawbacks, the inventor proposes here to exploit the control mode of the screen 100, and more specifically the alternation of the phases of operation in which the screen 100 emits light and those in which the screen 100 does not emit any light. More specifically, the inventor proposes here to provide, in the system 1000, a synchronization device or circuit configured so that each phase of measurement of the level of ambient light by the sensor 104 is implemented during a phase of operation in which the screen 100 does not emit any light. In other words, the inventor proposes here to provide a synchronization device configured to synchronize each measurement phase of the sensor 104 with a phase of operation in which the screen 100 does not emit any light, and more specifically a synchronization device configured to synchronize the start of each phase of measurement of the level of ambient light with the start of a phase of operation in which the screen does not emit any light. Indeed, a measurement phase of the sensor 104 generally has a shorter duration than the phases in which the screen 100 does not emit any light when the screen is controlled by pulse-width modulation. Thus, the measurement effected by the sensor 104 is not distorted by light emitted by the screen that could reach the sensor 104.

Various embodiments will now be described in greater detail in relation to FIGS. 4, 5, 6 and 7.

FIG. 4 illustrates, in the form of blocks and in a more detailed manner, an embodiment of a part of the electronic system 1000 shown in FIG. 3. FIG. 4 illustrates more specifically the sensor 104, the central unit 106 and the driver circuit 108 of the screen 100, the circuit 108 being part of the screen 100 in this example.

In this embodiment, the synchronization device comprises the processing unit 106. The synchronization of the measurement phases of the sensor 104 with the phases of operation in which the screen 100 does not emit any light is thus at least partly implemented by the processing unit 106.

As has been described in the foregoing, the processing unit 106 is configured to provide a setpoint signal, designated as sig-t, to the circuit 108. According to an embodiment, the unit 106 is further configured to provide an activation signal sig-a to the circuit 108. The signal sig-a indicates to the circuit 108 whether or not it needs to provide the binary control signal, designated as sig-c, to the screen 100. In other words, the signal sig-a indicates whether the screen 100 needs to be in operation and controlled by the signal sig-c (circuit 108 active), or whether the screen 100 needs to be turned off (circuit 108 inactive).
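As an illustration of how a processing unit could exploit sig-a and the PWM characteristics deduced from sig-t (the deduction itself is detailed just below), here is a minimal sketch that predicts the start of each no-emission window; the function name and numbers are hypothetical, and each PWM period is assumed to begin with the emission phase:

```python
# A minimal sketch of the FIG. 4 scheme: predict "screen off" window starts
# from the activation time (sig-a) and the deduced PWM frequency/duty cycle.
def off_window_starts(t_activation_s: float, freq_hz: float, duty: float, count: int):
    """Yield the start times of the first `count` no-emission phases."""
    period = 1.0 / freq_hz
    for k in range(count):
        yield t_activation_s + k * period + duty * period

for t in off_window_starts(t_activation_s=0.0, freq_hz=250.0, duty=0.1, count=3):
    print(round(t * 1e3, 2), "ms")   # 0.4, 4.4, 8.4 ms -> assert sig-s1 here
```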
When the circuit 108 is active, the binary control signal sig-c controls the alternation of the phases of operation in which the screen 100 emits light with those in which the screen 100 does not emit any light. The sensor 104 is configured to provide, after each measurement of the level of ambient light, a measurement signal sig-m representative of the quantity of light received during each measurement phase. The signal sig-m is, for example, determined by the sensor 104 during a processing phase following the measurement phase, this processing phase potentially being implemented regardless of the state of the binary control signal, i.e. whether or not the screen 100 is in a phase of light emission.

Although not illustrated here, according to an embodiment, the sensor 104 is configured to implement a plurality of successive measurement phases before providing the signal sig-m, the latter thus being representative of the total quantity of light received during the successive measurement phases. For instance, the unit 106 is thus configured to provide to the sensor 104 a signal indicating whether a plurality of measurement phases need to be carried out successively before providing the corresponding signal sig-m, or whether the signal sig-m should be provided after each measurement phase.

In this embodiment, the central unit 106 is configured to provide to the sensor 104 a synchronization signal sig-s1 for synchronizing the start of each measurement phase. More specifically, the signal sig-s1 indicates, for example by a change in binary state when the signal sig-s1 is a binary signal, that the sensor 104 needs to start a measurement phase. The unit 106 can thus synchronize, via the signal sig-s1, the start of each measurement phase with a phase of operation in which the control signal sig-c is in the second binary state, i.e. a phase of operation in which the screen 100 does not emit any light.

For instance, the unit 106 determines the signal sig-s1 based on the setpoint signal sig-t that it provides to the circuit 108. Indeed, the unit 106 can deduce from the signal sig-t the characteristics (frequency, duty cycle, etc.) of the control signal sig-c, for example based on a table of correspondence between each value that the signal sig-t can potentially take and the corresponding characteristics of the signal sig-c.

The embodiment shown in FIG. 4 is more specifically adapted to the case where the signal sig-c undergoes pulse-width modulation. In this case, the unit 106 determines, for example, the time at which the circuit 108 passes from a deactivated state to an activated state based on the signal sig-a, and can deduce therefrom the start time of each alternation of a phase of operation in which the screen emits light and a phase of operation in which the screen does not emit any light. For each alternation, the start time of the phase in which the screen does not emit any light can correspond to the start time of the alternation, or be deduced from the start time of the alternation and from the knowledge of the duty cycle of the signal sig-c, the duty cycle of the signal sig-c being, for example, determined based on the setpoint signal sig-t.

FIG. 5 illustrates, in the form of blocks and in a more detailed manner, a further embodiment of a part of the electronic system 1000 shown in FIG. 3. More specifically, FIG. 5 illustrates the sensor 104, the central unit 106 and the driver circuit 108 of the screen 100, the circuit 108 being part of the screen 100 in this example.
Only the differences between the part of the system1000illustrated inFIG.4and the part illustrated inFIG.5are shown here in detail. In this embodiment, the synchronization device for synchronizing the measurement phases of the sensor104with the phases in which the screen100does not emit any light comprises the sensor104. This synchronization is thus at least partly implemented by the sensor104, based on a synchronization signal sig-s2representative of the control signal sig-c provided to the screen100. In the embodiment shown inFIG.5, the unit106does not provide a signal sig-s1. Moreover, in the embodiment shown inFIG.5, the sensor104is configured to receive the synchronization signal sig-s2representative of the control signal sig-c. In this example, the signal sig-s2is identical to the signal sig-c. In further examples not illustrated, the signal sig-s2can correspond to the signal sig-c the potential levels of which corresponding to the first and second binary states have been adapted, for example by the circuit108or by a dedicated circuit. The signal sig-s2provided to the sensor104allows it to know, or determine, the start time of each phase in which the screen100does not emit any light. The sensor104is thus configured to synchronize each of its measurement phases with a phase of operation in which the screen100does not emit any light. In this example where the signals sig-c and sig-s2are identical, the switching of the signal sig-s2from the first binary state to the second binary state indicates the start of a phase of operation in which the screen emits no light and the sensor104can thus synchronize the start of a measurement phase with the start of this phase of operation. Compared to the embodiment shown inFIG.4, the embodiment shown inFIG.5makes it possible to limit the time discrepancy between the start of a phase in which the screen100does not emit any light and the start of a measurement phase. This results from the fact that, in the embodiment shown inFIG.5, the synchronization signal sig-s2received by the sensor104is obtained directly from the control signal sig-c, while, in the embodiment shown inFIG.4, the circuit108and/or the processing unit106can introduce a time discrepancy between the control signal sig-c and the synchronization signal sig-s1received by the sensor104. For instance, this time discrepancy can be due to a plurality of circuits each communicating in turn, for example via the bus of the system, and sharing the same control signals. FIG.6illustrates, in the form of blocks and in a more detailed manner, a further embodiment of a part of the electronic system1000shown inFIG.3. More specifically,FIG.6illustrates the sensor104, the central unit106and the driver circuit108of the screen100, the circuit108being part of the screen100in this example. Only the differences between the part of the system1000illustrated inFIG.5and the part illustrated inFIG.6are shown here in detail. In this embodiment, the extraction of characteristics of the control signal sig-c is provided, from a signal sig-e representative of the temporal evolution of the light power received by the sensor104. The synchronization between the measurement phases of the sensor104and the phases of operation in which the screen does not emit any light is thus implemented based on these extracted characteristics. 
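Conceptually, the FIG. 5 scheme amounts to edge-triggered synchronization inside the sensor. The following sketch is a software stand-in for what would be dedicated sensor logic; it assumes sig-s2 is identical to sig-c and that a high level means emission:

```python
# A minimal sketch of the FIG. 5 scheme: start one ambient-light integration
# at each 1 -> 0 transition of sig-s2 (the screen stops emitting).
def run_sensor(sig_s2_samples, integrate):
    previous = 1
    for i, level in enumerate(sig_s2_samples):
        if previous == 1 and level == 0:   # falling edge of sig-s2
            integrate(start_index=i)       # begin the measurement phase
        previous = level

run_sensor([1, 1, 0, 0, 0, 1, 0, 0],
           integrate=lambda start_index: print("integrate from sample", start_index))
```

Because the trigger is taken directly from the control signal, no table of correspondence or activation-time bookkeeping is needed, which is the discrepancy-reduction advantage described above.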
Thus, in this embodiment, the synchronization device comprises the sensor104, which is thus configured to provide the signal sig-e, and a processing circuit600configured to extract characteristics of the signal sig-c from the signal sig-e. The fact is thus exploited here that, during a phase of light emission by the screen100, a part of this emitted light is received by the sensor104and is added to the ambient light received by the sensor104. Thus, characteristics of the control signal sig-c such as the frequency of the signal sig-c, the time ranges in which the signal sig-c is respectively in the first binary state and in the second binary state, the duty cycle of the signal sig-c, etc., can be extracted from the signal sig-e. For instance, the signal sig-e is an analogue signal the amplitude of which varies with the light power received by the sensor. In this embodiment, the circuit600is part of the unit106, the signal sig-e thus being provided to the processing unit106. As a variant, the circuit600can be a dedicated circuit, external to the unit106, corresponding, for example, to a block FCT (FIG.3). According to an embodiment, as long as the unit106sends a control signal sig-ce to the sensor104, the sensor104provides the signal sig-e. In a variant embodiment not illustrated, the unit106does not provide any signal sig-ce to the sensor104, which thus provides the signal sig-e in a continuous manner. According to an embodiment, in addition to being configured to extract characteristics of the signal sig-c from the signal sig-e, the circuit600is configured to determine a synchronization signal sig-s3provided to the sensor104. As a variant, the central unit106determines the signal sig-s3based on the characteristics extracted from the signal sig-e by the circuit600. The signal sig-s3indicates to the sensor104the start times of the phases of operation of the screen100in which the latter does not emit any light. The sensor104is thus configured to synchronize the start of each phase of measurement of the level of ambient light with a start time of a phase of operation in which the screen100does not emit any light. An advantage of the embodiment shown inFIG.6with respect to the embodiments shown inFIGS.4and5is that the synchronization is based on the signal sig-e which takes into account possible discrepancies between the level changes of the signal sig-c and the changes in phases of operation (emission or non-emission) of the screen. For instance, this discrepancy can result from a discrepancy between the control of the screen and the display by the screen, for example due to a display delay, the rise time of the electroluminescent diodes of the screen and/or a congestion of the buses or communication paths of the system. FIG.7illustrates in the form of blocks a variant of the embodiment described in relation toFIG.6. Only the differences between the systems1000shown inFIGS.6and7have been described in detail. InFIG.7, the processing circuit600is part of the sensor104rather than of the unit106. Thus, unlike the embodiment shown inFIG.6where the signal sig-e is transmitted to the unit106which, in return, provides the signal sig-s3to the sensor104, in the variant embodiment shown inFIG.7, these signals sig-e and sig-s3are internal to the sensor104. 
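A rough software model of what the processing circuit 600 extracts from sig-e is sketched below. Thresholding at the midpoint and averaging edge spacing are assumptions made for the sketch; actual hardware could rely on comparators or phase-locked loops instead:

```python
# A minimal sketch: recover the PWM frequency and duty cycle of sig-c from
# sampled light power sig-e (screen light adds on top of the ambient level).
def extract_pwm(sig_e, sample_rate_hz):
    threshold = (max(sig_e) + min(sig_e)) / 2
    binary = [1 if v > threshold else 0 for v in sig_e]
    rising = [i for i in range(1, len(binary)) if binary[i - 1] == 0 and binary[i]]
    if len(rising) < 2:
        return None
    period = (rising[-1] - rising[0]) / (len(rising) - 1) / sample_rate_hz
    duty = sum(binary) / len(binary)
    return 1.0 / period, duty              # estimated frequency (Hz) and duty cycle

samples = ([5.0] * 4 + [1.0] * 36) * 10    # 10% duty at 250 Hz, sampled at 10 kHz
print(extract_pwm(samples, sample_rate_hz=10_000.0))
```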
In particular, it is provided here that the sensor 104, and in particular its processing circuit 600, is configured to determine the frequency of the alternation between the phases of light emission by the screen and the phases in which the screen is turned off, the start of each phase of light emission by the screen, the end of each phase of light emission by the screen and/or the duty cycle between the phases of light emission by the screen and the phases in which the screen is turned off. Thus, the sensor 104 is autonomous and is capable, on the basis of the above information, of implementing phases of light capture when the screen is in a phase of operation in which it does not emit any light.

The variant embodiment shown in FIG. 7 benefits from the same advantages as the embodiment shown in FIG. 6. The variant embodiment shown in FIG. 7 is moreover simpler than the embodiment shown in FIG. 6, as the synchronization of the phases of capturing the ambient light with the phases of operation in which the screen does not emit any light is implemented inside the sensor 104, which simplifies the integration of the sensor 104 in an electronic system 1000.

FIG. 8 illustrates time charts depicting a mode of operation of the system 1000 described in the foregoing. In this example, a phase of measurement of the level of ambient light is implemented by the sensor 104 during each phase in which the screen does not emit any light.

FIG. 8 illustrates the control signal sig-c for controlling the phases of light emission by the screen and the phases in which the screen does not emit any light, the circuit 108 being active here. In this example, the low level of the signal sig-c controls the screen so that it does not emit any light (“screen off”), and the high level of the signal sig-c controls the screen so that it emits light (“screen on”). FIG. 8 also illustrates the phases of measurement of the level of ambient light by the sensor 104 (“ALS on”) and the phases in which the sensor does not measure the level of ambient light (“ALS off”). A phase of measurement of the level of ambient light by the sensor is considered here to correspond to a phase of integration by the sensor 104, i.e. a phase in which photons are received by a photosensitive zone of the sensor. The processing of the number of photons received during a phase of integration or measurement of the level of ambient light, for example for generating the signal sig-m representative of the number of photons received during this phase of integration, i.e. representative of the level of ambient light, is realized, in this example, after each phase of integration, for example at least partly during the following phase of light emission by the screen.

More specifically, in this example, phases of light emission by the screen start at the respective times t0, t2, t4, and t6 and end at the respective times t1, t3 and t5, the times t0, t1, t2, t3, t4, t5 and t6 being successive times. Thus, as described in the foregoing, in this example, phases of measurement of the level of ambient light by the sensor 104 are respectively implemented between the times t1 and t2, between the times t3 and t4, and between the times t5 and t6. More specifically, at each of the times t1, t3 and t5, a corresponding phase of measurement of the level of ambient light begins.
Although, inFIG.8, each phase of measurement of the level of ambient light ends at the end of a corresponding phase in which the screen does not emit any light (times t2, t4and t6), in practice, each phase of measurement of the level of ambient light can end before the end of the corresponding phase in which the screen does not emit any light. Although this is not illustrated inFIG.8, when the sensor104has ended a measurement phase, the sensor starts a processing phase during which it determines, or updates, the signal sig-m representative of the level of ambient light measured during the measurement phase. This processing phase can begin during the corresponding phase in which the screen does not emit any light, and continue or be entirely implemented during the following phase in which the screen emits light. Moreover, although not illustrated, according to the implemented embodiment, each phase of measurement of the level of ambient light by the sensor104can begin with a delay with respect to the start of the corresponding phase in which the screen does not emit any light. However, each phase of measurement of the level of light by the sensor ends at the latest at the end of the corresponding phase in which the screen does not emit any light. For instance, the phases in which the screen does not emit any light have a duration in the order of 3.6 ms, or less, the phases in which the screen emits light having a duration in the order of 0.4 ms, or less. For instance, a phase of measurement or integration of the level of ambient light by the sensor104has a duration in the order of 0.1 ms to 1 ms, preferably while taking into account possible processing delays of the various optical and electronic circuits involved. In the embodiments and variants described above, the sensor104can be configured to provide a signal sig-m representative of the light received during one or more measurement phases for a single wavelength range, for example the range of the wavelengths in the visible range possibly extended to the infra-red and/or ultra-violet wavelengths. In this case, the system1000cannot determine the type of ambient light that it receives. As a variant, the sensor104can be configured to provide a signal sig-m comprising, for each of a plurality of wavelength ranges, information representative of the quantity of light received, in this wavelength range, by the sensor during one or more measurement phases. In this case, the system1000can be configured to determine the type of ambient light, for example if the light is natural, from a filament light bulb, from a fluorescent light bulb, if the light is a cold or warm light, etc. In the case where the screen100is a color screen of the OLED type, the unit106can thus be configured to provide a signal sig-t comprising, for each wavelength range that the screen100can emit, an indication of the average target power that the screen100needs to emit for this wavelength range. Indeed, in the case of an OLED color screen, the circuit108is generally configured to control each pixel of the screen individually. As a result, the system1000can thus adapt the type of light emitted by its screen100to the type of ambient light. 
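As an illustration of such type determination, the sketch below classifies a light source from the relative content of a few wavebands of the multi-range signal sig-m. It is purely hypothetical: the band names, the share thresholds and the decision rules are invented for illustration, whereas a real system would rely on calibrated spectral signatures.

def classify_ambient_light(band_powers):
    """band_powers: dict of relative powers, e.g. 'blue', 'green', 'red', 'ir'."""
    total = sum(band_powers.values())
    share = {band: power / total for band, power in band_powers.items()}
    if share["ir"] > 0.4:
        return "filament bulb (warm, strong infra-red content)"
    if share["blue"] > share["red"]:
        return "cold light (e.g. daylight or fluorescent)"
    return "warm light"

print(classify_ambient_light({"blue": 0.1, "green": 0.2, "red": 0.2, "ir": 0.5}))
# -> 'filament bulb (warm, strong infra-red content)'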
Furthermore, although it has not been described, due to the delays in transmitting and processing the signals sig-t, sig-a, sig-e and sig-s1, sig-s2or sig-s3, the start time of a phase of operation in which the screen100does not emit any light that is indicated by the signal sig-s1, sig-s2or sig-s3can be later than the time at which this phase actually begins. The system1000can thus comprise a delay circuit configured to take into account these transmissions and processing delays. For example, the delay circuit is configured to delay the synchronization signal sig-s1, sig-s2or sig-s3so that the start time of a phase of operation in which the screen does not emit any light indicated by this signal corresponds to the actual start time of the following phase of operation in which the screen does not emit any light. The determination of the delay introduced by the delay circuit can, for example, be implemented during a phase of calibration of the system1000, and in particular of the delay circuit. Various embodiments and variants have been described. Those skilled in the art will understand that certain features of these embodiments can be combined and other variants will readily occur to those skilled in the art. In particular, it can be provided that the synchronization between the measurement phases of the sensor104and the phases of operation of the screen in which it does not emit any light is carried out by combining the embodiment ofFIG.4or5with the embodiment ofFIG.6or7. Finally, the practical implementation of the embodiments and variants described herein is within the capabilities of those skilled in the art based on the functional description provided hereinabove. | 34,990 |
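Before leaving these embodiments, the role of the delay circuit described above can be sketched numerically. The snippet below is an illustration only: the calibration strategy (aligning a late indicated start with the following actual off-phase start) and the time values are assumptions, not the patent's method.

def required_delay(indicated_start, actual_starts):
    """Delay needed so the signal marks the next actual off-phase start."""
    upcoming = [t for t in actual_starts if t >= indicated_start]
    return upcoming[0] - indicated_start if upcoming else None

# The signal arrives at 0.46 ms, after the 0.40 ms phase actually began;
# delaying it by ~3.94 ms aligns it with the following off phase at 4.40 ms.
print(round(required_delay(0.46, [0.40, 4.40, 8.40]), 2))  # -> 3.94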
11862113 | DESCRIPTION OF REFERENCE NUMERALS 10, driving array; 11, thin film transistor; 12, first signal line; 13, second signal line; 14, third signal line; 15, flat layer; 16, first electrode; 17, second electrode; 110, first substrate; 111, buffer layer; 112, P-type silicon layer; 113, insulating layer; 114, dielectric layer; 115, source electrode; 116, gate electrode; 117, drain electrode; 20, first light-emitting device; 21, second substrate; 22, N-type layer; 23, quantum well; 24, P-type layer; 25, P-type bonding pad; 26, N-type bonding pad; 27, cathode; 28, anode; 30, second light-emitting device; 31, light-emitting chip; 32, light blocking layer; 01, eyes of a user; 200, pixel unit; 201, red light emitting device; 202, green light emitting device; and 203, blue light emitting device. DETAILED DESCRIPTION OF THE EMBODIMENTS In order to facilitate the understanding of the present disclosure, a more complete description of the embodiments of the present disclosure will now be made with reference to the associated drawings. Exemplary implementations of the present disclosure are illustrated in the drawings. However, the present disclosure may be realized in many different forms and is not limited to the implementations described herein. Rather, the implementations are provided to facilitate a more thorough and complete understanding of the content of the present disclosure. Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by those having ordinary skill in the art to which the present disclosure belongs. The terms used in the specification of the present disclosure are for the purpose of describing the exemplary implementations only and are not intended to be limiting of the present disclosure. It is to be understood that when an element (such as a layer, film, region, or substrate) is described as being ‘on’ another element, the element may be directly on the other element, or an intervening element may also be present between this element and the other element. Moreover, in the specification and claims, when an element is described as being ‘connected’ to another element, the element may be ‘directly connected’ to the other element or ‘connected’ to the other element through a third element. As described in the background, a display panel in the related art does not have both an anti-peep mode and a normal display mode, and switching of the display panel between the two modes is not supported. Based on this, the embodiments of the present disclosure provide a solution capable of solving the above technical problem, the details of which will be set forth in the following embodiments. In an implementation of the present disclosure, a display panel is provided, as shown inFIGS.1-5. The display panel includes a driving array10, first light-emitting devices20and second light-emitting devices30. The first light-emitting devices20are arranged on one side of the driving array10and are electrically connected with the driving array10, and the second light-emitting devices30are located between at least two of the first light-emitting devices20and are electrically connected with the driving array10.
In a case where the first light-emitting devices20are in a working state and the second light-emitting devices30are in a first state, the display panel is in a first mode, and in a case where the first light-emitting devices20are in a working state and the second light-emitting devices30are in a second state, the display panel is in a second mode, wherein a visual angle of the display panel in the first mode is greater than 0 and smaller than a visual angle of the display panel in the second mode. In a case where the driving array10controls the first light-emitting devices20and the second light-emitting devices30to work, the display panel is in the first mode. In a case where the driving array10controls the first light-emitting devices20to work and the second light-emitting devices30to not work, the display panel is in the second mode. When the display panel is in the first mode, the visual angle is equal to a preset angle, and the preset angle is greater than 0 degree and smaller than the visual angle in the second mode. In the first mode, a user in the preset angle range can see display information of the display panel, as shown inFIG.2(InFIG.2, in order to distinguish light emitted by the first light-emitting devices20from light emitted by the second light-emitting devices30, the light emitted before anti-peep is represented by dotted lines, and the light emitted by the first light-emitting devices20is represented by solid lines). A user observing at a position beyond the preset angle range (i.e., in a case where the preset angle is 100 degrees, a position with the angle smaller than 40 degrees on the right side of a normal line of a vertical plane and a position with the angle greater than 130 degrees on the left side of the normal line of the vertical plane) cannot see the display information of the display panel. InFIG.2, light information received by eyes01of the user at the position with the angle greater than the preset angle includes both the light information emitted by the first light-emitting devices20and the light information emitted by the second light-emitting devices30, so the user cannot normally see the corresponding display information. When the display panel is in the second mode, the visual angle is larger, so that the display panel is in the normal display mode, as shown inFIG.1, the user observing at an angle in the larger visual angle range can see the display information of the display panel, and thus the normal display of the display panel is guaranteed. According to the display panel, the second light-emitting devices30, electrically connected with the driving array10, are additionally arranged in the display panel, and the driving array10can control the working states of the first light-emitting devices20and the second light-emitting devices30to make the display panel work in the first mode (anti-peep mode) or the second mode, so that the switching of the display panel in different visual angle modes is realized. By virtue of the scheme, the display panel is supported to have the anti-peep mode and the normal display mode, and the problem that anti-peep display cannot be realized due to the fact that the display panel in the related art does not have the first mode and the normal display mode is solved. It is to be noted that the display panel in the embodiments of the present disclosure is a display panel which can emit light to realize display without a backlight source. 
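A toy model may help fix the two modes just described. The sketch below is purely illustrative: the angle values and function names are invented, and the real control is performed by the driving array10rather than by software of this kind. The first mode drives both device groups and yields the smaller preset visual angle, while the second mode drives only the first light-emitting devices20.

FIRST_MODE, SECOND_MODE = "anti-peep", "normal"

def device_states(mode):
    """Which device groups the driving array keeps working in each mode."""
    if mode == FIRST_MODE:
        return {"first_devices": True, "second_devices": True}
    return {"first_devices": True, "second_devices": False}

def visual_angle_deg(mode, preset_deg=100.0, max_deg=170.0):
    # The preset angle is greater than 0 and smaller than the second-mode angle.
    return preset_deg if mode == FIRST_MODE else max_deg

assert visual_angle_deg(FIRST_MODE) < visual_angle_deg(SECOND_MODE)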
The display panel in the embodiments of the present disclosure may be an Organic Light-Emitting Diode (OLED) display panel, a Micro-led display panel, or a Mini-led display panel, and the specific type is not limited herein. The visual angle in the embodiments of the present disclosure refers to an angle at which the user can clearly view all content on the screen from different directions. In addition, it is to be noted that the visual angle of the embodiments of the present disclosure includes at least one of a horizontal visual angle and a vertical visual angle, that is, the display panel of the embodiments of the present disclosure can realize horizontal anti-peep, or can realize vertical anti-peep, or can realize both horizontal and vertical anti-peep, which can specifically be set according to actual situations. In the actual application process, the preset angle of the embodiments of the present disclosure may be determined according to the actual situations so as to adapt to different application scenarios. For example, the preset angle may be determined according to the size of the display panel and the like. As the visual angle includes at least one of the horizontal visual angle and the vertical visual angle, the corresponding preset angle also includes at least one of the horizontal preset angle and the vertical preset angle. Under the condition that the horizontal preset angle and the vertical preset angle need to be determined at the same time, the corresponding horizontal preset angle and vertical preset angle may be the same or different. In order to meet the requirements of most display panels, in an exemplary embodiment of the present disclosure, the visual angle includes a horizontal visual angle which is greater than or equal to 90 degrees (namely, 45 degrees on each of the left and right of the vertical normal of the display panel), so that the user can normally read the information displayed by the display panel at a front view without being affected by light emitted by the second light-emitting devices30. When the user observes the display panel at a large view angle (an angle smaller than 45 degrees on the right side or greater than 135 degrees on the left side, where the left and the right herein are the directions when facing the display panel), the user cannot read the information due to the influence from the light emitted by the peek prevention components, i.e., the second light-emitting devices30. In another exemplary embodiment of the present disclosure, the visual angle of the display panel in the second mode is the maximum visual angle of the display panel. In this way, it is further guaranteed that the second mode of the display panel is the normal display mode. In the actual application process, through device design, the light emitting angle of the side surfaces of second light-emitting devices30can be controlled so as to adjust and control the visual angle of the display panel to adapt to different application scenarios. In an embodiment, an optimal anti-peep state is to allow the visibility only from the front view. In an embodiment of the present disclosure, the intensity of light emitted by second light-emitting devices30in a first state is greater than the intensity of light emitted by the second light-emitting devices30in a second state. 
In other words, the second light-emitting devices30emit more light in the first state, and the light information received by eyes01of a user at a position with the angle greater than the preset angle includes light emitted by the first light-emitting devices20and a large amount of light emitted by the second light-emitting devices30, so that the corresponding display information cannot be seen. The second light-emitting devices30emit less light in the second state, and thus a user can see the corresponding display information at a position with the angle greater than the preset angle without being substantially affected by the light emitted by the second light-emitting devices30. In another embodiment of the present disclosure, a first state of the second light-emitting devices30is an on state, namely, the first state of the second light-emitting devices30is actually a working state; and a second state of the second light-emitting devices30is an off state, namely, the second state of the second light-emitting devices30is a non-working state. In this way, the user can view from any position (angle) without being affected by the second light-emitting devices30when the second light-emitting devices30are in the second state, and a better display effect of the display panel in the second mode can be achieved. In one exemplary embodiment of the present disclosure, as shown inFIG.2, more than 90% of the light emitted by the second light-emitting devices30is emitted from side surfaces, wherein the side surfaces are surfaces, parallel to a thickness direction of the driving array10, of the second light-emitting devices30. In the first mode, the light-emitting chips31work and most of the light is emitted from the side surfaces of the second light-emitting devices30. When a user observes the screen at a view angle beyond the preset angle range, human eyes are affected by the light emitted from the side surfaces of the second light-emitting devices30while collecting the signals of the normal display picture; the display picture is distorted under the influence of this light, so that the display information cannot be read, and the purpose of anti-peep is achieved. When the user observes at a view angle within the preset angle range, very little light is emitted from the front surfaces of the second light-emitting devices30, so the user can normally read the display information without being influenced by the second light-emitting devices30, and the normal reading of the display information by the user is further guaranteed. In the second mode, the second light-emitting devices30do not work, so that the second light-emitting devices30do not emit light. When the user observes the screen at a view angle beyond the preset angle range, human eyes are not affected by interfering light while collecting the signals of the normal display picture, so that the user can normally read the information. When the user observes at a view angle within the preset angle range (including the preset angle), the user is likewise not affected by the interfering light and can normally read the information.
In order to simplify the structure of the second light-emitting devices30, enable a manufacturing process of the second light-emitting devices30to be compatible with a manufacturing process of first light-emitting devices20, and further simplify a manufacturing process of a display panel, in an embodiment of the present disclosure, as shown inFIGS.1,2,5and6, each second light-emitting device30includes a light-emitting chip31and a light blocking layer32, wherein the light blocking layer32is arranged on the surface, far away from the driving array10, of the light-emitting chip31, and the second light-emitting device30is arranged between the two first light-emitting devices20. In the first mode, the driving array10controls the light-emitting chips31to control the second light-emitting devices30to be in the first state. In the second mode, the driving array10controls the light-emitting chip31to control the second light-emitting devices30to be in the second state. The light blocking layer32is configured to block light from being emitted from the front surface of the second light-emitting device30, so as to further guarantee that most of light emitted by the second light-emitting devices30is emitted from side surfaces, and therefore, it is further guaranteed that a user in a preset angle range can see clear content. In an embodiment of the present disclosure, the first light-emitting devices20are inversely mounted on a surface of the driving array10, that is, electrodes of the first light-emitting devices20are arranged in contact with the driving array10, while the distance between a substrate (second substrate21inFIG.7) in the first light-emitting devices20and the driving array10is the maximum, seeFIG.5andFIG.7. In the embodiment, the second light-emitting devices30are inversely mounted on the surface of the driving array10, that is, electrodes of the second light-emitting devices30are arranged in contact with the driving array10, while the distance between the substrate (second substrate21inFIG.7) and the driving array10is the maximum. In order to further ensure that light emitted from front surfaces of the second light-emitting devices30is reduced so that substantially all of the light is emitted from side surfaces, in an embodiment of the present disclosure, the reflectivity or absorptivity of a material of the light blocking layer32to light is greater than 95%. For example, the material of the light blocking layer32may be a carbon-doped oxygen-containing resin, metallic silver, or the like. The material of the light blocking layer32of the embodiment of the present disclosure may be any suitable material in the related art, and those having ordinary skill in the art may select a suitable material to form the light blocking layer32of the embodiment of the present disclosure according to practical situations. In one exemplary embodiment of the present disclosure, the material of the light blocking layer32includes at least one of an organic material and a metal material. In an exemplary embodiment, the material of the light blocking layer32is a black matrix material, such as a black resin doped with Cr, CrOx, or the like. In the actual application process, there are multiple first light-emitting devices20, and the multiple first light-emitting devices20may be light-emitting devices of the same color or light-emitting devices of different colors. 
Specifically, red light emitting devices201, green light emitting devices202or blue light emitting devices203may be selected as the first light-emitting devices20according to actual needs. In order to meet general display requirements and achieve color display, in an embodiment, a display panel includes pixel units200located in a display region, each pixel unit200at least includes multiple first light-emitting devices20, and the multiple first light-emitting devices20at least include red light emitting devices201, green light emitting devices202and blue light emitting devices203. In the embodiment shown inFIG.3andFIG.4, each pixel unit200includes three above-described first light-emitting devices20, respectively a red light emitting device201, a green light emitting device202and a blue light emitting device203. In the embodiment, the display region also includes second light-emitting devices30. In another embodiment of the present disclosure, as shown inFIG.3andFIG.4, there are multiple second light-emitting devices30, so that more light can be emitted during working of the second light-emitting devices30. Any one of the second light-emitting devices30is located between two adjacent rows of pixel units200and between two adjacent columns of pixel units200, and the second light-emitting devices30are uniformly distributed, which enables more light to be uniformly emitted during the operation of the second light-emitting devices30, so that when a user observes a screen at a view angle beyond a preset angle range, the light emitted by the second light-emitting devices30can further influence signals of a normal display picture, so that anti-peep effect is better achieved. The first light-emitting devices20of the embodiments of the present disclosure may be any suitable light-emitting devices in the related art, and those having ordinary skill in the art may select the suitable first light-emitting devices20according to practical situations. In an exemplary embodiment of the present disclosure, the first light-emitting devices20are one of LEDs, OLEDs, Micro-leds or Mini-leds. The light-emitting devices are self-luminous devices and are in pure solid display, so that the visual angle of the display panel is larger and may reach about 170 degrees. Similarly, the second light-emitting devices30of the present disclosure may be any suitable light-emitting devices in the related art, and those having ordinary skill in the art may select the suitable second light-emitting devices30according to practical situations. In an exemplary embodiment, second light-emitting devices30are one of LEDs, OLEDs, Micro-leds or Mini-leds. In an exemplary embodiment, the first light-emitting devices20and the second light-emitting devices30are the same light-emitting devices, so that manufacturing processes of the first light-emitting devices20and the second light-emitting devices30are compatible, and the manufacturing process of a display panel is simplified. In an exemplary embodiment of the present disclosure, at least one of a first light-emitting device20and a second light-emitting device30is as shown inFIG.7. Each light-emitting device includes a second substrate21, an N-type layer22, a quantum well23, a P-type layer24, and a P-type bonding pad25which are arranged in sequence, and further includes an N-type bonding pad26, a cathode27and an anode28. 
The N-type bonding pad26is located on the surface, away from the substrate, of the N-type layer22, the cathode27is located on the surface, away from the N-type layer22, of the N-type bonding pad26, and the anode28is located on the surface, away from the P-type layer24, of the P-type bonding pad25. In an exemplary embodiment, both the N-type layer22and the P-type layer24are GaN layers. Since the light-emitting chip31in the second light-emitting device30in the embodiments of the present disclosure only needs to emit interfering light to achieve an anti-peep function, and does not need to display specific contents, therefore, the second light-emitting devices30at least include one of the following: white light emitting devices, green light emitting devices, red light emitting devices or blue light emitting devices. Those having ordinary skill in the art may select the suitable light-emitting devices according to practical situations. In practical applications, the driving array10of the embodiments of the present disclosure may be any suitable driving array10, for example, a driving array10arranged on a PCB, a driving array10arranged on a glass substrate, or a driving array10arranged on a flexible substrate. The type of the substrate carrying the driving array10is not limited herein. It should be understood that the type of the display panel of the embodiments of the present disclosure may be rigid or flexible, transparent or opaque, which is not limited herein. Those having ordinary skill in the art may design a suitable driving array10to control the working of the first light-emitting devices20and the second light-emitting devices30according to practical situations. In an exemplary embodiment of the present disclosure, as shown inFIG.4andFIG.5, the driving array10includes a thin film transistor11, a first signal line12, a second signal line13and a third signal line14. The first signal line12is electrically connected to anodes28of second light-emitting devices30, the second signal line13is electrically connected to a source electrode115of the thin film transistor11, a drain electrode117of the thin film transistor11is electrically connected to anodes28of first light-emitting devices20, and the third signal line14is electrically connected to cathodes27of the first light-emitting devices20and cathodes27of the second light-emitting devices30, that is, the first light-emitting devices20and the second light-emitting devices30share the cathodes27. In this way, by controlling voltages of the first signal line12and the third signal line14, the voltage difference between the cathodes27and the anodes28of the second light-emitting devices30can be controlled, so that the second light-emitting devices30can be controlled to work or not work. By controlling voltages of the second signal line13and the third signal line14, the first light-emitting devices20may be controlled to work or not work. By controlling whether the first light-emitting devices20work or not and controlling whether the second light-emitting devices30work or not, the display panel may be controlled to be in a first mode or a second mode, namely, whether the first mode is activated or not is controlled. 
In the embodiment, compared with the driving array10in the related art, the first signal line12is added, the third signal line14is in contact with the cathodes27of the second light-emitting devices30, so that the driving array10is relatively simple in structure, the space of a back plate is saved, and the miniaturization development of the display panel is facilitated. Of course, in practical applications, the cathodes27of the second light-emitting devices30and the cathodes27of the first light-emitting devices20may also not be the same cathodes. In another embodiment of the present disclosure, which is not shown in the figure, the display panel may further include a fourth signal line and a fifth signal line which are spaced apart from the driving array10, the fourth signal line is electrically connected with the anodes28of the second light-emitting devices30, the fifth signal line is electrically connected with the cathodes27of the second light-emitting devices30, namely, the cathodes27of the second light-emitting devices30and the cathodes27of first light-emitting devices20are respectively provided with a signal line, and the working states of the second light-emitting devices30and the first light-emitting devices20are controlled by controlling the corresponding signal lines. Since the second light-emitting devices30are only used as an anti-peep function and do not participate in normal display, the second light-emitting devices30are designed as Passive Matrix (PM) drive. Of course, the second light-emitting devices30are not limited to adopt the PM drive, and other driving manners, such as an active driving manner, are also possible. FIG.8(a)andFIG.8(b)are waveform diagrams of anti-peep signals VAP of the first signal line12and signals VSS of the third signal line14, respectively. The VSS is shared with a normal signal of the display panel and is a constant negative voltage signal. When anti-peep is not needed, VAP is consistent with VSS, the second light-emitting devices30do not work, and the light leakage phenomenon (noise reduction effect) caused by the voltage coupling effect can be prevented. When the first mode needs to be activated, VAP is switched from a low level to a high level, the anti-peep components, namely the second light-emitting devices30, emit light due to flowing of current under the action of the voltage, then an anti-peep function at large view angle is achieved. The magnitude of the high-level voltage of the anti-peep signal may be adjusted according to the actual anti-peep effect. In specific applications, for simplicity of structure and ease of manufacturing, as shown inFIG.5, in an embodiment of the present disclosure, the driving array10may further include a first electrode16and a second electrode17, the first signal line12is electrically connected to the anodes28of the second light-emitting devices30through the first electrode16, and a drain electrode117of a thin film transistor11is electrically connected to the anodes28of the first light-emitting devices20through the second electrode17. In yet another embodiment of the present disclosure, the first signal line12and the second signal line13are located on the surface of a dielectric layer114of the thin film transistor11, for example, as shown inFIG.5, the first signal line12and the second signal line13are located on an upper surface of the dielectric layer114(that is, on the surface away from an insulating layer113). 
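The VAP/VSS drive described above can be summarized in a short sketch. The voltage values below are placeholders (the patent gives none), and the simple forward-voltage model of the second light-emitting devices30is a deliberate simplification made for illustration.

VSS = -3.0  # constant negative rail shared with the normal display signal (volts)

def vap_level(anti_peep_enabled, vap_high=4.0):
    """VAP follows VSS when anti-peep is off and steps to a high level to enable it."""
    return vap_high if anti_peep_enabled else VSS

def second_devices_emit(anti_peep_enabled, forward_voltage=2.5):
    bias = vap_level(anti_peep_enabled) - VSS  # anode-to-cathode voltage difference
    return bias >= forward_voltage

assert not second_devices_emit(False)  # VAP == VSS: no current, no light leakage
assert second_devices_emit(True)       # VAP high: interfering light is emitted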
With the first signal line12and the second signal line13arranged in this way, a manufacturing process of the driving array10may be more compatible with an existing manufacturing process of the driving array10, and a new process flow is basically not required to be developed. In order to better protect the structure of the thin film transistor11and ensure that the display panel has a long service life, in an embodiment of the present disclosure, as shown inFIG.5, the driving array10may further include a flat layer15. The flat layer15is located on the side, away from an insulating layer113, of a dielectric layer114. The flat layer15is provided with two through holes, respectively a first through hole and a second through hole; at least part of the first electrode16is located in the first through hole so as to be in contact with the first signal line12, and at least part of the second electrode17is located in the second through hole so as to be in contact with a drain electrode117of the thin film transistor11. An exemplary structure of the thin film transistor11is also shown inFIG.5. The exemplary structure of the thin film transistor11includes a first substrate110, a buffer layer111, a P-type silicon layer112, an insulating layer113and the like, and the specific positional relation between the components is shown inFIG.5. The gate electrode116inFIG.5is a scanning line, as specifically shown inFIG.4. Of course, the thin film transistor11of the embodiment of the present disclosure is not limited to the structure shown inFIG.5, and may be of any other structure. In still another embodiment which is not shown in the figure, the display panel may further include a driving chip (driving IC). The driving chip is electrically connected with the first signal line12, the second signal line13, the third signal line14and the gate electrode116of the thin film transistor11respectively; specifically, the driving chip may be connected to the display panel through a metal wire of the driving array10. The driving chip is configured to control the voltage of the first signal line12, the voltage of the second signal line13, the voltage of the third signal line14, and the voltage of the gate electrode116to control the working of the first light-emitting devices20and the second light-emitting devices30, and therefore, the display panel is controlled to be in the first mode or the second mode. In another typical implementation of the present disclosure, an electronic device is provided, which includes a display panel, and the display panel is any one of the display panels described above. Because the electronic device includes the display panel, the electronic device has the first mode and the second mode, and information security can be ensured in public places or specific places. In still another embodiment of the present disclosure, the electronic device may further include a control unit configured to control the display panel to be in one of the first mode and the second mode. Specifically, the control unit is electrically connected with the driving chip of the display panel, and the working of the first light-emitting devices20and the second light-emitting devices30is controlled by controlling the working of the driving chip. In an exemplary implementation process, the electronic device may further include a control structure; after the control structure receives a preset operation, the control unit controls the driving chip to work according to information corresponding to the preset operation.
For example, the control structure may be a control key, and the first mode may be activated and deactivated through a pressing on the control key. The electronic device of the embodiment of the present disclosure may be any device including a display panel, for example, a computer, a mobile phone or a tablet personal computer. FIG.9shows the content observed from a display panel of a notebook computer in a second mode from a large view angle. As shown in the figure, the picture information can be normally read from the large view angle, and a small square in the figure represents the displayed content.FIG.10shows the content observed from a display panel of a notebook computer in a first mode from a large view angle. As shown in the figure, due to interfering light emitted from the second light-emitting devices30, the large-view-angle picture is distorted, so that display signals cannot be read normally, that is, no information is seen. It is to be understood that the application of the embodiments of the present disclosure is not limited to the examples described above, and modifications or variations may be made in light of the above description by those having ordinary skill in the art, all of which are intended to fall within the scope of the appended claims. | 31,460 |
11862114 | DETAILED DESCRIPTION The present disclosure may be understood by reference to the following detailed description, taken in conjunction with the drawings as described below. It is noted that, for purposes of illustrative clarity and ease of understanding, the various drawings of this disclosure show a portion of the display device, and certain elements in the various drawings may not be drawn to scale. In addition, the number and dimension of each element shown in the drawings are only illustrative and are not intended to limit the scope of the present disclosure. Certain terms are used throughout the description and following claims to refer to particular components. As one skilled in the art will understand, electronic equipment manufacturers may refer to a component by different names. This document does not intend to distinguish between components that differ in name but not function. In the following description and in the claims, the terms “include” and “comprise” are used in an open-ended fashion, and thus should be interpreted to mean “include, but not limited to . . . ” When a component such as a layer or a region is referred to as being “on” another component (or a variant thereof) or “extending to” another component, it may be directly on, or directly extend to, the other component, or other components may be present between them. On the other hand, when a component is referred to as being “directly on” another component (or a variant thereof) or “directly extending to” another component, there are no components present between them. In addition, when a component is referred to as being “coupled to/with” another component (or a variant thereof), it may be directly connected to the other component, or it may be indirectly connected (such as electrically connected) to the other component through another component or components. It should be understood that when an element or layer is referred to as being “on” or “connected to” another element or layer, it may be directly on or directly connected to the other element or layer, or intervening elements or layers may be present. In contrast, when an element is referred to as being “directly on” or “directly connected to” another element or layer, there are no intervening elements or layers present. When the terms “include”, “comprise” and/or “have” are used in the description of the present disclosure, they specify the presence of the corresponding features, regions, steps, operations and/or components, but do not preclude the presence of one or more other features, regions, steps, operations and/or components. The display device of the present disclosure may be applied to electronic devices such as display equipment or tiled devices, but is not limited thereto. The tiled devices may be, for example, display tiled devices, or displays tiled with other devices such as antenna devices and sensing devices, but are not limited thereto. The display device of the present disclosure may be a curved surface display device or a bendable display device. A bendable display device means the device can be curved, bent, folded, stretched, flexed, or the like (generally referred to as “bendable” hereinafter). That is to say, the display device may have a curved surface or present a bent state when operating, and the display device may have a fixed curved surface shape or have different curved states according to usage requirements.
The embodiments of the display device of the present disclosure may be: a liquid crystal display (LCD) that is non-self-emitting, an organic light emitting diode display (OLED display), an inorganic light emitting diode display (LED display), a mini-LED display, a micro-LED Display, a quantum-dot LED display (QLED display) or an electro-phoretic display (EPD) that are self-emitting, and other display devices that can display images and pictures, but not limited thereto. It should be noted that the technical features in different embodiments described in the following can be replaced, recombined, or mixed with one another to constitute another embodiment without departing from the spirit of the present disclosure. Please refer toFIG.1.FIG.1is a partial sectional-view schematic diagram of an embodiment of a display device according to the present disclosure. This embodiment is an embodiment of a liquid crystal display. In this embodiment, the display device100includes a display panel104that is non-self-emitting, that is, a liquid crystal display panel, so the display device100can include a backlight module102. In other embodiments, the display device100may include a display panel104that is self-emitting, and the display device100may not include the backlight module102because the display panel104itself has dual functions of light emitting and display. In this embodiment, the backlight module102may produce incident light L0, which is used as a backlight required by the display panel104for displaying images. The backlight module102may include various suitable light emitting elements, such as various kinds of light emitting diodes (LEDs) or cold cathode fluorescent lamps (CCFLs). The types of the light emitting diodes can include: inorganic light emitting diodes (inorganic LEDs), mini-LEDs, micro-LEDs, organic light emitting diodes (OLEDs) or quantum dot light emitting diodes (QLEDs), but not limited thereto. The display panel104has a first substrate1041, a second substrate1042and a liquid crystal layer1043. The first substrate1041is disposed on a side of the display panel104that is closer to the backlight module102. The liquid crystal layer1043is disposed between the first substrate1041and the second substrate1042and surrounded and sealed between the two substrates by a sealant, so as to be used for modulating the incident light from the backlight module102. The first substrate1041and the second substrate1042may be respectively a thin glass substrate or a substrate including organic polymer materials that are bendable, such as a polyethylene terephthalate (PET) substrate, a polyimide (PI) substrate or a polyethylene naphthalate (PEN) substrate, but not limited thereto. In this embodiment, the outer surface of the second substrate1042may be used as a display surface100aof the display panel104. In other embodiments, the outer surface of the first substrate1041may be independently used as a display surface of the display panel104, or the display panel104may have two display surfaces which are the outer surface of the second substrate1042and the outer surface of the first substrate1041. In this embodiment, the display panel104may further include an array circuit layer1044disposed on the inner surface of the first substrate1041and a color filter layer1045disposed on the inner surface of the second substrate1042. 
In other embodiments, the array circuit layer1044may also be disposed on the inner surface of the second substrate1042, the color filter layer1045may also be disposed on the inner surface of the first substrate1041alternatively, or the display panel104does not include the color filter layer1045. The array circuit layer1044may include multiple-layers of conductive layers, insulating layers, and/or a semiconductor layer to construct, for example, switching elements (e.g., thin film transistors, TFTs), electronic elements such as capacitors, wires or circuits, etc. In this embodiment, the color filter layer1045may include red color filters CFR, green color filters CFG and blue color filters CFB, which are arranged side by side along the direction D1, and a light shielding layer (or referred to a black matrix layer) BM may be disposed between the red filter CFR, the green filter CFG and the blue filter CFB that are adjacent. In other embodiments, the color filter layer1045may include a red quantum-dot color converting layer and a green quantum-dot color converting layer (when the incident light L0of the backlight module102is blue light), or the color filter layer1045may include a red quantum-dot color converting layer, a green quantum-dot color converting layer and a blue quantum-dot color converting layer (when the incident light L0of the backlight module102is ultraviolet light) or may include fluorescence materials, phosphor materials or other suitable materials, and the materials thereof may be provided in any arrangement or combination, but not limited thereto. The display panel104may further include spacers (not shown inFIG.1) disposed on the first substrate1041or the second substrate1042, that is, disposed between the first substrate1041or the second substrate1042, in order to support the gap distance of the liquid crystal layer1043. The spacers may be columnar, wall-shaped or granular and arranged between the two substrates in a uniform or non-uniform type. In addition, various functional films (not shown inFIG.1) may be selectively attached to the outer surface of the first substrate1041or the outer surface of the second substrate1042, and such functional film may be, for example, a polarizer film, a retarder film, an anti-reflective film, a privacy protection film, an anti-scattering film, an anti-static shielding film or a protective film, but not limited thereto. The display surface100a(attached with a functional film) of the display panel104includes a first region R1with a first curvature and a second region R2with a second curvature, and the second curvature is different from the first curvature. The curvature of portions within each region (R1or R2) is approximately identical, and the area of each region (R1or R2) may be greater than 0 and less than 100 square centimeters (cm2). The curvature is used to describe a curved extent of the geometry. The curvature of a certain point or a certain tiny region on the geometry may be a reciprocal of a curvature radius of its osculating circle. For example, in a straight line, the curvature radius of the osculating circle at any point or any tiny region (which is a region with an area approaching 0 in mathematical calculation, and may be a region within a tolerance range of the curvature measuring apparatus in practice) is infinite, and the curvature thereof is 0. 
Furthermore, for example, in a curved line, the curvature radius of the osculating circle at any point or any tiny region is R, the curvature thereof is 1/R, and the center of the osculating circle is referred to the curvature center. In practice, the curvature radius of the surface of the geometry may be measured by using apparatus such as a spherometer or an optical interferometer. As shown inFIG.1, the first curvature has a curvature center C1, and the curvature radius of the first curvature is represented as a first radius ra1. The second curvature has a curvature center C2, and the curvature radius of the second curvature is represented as a second radius ra2. The first radius ra1is different from the second radius ra2, and an absolute value of a difference between the first radius ra1of the first curvature and the second radius ra2of the second curvature may be greater than 10 millimeters (mm) (>10 mm). When the first region R1is a plane (a straight line) and the second region R2is a curved surface (a curved line), the first radius ra1is infinite, and the second radius ra2is finite, so the absolute value of the difference between the first radius ra1and the second radius ra2is infinite, and thus a range of the absolute value of the difference between the first radius ra1and the second radius ra2is greater than 10 mm and less than infinity. According to some embodiments, the curvature center C1and curvature center C2are disposed on the same side of the display surface100a, but not limited thereto. In some embodiments, the curvature center C1of the first region R1and the curvature center C2of the second region R2may be disposed on different sides of the display surface100a. In some embodiments, the first radius ra1may be greater than the second radius ra2. In another embodiment, the first radius ra1may be less than the second radius ra2.FIG.1shows that the first radius ra1is smaller than the second radius ra2as an example. In this embodiment, after the incident light L0emitted from the backlight module passes through the display panel104, a first light L1(also referred to as a first exiting light L1) is outputted from the first region R1on the display surface100a(including a functional film attached to the display surface100a) of the display panel104, and a second light L2(also referred to as a second exiting light L2) is outputted in the second region R2on the display surface100aof the display panel104. In other embodiments, the display device100has no backlight module102, and the display panel104is a self-emitting display panel. A first exiting light L1is directly outputted in the first region R1on the display panel104, and a second exiting light L2is directly outputted in the second region R2on the display panel104. The first exiting light L1has a first normal brightness in a normal view angle and a first oblique brightness in a specific oblique view angle. InFIG.1, the first normal brightness and the first oblique brightness are respectively represented by an arrow NB1and an arrow OB1. 
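The curvature relations above reduce to simple arithmetic, as the sketch below shows. The radius values are invented, and a flat region is modelled with an infinite radius (zero curvature), matching the text.

import math

def curvature(radius_mm):
    """Curvature is the reciprocal of the osculating-circle radius."""
    return 0.0 if math.isinf(radius_mm) else 1.0 / radius_mm

def regions_differ(ra1_mm, ra2_mm, min_diff_mm=10.0):
    """|ra1 - ra2| must exceed 10 mm for the two regions."""
    return abs(ra1_mm - ra2_mm) > min_diff_mm

print(curvature(500.0))                 # curved region: 1/R = 0.002 per mm
print(curvature(math.inf))              # flat region: 0.0
print(regions_differ(500.0, 800.0))     # True: 300 mm > 10 mm
print(regions_differ(500.0, math.inf))  # True: flat versus curved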
Regarding a tiny partial region (which is a region with an area approaching 0 in mathematical calculation, and may be a region within a tolerance range of the curvature measuring apparatus in practice) as a plane, the normal direction of this plane is referred to a direction of the normal view angle, and a direction of the oblique view angle is a direction having an included angle of greater than 0 degree (>0°) and less than 90 degrees (<90°) with respect to the direction of the normal view angle in such partial region. In this partial region, there is one direction of the normal view angle and a plurality of directions of the oblique view angles. In this embodiment, a range of an acute included angle θ between the direction of the oblique view angle and the direction of the normal view angle is greater than or equal to 20 degrees (≥20°) and less than or equal to 60 degrees (≤60°). In other embodiments, a range of an acute included angle θ between the direction of the oblique view angle and the direction of the normal view angle is greater than 0 degree (>0°) and less than or equal to 20 degrees (≤20°), or greater than or equal to 60 degrees (≥60°) and less than 90 degrees (<90°). In addition, the brightness represents the integral value of the intensity in the visible light wavelength range of the emission spectrum, the visible light wavelength range may range from 380 nanometers (nm) to 780 nanometers (nm), and the intensity may be absolute intensity or normalized intensity. The calculation exemplified in the present disclosure may all use absolute intensity or normalized intensity. In practice, various spectrometers or photometers may be used in combination with axial adjustment mechanisms or multi-angle probes, such that the measurement axis may be adjusted, the spectrum of the normal view angle (normal direction) or the oblique view angle (not normal direction) of the light-emitting surface of the object may be measured, and converted into the brightness. On the other hand, the second exiting light L2has a second normal brightness in the normal view angle and a second oblique brightness in the same oblique view angle described above. InFIG.1, the second normal brightness and the second oblique brightness are respectively represented by an arrow NB2and an arrow OB2. According to this embodiment, an absolute value of a difference between the first normal brightness NB1and the second normal brightness NB2is defined as an absolute value of a first difference, and a ratio of the absolute value of the first difference to the first normal brightness NB1is defined as a first ratio r1, that is, r1=(|NB1−NB2|)/NB1. An absolute value of a difference between the first oblique brightness OB1and the second oblique brightness OB2is defined as an absolute value of a second difference, and a ratio of the absolute value of the second difference to the first oblique brightness OB1is defined as a second ratio r2, that is, r2=(|OB1−OB2|)/OB1. Furthermore, in the display device100of the present disclosure, the first ratio r1is less than the second ratio r2(r1<r2). The first ratio r1is calculated from the absolute value of the difference between the first normal brightness NB1and the second normal brightness NB2(i.e., the absolute value of the first difference). The second ratio r2is calculated from the absolute value of the difference between the first oblique brightness OB1and the second oblique brightness OB2(i.e., the absolute value of the second difference). 
According to some embodiments of the present disclosure, a ratio of the first ratio r1to the second ratio r2(r1/r2) is greater than or equal to 0.1 and less than 1.0 (0.1≤r1/r2<1.0). The above-mentioned first ratio r1may be regarded as a brightness difference ratio between the first region R1and the second region R2in the normal view angle (also called a normal brightness difference ratio between the first region R1and the second region R2) of the display device100, and the second ratio r2may be regarded as a brightness difference ratio between the first region R1and the second region R2in the specific oblique view angle (also called an oblique brightness difference ratio between the first region R1and the second region R2) of the display device100. The brightness of each region in each view angle may be obtained by various spectrometers or photometers in combination with axial adjustment mechanisms or multi-angle probes. Furthermore, the measurement of each brightness may be based on light with a wavelength of 550 nm or 555 nm. The wavelength of 550 nm or 555 nm is generally the wavelength value at the position of the main light intensity peak of green light. Human eyes are more sensitive to green light (the stimulus value is higher than for red light and blue light). According to some embodiments, it is designed that, in the green light waveband, the light generated in different regions has a smaller difference in the direction of the normal view angle (as compared with the red light waveband and the blue light waveband), so as to improve the optical performance; that is, users in the direction of the normal view angle may perceive a smaller optical brightness difference. In the above-mentioned design, the brightness difference value or brightness difference ratio between the first region R1and the second region R2in the normal view angle can be less than the brightness difference value or brightness difference ratio between the first region R1and the second region R2in the oblique view angle, such that the images perceived by the users in the direction of the normal view angle have more consistent brightness, that is, better brightness uniformity. The brightness uniformity of the images viewed from the direction of the normal view angle may be higher than that of the images viewed from the direction of the oblique view angle.
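The two ratios can be made concrete with a worked example. The luminance readings below (say, cd/m2 measured at 550 nm) are invented for illustration; only the formulas r1=(|NB1−NB2|)/NB1 and r2=(|OB1−OB2|)/OB1 and the checks come from the disclosure.

def diff_ratio(reference, other):
    return abs(reference - other) / reference

NB1, NB2 = 500.0, 490.0  # normal brightness of the two regions (assumed values)
OB1, OB2 = 320.0, 280.0  # oblique brightness of the two regions (assumed values)

r1 = diff_ratio(NB1, NB2)    # (|NB1 - NB2|) / NB1 = 0.02
r2 = diff_ratio(OB1, OB2)    # (|OB1 - OB2|) / OB1 = 0.125
assert r1 < r2               # the normal-view difference ratio is the smaller one
assert 0.1 <= r1 / r2 < 1.0  # r1/r2 is about 0.16, inside the stated range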
Please refer toFIG.2.FIG.2is a schematic diagram of a display device applied to a vehicle electronic device according to the present disclosure. For example, the display device100of the present disclosure may be used as a display device in a vehicle electronic device ED of a vehicle500, such as an instrument cluster display (ICD), a center stack display (CID), a rear seat entertainment display (RSED), or a rearview mirror display (RMD), etc. When the vehicle electronic device ED is used as an instrument cluster display (ICD) for providing the driver with driving related information, it is generally disposed at a position closer to the driver, such as directly in front of the driver seat502(as shown inFIG.2) or obliquely in front of the driver seat502, and a maximum viewable angle range α in the vehicle is about 60 degrees, that is, the maximum angle viewed from the co-driver seat504. The main user of the vehicle electronic device ED is the driver. According to some embodiments, the display device100of the present disclosure may provide display images with more uniform brightness in the direction of the normal view angle, so the driver is more likely to obtain clear image information. In other words, the accuracy of the image in the normal view angle viewed by the driver will be higher than the accuracy of the image in the oblique view angle viewed by the co-driver. When the display device100is disposed in, for example, a vehicle500or other specific application sites, each region of the display surface100aof the display device may have different curvatures according to the requirements of the arrangement position. For example, the curvature of each region may meet the requirements of the vehicle body. According to some embodiments, in the display device100, the accuracy in the normal view angle can be higher than the accuracy in the oblique view angle. That is, the brightness uniformity in the normal view angle can be adjusted for each curvature and region. The position of the main user of the vehicle display device is fixed, so an optimized adjustment may be performed for the display in a specific view angle. Furthermore, in the display device100of some embodiments of the present disclosure, an absolute value of a difference between a chromaticity of the first exiting light L1in the normal view angle and a chromaticity of the second exiting light L2in the normal view angle is defined as an absolute value of a first chromaticity coordinate difference, and an absolute value of a difference between a chromaticity of the first exiting light L1in the oblique view angle and a chromaticity of the second exiting light L2in the oblique view angle is defined as an absolute value of a second chromaticity coordinate difference. The absolute value of the first chromaticity coordinate difference can be less than the absolute value of the second chromaticity coordinate difference. For example, but not limited thereto, in the specification of a CIE xy chromaticity diagram of the CIE 1931 XYZ color space (taking the x and y color coordinate values as indexes of chromaticity), the absolute value of the first chromaticity coordinate difference (such as an absolute value of a difference of x, or an absolute value of a difference of y) is greater than 0 and less than or equal to three thousandths (≤0.003), and the absolute value of the second chromaticity coordinate difference (such as an absolute value of a difference of x, or an absolute value of a difference of y) is greater than 0 and less than or equal to ten thousandths (≤0.01), but is still larger than the first chromaticity coordinate difference. In practice, to obtain the chromaticity, various spectrometers or colorimeters may be used in combination with axial adjustment mechanisms or multi-angle probes, such that the measurement axis may be adjusted. The spectrum of light from the output surface of the object in the normal view angle (normal direction) or the oblique view angle (non-normal direction) may be measured and converted into chromaticity coordinate values.
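The chromaticity criterion admits the same kind of numeric sketch. The CIE 1931 xy coordinates below are invented; the thresholds (≤0.003 per coordinate in the normal view, ≤0.01 in the oblique view) are the ones quoted above.

def xy_difference(c1, c2):
    """Per-coordinate absolute differences between two CIE xy chromaticities."""
    return (abs(c1[0] - c2[0]), abs(c1[1] - c2[1]))

normal_r1, normal_r2 = (0.3130, 0.3290), (0.3145, 0.3308)
oblique_r1, oblique_r2 = (0.3180, 0.3350), (0.3250, 0.3420)

d_normal = xy_difference(normal_r1, normal_r2)     # about (0.0015, 0.0018)
d_oblique = xy_difference(oblique_r1, oblique_r2)  # about (0.0070, 0.0070)
assert all(d <= 0.003 for d in d_normal)   # first chromaticity coordinate difference
assert all(d <= 0.01 for d in d_oblique)   # second chromaticity coordinate difference
assert all(n < o for n, o in zip(d_normal, d_oblique))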
On the other hand, in some embodiments, the spectrum or composition of each color light of the first exiting light L1 and the second exiting light L2 may not be completely the same. For example, the waveband brightness (the integration of light intensity over a certain waveband) of the first exiting light L1 in a wavelength range of 500 nanometers (nm) to 570 nanometers (nm) may be different from the waveband brightness of the second exiting light L2 in the wavelength range of 500 nm to 570 nm; that is, the waveband brightness of the first exiting light L1 and the second exiting light L2 in roughly the green waveband may be different. Further, for example, the waveband brightness of the first exiting light L1 in the wavelength range of 450 nm to 500 nm may be different from the waveband brightness of the second exiting light L2 in the wavelength range of 450 nm to 500 nm; that is, the waveband brightness of the first exiting light L1 and the second exiting light L2 in roughly the blue waveband may be different. As mentioned above, by adjusting the green waveband brightness or the blue waveband brightness of the first exiting light L1 and the second exiting light L2, users may receive more uniform brightness according to the perception extent of human eyes to different color lights.
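The waveband brightness described above is simply an integral of the measured spectrum over a band. The following sketch shows that computation with the trapezoidal rule; the two Gaussian sample spectra are hypothetical stand-ins for measured data.

```python
import numpy as np

def waveband_brightness(wavelengths_nm, intensity, lo_nm, hi_nm):
    """Integrate spectral intensity over [lo_nm, hi_nm] with the
    trapezoidal rule: the 'waveband brightness' described above."""
    mask = (wavelengths_nm >= lo_nm) & (wavelengths_nm <= hi_nm)
    return np.trapz(intensity[mask], wavelengths_nm[mask])

# Hypothetical 1 nm resolution spectra for the first and second exiting light.
wl = np.arange(380, 781, 1.0)
spectrum_L1 = np.exp(-((wl - 550) / 30.0) ** 2)        # green-peaked
spectrum_L2 = 0.9 * np.exp(-((wl - 552) / 32.0) ** 2)  # slightly dimmer

green_L1 = waveband_brightness(wl, spectrum_L1, 500, 570)
green_L2 = waveband_brightness(wl, spectrum_L2, 500, 570)
blue_L1 = waveband_brightness(wl, spectrum_L1, 450, 500)
blue_L2 = waveband_brightness(wl, spectrum_L2, 450, 500)
print(f"green band: L1={green_L1:.2f}, L2={green_L2:.2f}")
print(f"blue  band: L1={blue_L1:.2f}, L2={blue_L2:.2f}")
```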
Please refer to FIG. 3. FIG. 3 is a sectional-view schematic diagram of another embodiment of a display device according to the present disclosure. The display device 100′ in this embodiment has a first curvature in a first region R1 and a second curvature in a second region R2, respectively, and a curvature center C1 of the first curvature and a curvature center C2 of the second curvature are respectively disposed on different sides of the display device 100′. The curvature radius of the first curvature is represented as a first radius ra1, and the curvature radius of the second curvature is represented as a second radius ra2. The first radius ra1 is different from the second radius ra2. For example, the first radius ra1 is less than the second radius ra2, but not limited thereto. The relative brightness relationship between the exiting light in the first region R1 and the second region R2 may be referred to the previous embodiment, and will not be described herein. The features in different embodiments can be mixed or combined with one another without departing from or violating the spirit of the present disclosure.

Please refer to FIG. 4 and FIG. 5. FIG. 4 is a flowchart of an embodiment of a design method of a display device according to the present disclosure. FIG. 5 is a manufacturing process schematic diagram of the method shown in FIG. 4. The manufacturing method of the display device provides a design method of the display device, which comprises the following steps:

Step S100: Providing a prototype display device 100t. The structure of the prototype display device 100t may be similar to the display device 100 shown in FIG. 1, and the prototype display device 100t includes a display panel having a liquid crystal layer. The prototype display device 100t may also be another type of display. In an initial state, the prototype display device 100t has a display surface of a plate shape, and the exiting light type in each region on the display surface is generally uniform, as shown in a flow (F1) of FIG. 5.

Step S102: Bending the display panel, for example, according to the predetermined curved surface state of the product, to make a first region Rt1 of the display panel have a first curvature and a second region Rt2 of the display panel have a second curvature. The second curvature is different from the first curvature. In the bent state, since the curvatures of different regions of the display panel are different, the incident light from the backlight module can have different light paths in the liquid crystal layer, or the emitting directions of the light emitting sources of the display panel itself may be different. Thus, the display panel can have different exiting light types in different regions, and the light can have different brightness or chromaticity at different angles, as shown in a flow (F2) of FIG. 5.

Step S104: Performing brightness or chromaticity measurement at each angle on the display panel that has been bent, to obtain an absolute value of a normal brightness difference between the first region Rt1 and the second region Rt2.

Step S106: Adjusting the structural design of the display device according to the absolute value of the normal brightness difference to obtain a display device 100 with a better optical effect, as shown in a flow (F3) of FIG. 5. The adjustment method is to reduce the absolute value of the normal brightness difference. Relevant structures of the first region R1 and the second region R2 of the display device 100, such as the absolute value of the normal brightness difference and the curvature definition, may be referred to the relevant description of FIG. 1.

From the above description, the manufacturing method includes bending the prototype display device 100t into the predetermined curved surface shape, ascertaining the exiting light brightness of different regions of the display panel after bending, especially the light brightness in the normal view angle, and then adjusting the structural design of the display device to improve the uniformity of the normal brightness. In another embodiment, the step S104 may further include measuring the oblique brightness of different regions of the display panel and comparing the oblique brightness differences of the regions, or measuring the normal chromaticity and the oblique brightness of different regions and mainly reducing the normal chromaticity difference. It should be noted that the present disclosure focuses on making the normal brightness (or chromaticity) of each region tend to be uniform, more than on the uniformity of the oblique brightness (or chromaticity). Therefore, in order to reduce the absolute value of the normal brightness difference (or the absolute value of the color coordinate difference) of different regions, various structural or operational designs may be made for the display device 100. In such a design, the absolute value of the normal brightness difference (or the absolute value of the color coordinate difference) between the first region R1 and the second region R2 of the display panel of the display device 100 should be smaller than the absolute value of the normal brightness difference (or the absolute value of the color coordinate difference) between the first region Rt1 and the second region Rt2 of the prototype display device 100t. In different embodiments, the absolute value of the oblique brightness difference between the first region R1 and the second region R2 of the display panel of the display device 100 may be larger than, less than, or equal to the absolute value of the oblique brightness difference between the first region Rt1 and the second region Rt2 of the prototype display device 100t.
That is to say, when manufacturing the display device 100, the principle is mainly to reduce the absolute value of the normal brightness difference (or the absolute value of the color coordinate difference) between different regions of the display panel, and the absolute value of the oblique brightness difference (or the absolute value of the color coordinate difference) may also be selectively reduced. If both cannot be achieved, the objective is to reduce the absolute value of the normal brightness difference (or the absolute value of the color coordinate difference) between different regions first. The normal brightness differences or the color coordinate differences mentioned above are all compared with each other in absolute values. The structural parameters for adjusting brightness uniformity may include, but are not limited to, the backlight source brightness, the inclined angle of the transparent electrodes of the sub-pixels, the groove design of the optical compensation film, the pattern design and aperture ratio of the light shielding layer, the distribution density and sizes of the spacers, the aperture ratio of each color sub-pixel, the thicknesses of the color filters of different colors, the thickness of the liquid crystal layer, etc. In addition, according to the present disclosure, in the step S100, the provided prototype display device 100t may also, according to the later predetermined bent shape, be designed differently for different regions by using one or more of the structural parameters mentioned above in advance, so as to perform a pre-compensation design inside the display device. For example, by designing different thicknesses of the liquid crystal layer in different regions of the display panel, the traveling path lengths of the light in the liquid crystal layer are more uniform after bending, but not limited thereto.
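For illustration, the S100 to S106 flow can be viewed as a measure-and-adjust loop. The sketch below is a schematic rendering only: the measurement and redesign steps are physical processes, represented here by placeholder callables, and the tolerance, step size, and luminance values are hypothetical.

```python
def design_iteration(measure_normal_brightness, adjust_structure,
                     tolerance=0.01, max_rounds=10):
    """Sketch of the S100-S106 flow: after bending, measure the normal
    brightness of regions R1/R2 and keep adjusting structural parameters
    until the relative normal brightness difference is small enough."""
    for _ in range(max_rounds):
        b_r1, b_r2 = measure_normal_brightness()
        diff = abs(b_r1 - b_r2)
        if diff / max(b_r1, b_r2) <= tolerance:
            return diff  # uniform enough in the normal view angle
        # Increase the output of the dimmer region (via pixel included
        # angle, aperture ratio, backlight drive, ...), per the options
        # listed above.
        adjust_structure(dimmer_region="R1" if b_r1 < b_r2 else "R2")
    return diff

# Toy usage: each adjustment raises the dim region's brightness by 2%.
state = {"R1": 480.0, "R2": 500.0}
final = design_iteration(
    lambda: (state["R1"], state["R2"]),
    lambda dimmer_region: state.__setitem__(
        dimmer_region, state[dimmer_region] * 1.02),
)
print(f"remaining normal brightness difference: {final:.1f}")
```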
Please refer to FIG. 6 and FIG. 7. FIG. 6 is a sectional-view schematic diagram of another embodiment of a display device according to the present disclosure. FIG. 7 is a partial enlargement top-view schematic diagram of a circuit array layer of the display device shown in FIG. 6. FIG. 6 only illustrates a part of the elements of the circuit array layer 1044, and the color filter layer 1045, the light shielding layer BM, and other elements of the display device are omitted. As shown in FIG. 6 and FIG. 7, each of the sub-pixels SPX of the display device 100 includes a metal conductive layer ML and a transparent conductive layer TL, respectively. The metal conductive layer ML shown in FIG. 7 may be a data line (or alternatively a gate line, a common electrode line, or a power line), and the transparent conductive layer TL shown in FIG. 7 may be a transparent electrode of the sub-pixel SPX, such as a pixel electrode and/or a common electrode. For example, one sub-pixel SPX may include a plurality of transparent electrodes TEa, TEb, and TEc, and when the transparent electrodes TEa, TEb, and TEc are all used as pixel electrodes, the transparent electrodes TEa, TEb, and TEc may be electrically connected with each other or directly connected with each other. For example, they may be connected with each other at their upper end or lower end by a part of the transparent conductive layer TL, and there are slits ST between the transparent electrodes TEa, TEb, and TEc. If one of the transparent electrodes TEa, TEb, and TEc is used as a common electrode, then the transparent electrode TEa, TEb, or TEc used as the common electrode will not be connected with the transparent electrode TEa, TEb, or TEc used as the pixel electrode.

According to this embodiment, each of the transparent electrodes TEa, TEb, and TEc has a shape of “<”, and the transparent electrodes TEa, TEb, and TEc are adjacent and side by side along the direction D1, but not limited thereto. In other embodiments, the transparent electrodes TEa, TEb, and TEc may have different shapes, such as a shape of “/”, a shape of “J”, or a shape of “S”. The direction D1 may be any one of the principal axes of a Cartesian coordinate system (also known as an orthogonal coordinate system or an xyz coordinate system, in which the three principal axes are orthogonal to each other), such as the x axis, the y axis, or the z axis. The transparent electrodes TEa, TEb, and TEc in the sub-pixel SPX1 corresponding to the first region R1 have a pixel included angle β1 with respect to the direction D1, and the transparent electrodes TEa, TEb, and TEc in the sub-pixel SPX2 corresponding to the second region R2 have a pixel included angle β2 with respect to the direction D1. In the step S106 of FIG. 4, the structural design of the display device 100 is adjusted in order to reduce the normal brightness difference between the first region R1 and the second region R2. In this embodiment, the pixel included angle β2 is designed to be different from the pixel included angle β1; that is, the slits ST between the transparent electrodes TEa, TEb, and TEc have different inclined angles in the first region R1 and in the second region R2, so as to achieve the objective of making the normal brightness more uniform. In some embodiments, the angle difference between the pixel included angle β2 and the pixel included angle β1 may be greater than 3 degrees (>3°) and less than 30 degrees (<30°), but not limited thereto. The angle difference between the pixel included angle β2 and the pixel included angle β1 may also be greater than 3 degrees (>3°) and less than 10 degrees (<10°). For example, when performing brightness measurement on the prototype display device 100t that has been bent in the step S104 of FIG. 4, if the normal brightness of the first region Rt1 is less than the normal brightness of the second region Rt2, the pixel included angle β1 of the sub-pixel SPX1 in the first region R1 of the display device 100 may be designed to be greater than the pixel included angle β2 of the sub-pixel SPX2 in the second region R2 of the display device 100, that is, making the angle between the slits ST and the direction D1 closer to 90 degrees. When the pixel included angle β1 is greater, the transmittance of the light through the liquid crystal layer 1043 is higher, so the brightness of the sub-pixel SPX1 may be enhanced, such that the brightness of the sub-pixel SPX1 in the first region R1 is closer to the brightness of the sub-pixel SPX2 in the second region R2. If the measured result in the step S104 is opposite to the above-mentioned example, the pixel included angle β1 of the sub-pixel SPX1 and the pixel included angle β2 of the sub-pixel SPX2 may have the opposite design, and will not be described herein. In this embodiment, by adjusting the pixel included angle β1 and the pixel included angle β2, the above-mentioned first ratio r1 can be less than the above-mentioned second ratio r2, or the ratio of the first ratio r1 to the second ratio r2 (r1/r2) can be greater than or equal to 0.1 and less than 1.0 (0.1≤r1/r2<1).
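The angle selection rule just described (a larger included angle raises transmittance, so the dimmer region gets the larger angle) can be sketched as below. The base angle and step are hypothetical values chosen only so the angle difference falls inside the 3 to 30 degree window from the text.

```python
def choose_pixel_included_angles(normal_b_r1, normal_b_r2,
                                 base_angle_deg=80.0, step_deg=5.0):
    """Pick pixel included angles b1 (region R1) and b2 (region R2).
    A larger included angle raises transmittance, so the dimmer region
    receives the larger angle; base/step values are hypothetical."""
    if normal_b_r1 < normal_b_r2:
        b1, b2 = base_angle_deg + step_deg, base_angle_deg
    else:
        b1, b2 = base_angle_deg, base_angle_deg + step_deg
    assert 3.0 < abs(b1 - b2) < 30.0  # window described above
    return b1, b2

print(choose_pixel_included_angles(480.0, 500.0))  # -> (85.0, 80.0)
```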
In the following embodiments or other embodiments of the present disclosure, the relative relationship between the first ratio r1 and the second ratio r2, or the relative relationship between the chromaticity differences of the first region R1 and the second region R2, can be obtained by adjusting, changing, or designing the structure of the display device 100, and will not be described herein. Furthermore, the various embodiments of the present disclosure can be combined and varied with one another, and each structural parameter can be adjusted simultaneously or respectively to achieve the above-mentioned relationship.

Please refer to FIG. 8. FIG. 8 is a sectional-view schematic diagram of another embodiment of a display device according to the present disclosure. FIG. 8 only illustrates a metal conductive layer ML, a first transparent conductive layer TL1, and a second transparent conductive layer TL2, and other layers and elements on the surfaces of the first substrate 1041 and the second substrate 1042 are omitted. In this embodiment, the first transparent conductive layer TL1 is disposed on the inner surface of the first substrate 1041 and includes one or more first transparent electrodes TE1 corresponding to the first region R1 and one or more first transparent electrodes TE2 corresponding to the second region R2. The second transparent conductive layer TL2 is disposed on the inner surface of the second substrate 1042 and includes one or more second transparent electrodes TE1′ corresponding to the first region R1 and one or more second transparent electrodes TE2′ corresponding to the second region R2. Furthermore, the second transparent electrode TE1′ may correspond to the first transparent electrode TE1, and the second transparent electrode TE2′ may correspond to the first transparent electrode TE2. Each of the first transparent electrodes TE1 and TE2 and each of the second transparent electrodes TE1′ and TE2′ may be independently used as a pixel electrode or a common electrode in each of the sub-pixels SPX. By adjusting the voltages of the second transparent electrodes TE1′ and TE2′ in different regions, the inclined angles (the included angles with respect to the normal direction) of the liquid crystal molecules in the liquid crystal layer 1043 may be changed; thus the light-phase modulation performed by the liquid crystal molecules may be changed, and the light output efficiency of light passing through a functional film (such as a polarizing film) may be affected. For example, if the liquid crystal is a positive-polarity liquid crystal, when the voltage is higher, the inclined angles of the liquid crystal molecules are smaller, and the brightness of the exiting light may be smaller. Thus, the brightness in different regions may be adjusted to achieve the objective of making the brightness in the normal view angle (also referred to as the normal brightness) more uniform. For example, when performing brightness measurement on the display panel of the prototype display device 100t that has been bent in the step S104, if the normal brightness of the first region Rt1 is less than the normal brightness of the second region Rt2, then the voltage provided to the second transparent electrode TE2′ in the second region R2 can be designed to be slightly greater than the voltage provided to the second transparent electrode TE1′ in the first region R1, to decrease the inclined angles of the liquid crystal molecules near the second region R2 and reduce the brightness of the second region R2. If the measured result in the step S104 is opposite to the above-mentioned example, then the relationship of the voltages provided to the second transparent electrode TE1′ and the second transparent electrode TE2′ may be opposite to the above-mentioned example, and will not be described herein. The various embodiments of the present disclosure can be combined and varied with one another, and each structural parameter can be adjusted simultaneously or respectively to achieve the above-mentioned relationship.
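A small sketch of the voltage-assignment rule above, assuming the positive-polarity liquid crystal behavior described in the text (a slightly higher voltage lowers the exiting brightness). The base voltage and the offset are hypothetical values for illustration only.

```python
def choose_common_electrode_voltages(normal_b_r1, normal_b_r2,
                                     v_base=3.30, v_delta=0.05):
    """Assign drive voltages to the second transparent electrodes
    TE1'/TE2'. Under the positive-polarity assumption above, the
    brighter region receives the slightly higher voltage so that its
    exiting brightness is reduced. v_base/v_delta are hypothetical."""
    if normal_b_r1 < normal_b_r2:
        return v_base, v_base + v_delta  # dim region R2 slightly
    return v_base + v_delta, v_base      # dim region R1 slightly

v_te1, v_te2 = choose_common_electrode_voltages(480.0, 500.0)
print(f"TE1'={v_te1:.2f} V, TE2'={v_te2:.2f} V")
```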
Please refer to FIG. 9. FIG. 9 is a top-view schematic diagram of another embodiment of a display device and only illustrates a light shielding layer BM and a second substrate 1042. The light shielding layer BM has different pixel aperture ratios and/or different pattern widths or pattern sizes in the first region R1 and the second region R2 to adjust the individual light output amounts of the sub-pixels in different regions and improve the uniformity of the normal brightness. For example, the aperture ratio OP1 of the first region R1 is made greater than the aperture ratio OP2 of the second region R2, so that the light output amount of the sub-pixels of the first region R1 is larger than the light output amount of the sub-pixels of the second region R2. In the design mentioned above, the width W1 of the patterns of the light shielding layer BM in the bending direction (i.e., the direction D1) in the first region R1 may be smaller than the width W2 of the patterns of the light shielding layer BM in the bending direction in the second region R2, so as to enhance the aperture ratio OP1 of the sub-pixels of the first region R1 or reduce the aperture ratio OP2 of the sub-pixels of the second region R2. In different embodiments, the aperture ratio OP1 and the aperture ratio OP2 may be designed with a relative relationship opposite to the above-mentioned example, and will not be described herein. The various embodiments of the present disclosure can be combined and varied with one another, and each structural parameter can be adjusted simultaneously or respectively to achieve the above-mentioned relationship.

Please refer to FIG. 10. FIG. 10 is a partial sectional-view schematic diagram of another embodiment of a display device and only illustrates a backlight module 102, a first substrate 1041, a second substrate 1042, a liquid crystal layer 1043, and spacers PS1 and PS2 disposed between the first substrate 1041 and the second substrate 1042. The width of each spacer is the maximum width in the partial section mentioned above, such as the width of its upper surface. In FIG. 10, a width W3 of the spacer PS1 in the first region R1 is greater than a width W4 of the spacer PS2 in the second region R2. Therefore, the spacer PS1 may provide a stronger supporting force to the liquid crystal layer 1043, and the extent to which the partial liquid crystal layer 1043 is compressed due to bending may be reduced, thereby reducing the attenuation of the brightness. Therefore, to increase the light output amount or brightness of the first region R1, the width W3 of the spacer PS1 in the first region R1 may be made larger, as shown in FIG. 10. In contrast, to reduce the light output amount or brightness of the first region R1, the width W3 of the spacer PS1 in the first region R1 may be made smaller. The various embodiments of the present disclosure can be combined and varied with one another, and each structural parameter can be adjusted simultaneously or respectively to achieve the above-mentioned relationship.
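To make the FIG. 9 aperture-ratio idea concrete, the sketch below uses a one-dimensional simplification in which the aperture ratio along the bending direction is the open fraction of one sub-pixel pitch not covered by the shielding pattern. The pitch and pattern widths are hypothetical numbers.

```python
def aperture_ratio(pixel_pitch_um, shield_width_um):
    """Aperture ratio along the bending direction: the open fraction
    of one sub-pixel pitch not covered by the light shielding layer BM
    (a 1-D simplification of the FIG. 9 idea)."""
    return (pixel_pitch_um - shield_width_um) / pixel_pitch_um

# Hypothetical pattern widths with W1 < W2, so that OP1 > OP2 and
# region R1 emits more light than region R2.
pitch = 60.0
op1 = aperture_ratio(pitch, shield_width_um=12.0)  # region R1 (W1)
op2 = aperture_ratio(pitch, shield_width_um=18.0)  # region R2 (W2)
assert op1 > op2
print(f"OP1={op1:.2f}, OP2={op2:.2f}")
```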
Please refer to FIG. 11. FIG. 11 is a partial sectional-view schematic diagram of another embodiment of a display device and only illustrates a backlight module 102, a first substrate 1041, a second substrate 1042, a liquid crystal layer 1043, and spacers PS1 and PS2 disposed between the first substrate 1041 and the second substrate 1042. As shown in FIG. 11, a width W3 of the spacer PS1 in the first region R1 may be approximately equal to a width W4 of the spacer PS2 in the second region R2, but a distribution density of the spacers PS1 in the first region R1 may be different from a distribution density of the spacers PS2 in the second region R2; that is, a pitch SP1 between the spacers PS1 may be different from a pitch SP2 between the spacers PS2. The distribution density refers to the number of objects per unit area. For example, if 400 spacers are distributed in an area of 1 square millimeter (mm2), the distribution density of the spacers in this portion is 400 (unit/mm2). In FIG. 11, the pitch SP1 is less than the pitch SP2, for example. In such a condition, when the distribution density of the spacers PS1 is greater, a better supporting force may be provided, and the extent to which the partial liquid crystal layer 1043 is compressed may be reduced, thereby reducing the attenuation of the brightness of the first region R1. In contrast, when the distribution density of the spacers PS2 is smaller, the extent to which the partial liquid crystal layer 1043 is compressed may be greater, such that the brightness of the second region R2 is lower. In different embodiments, the distribution densities of the spacers PS1 and PS2 may have the opposite design. The various embodiments of the present disclosure can be combined and varied with one another, and each structural parameter can be adjusted simultaneously or respectively to achieve the above-mentioned relationship.
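The pitch-to-density relationship in the example above is easy to verify numerically. Assuming a square spacer grid (a simplification not stated in the text), a 0.05 mm pitch reproduces the 400 unit/mm2 figure:

```python
def density_from_pitch(pitch_mm):
    """Spacers per mm^2 for an assumed square grid with the given
    pitch; e.g. a 0.05 mm pitch gives the 400 unit/mm^2 example."""
    return 1.0 / pitch_mm ** 2

sp1, sp2 = 0.05, 0.0625  # hypothetical pitches with SP1 < SP2
d1, d2 = density_from_pitch(sp1), density_from_pitch(sp2)
assert d1 > d2  # tighter pitch -> higher density -> stronger support
print(round(d1), round(d2))  # -> 400 256
```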
Please refer to FIG. 12. FIG. 12 is a partial enlargement schematic diagram of another embodiment of a display device and illustrates the arrangement of the color filter layer of a pixel PX1 in the first region R1 and a pixel PX2 in the second region R2. The pixels generally include a red sub-pixel, a green sub-pixel, and a blue sub-pixel, but not limited thereto. In FIG. 12, the red filter CFR2, the green filter CFG2, and the blue filter CFB2 in the single pixel PX2 of the second region R2 all have the same area, while the areas of the red filter CFR1, the green filter CFG1, and the blue filter CFB1 in the single pixel PX1 of the first region R1 are different. For example, the area of the blue filter CFB1 is less than the area of the red filter CFR1, and the area of the blue filter CFB1 is less than the area of the green filter CFG1. Furthermore, the area of the red filter CFR1 is greater than the area of the red filter CFR2, and the area of the green filter CFG1 is greater than the area of the green filter CFG2, but not limited thereto. Since the wavelength of blue light is shorter, blue light scatters more easily than red light and green light, so the relatively smaller area of the blue filter CFB1 in the pixel PX1 may reduce the proportion of blue light. Furthermore, since the brightness of the pixel PX1 is less than the brightness of the pixel PX2, the brightness of the red light and the green light may be increased, so as to reduce the absolute value of the brightness difference in the normal view angle and improve the perception by human eyes of the normal brightness of the pixel PX1. Therefore, the perception by human eyes of the normal brightness of the pixel PX1 and the pixel PX2 may be made different by the different structural designs of the pixel PX1 and the pixel PX2 in FIG. 12, thereby adjusting the uniformity of the normal brightness in the first region R1 and the second region R2 according to the curvatures of each region. In different embodiments, the arrangement of the areas of the color filters of the pixel PX1 and the pixel PX2 may be exchanged, and the arrangement of the color filters of each color is not limited to that shown in FIG. 12. The various embodiments of the present disclosure can be combined and varied with one another, and each structural parameter can be adjusted simultaneously or respectively to achieve the above-mentioned relationship.

Please refer to FIG. 13. FIG. 13 is a sectional-view enlargement schematic diagram of a partial color filter layer of another embodiment of a display device. The color filter layer 1045 of the display device 100 may have different thicknesses in the first region R1 and the second region R2. In FIG. 13, the red filter CFR2, the green filter CFG2, and the blue filter CFB2 in the pixel PX2 of the second region R2 may have the same thickness, and the thicknesses of the red filter CFR1, the green filter CFG1, and the blue filter CFB1 in the pixel PX1 of the first region R1 may be different. For example, the thickness of the blue filter CFB1 is less than the thickness of the red filter CFR1, and the thickness of the blue filter CFB1 is less than the thickness of the green filter CFG1. Furthermore, the thickness of the green filter CFG1 in the pixel PX1 is greater than the thickness of the green filter CFG2 in the pixel PX2, and the thickness of the red filter CFR1 in the pixel PX1 is greater than the thickness of the red filter CFR2 in the pixel PX2, but not limited thereto. In such a design, the thickness of the blue filter CFB1 in the pixel PX1 is relatively less than the thicknesses of the red filter CFR1 and the green filter CFG1 in the pixel PX1, so the proportion of the more easily scattered blue light may be reduced. Furthermore, since the brightness of the pixel PX1 is less than the brightness of the pixel PX2, the brightness of the red light and the green light may be increased, so as to reduce the absolute value of the brightness difference in the normal view angle and improve the perception by human eyes of the normal brightness of the pixel PX1. In different embodiments, the thickness design of the color filter layer 1045 in the pixel PX1 and the pixel PX2 may be exchanged, or there may be other thickness variants according to the requirements. The various embodiments of the present disclosure can be combined and varied with one another, and each structural parameter can be adjusted simultaneously or respectively to achieve the above-mentioned relationship.
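The FIG. 12 area allocation can be sketched as a simple budget split per pixel. The blue share and pixel area below are hypothetical; the code only demonstrates the ordering constraints stated above (blue smaller than red and green in PX1, and red/green of PX1 larger than those of PX2).

```python
def filter_areas_px1(pixel_area, blue_share=0.28):
    """Split one pixel's filter area for region R1 so the blue filter
    is smaller than the red and green ones, reducing the share of the
    more strongly scattered blue light. Share values are hypothetical."""
    remaining = (1.0 - blue_share) / 2.0
    return {"R": pixel_area * remaining,
            "G": pixel_area * remaining,
            "B": pixel_area * blue_share}

areas_px1 = filter_areas_px1(900.0)              # region R1: unequal areas
areas_px2 = {c: 300.0 for c in ("R", "G", "B")}  # region R2: equal areas
assert areas_px1["B"] < areas_px1["R"] == areas_px1["G"]
assert areas_px1["R"] > areas_px2["R"] and areas_px1["G"] > areas_px2["G"]
print(areas_px1)  # {'R': 324.0, 'G': 324.0, 'B': 252.0}
```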
Please refer to FIG. 14. FIG. 14 is a top-view enlargement schematic diagram of a backlight module of another embodiment of a display device and only illustrates light emitting elements 1021 in a backlight module 102. As shown in FIG. 14, the backlight module 102 may include a plurality of light emitting elements 1021 arranged side by side in a light emitting unit array along the direction D1 and the direction D2. However, this embodiment may be designed such that not all of the light emitting elements 1021 are lit up when the display device 100 is in an operating state. As shown in FIG. 14, the light emitting elements 1021 that are lit up when the display device 100 is operated are defined as light emitting units LU, which actually provide the backlight sources. In addition, some light emitting elements 1021 may not be lit up when the display device 100 is operated, and these can be used as dummy light emitting units LUD. Thus, the brightness of partial regions of the backlight module 102 may be adjusted, thereby adjusting the brightness uniformity of the display device 100. For example, in FIG. 14, the distribution density of the light emitting units LU corresponding to the first region R1 may be greater than the distribution density of the light emitting units LU corresponding to the second region R2, and the number of the dummy light emitting units LUD corresponding to the second region R2 may be greater than the number of the dummy light emitting units LUD corresponding to the first region R1. Therefore, the backlight module 102 may provide higher brightness to the first region R1 of the display panel 104. If the normal brightness of the first region Rt1 obtained during the measurement in the step S104 of FIG. 4 is smaller, the design of the backlight module 102 of FIG. 14 may be adopted. If the normal brightness of the second region Rt2 obtained during the measurement in the step S104 of FIG. 4 is smaller, the backlight module 102 may be designed to have a larger number of light emitting units LU corresponding to the second region R2 and a larger number of dummy light emitting units LUD corresponding to the first region R1, and will not be described herein. The various embodiments of the present disclosure can be combined and varied with one another, and each structural parameter can be adjusted simultaneously or respectively to achieve the above-mentioned relationship.

Please refer to FIG. 15. FIG. 15 is a top-view enlargement schematic diagram of a backlight module of another embodiment of a display device and only illustrates light emitting elements 1021 in a backlight module 102. The backlight module 102 in this embodiment may include a plurality of light emitting elements 1021 arranged in an array, but the light emitting elements 1021 may provide different light emitting brightness depending on their arrangement positions; that is, the backlight module 102 has a design with stronger brightness at particular positions. For example, in the first region R1, some of the light emitting elements 1021 may be referred to as first light emitting units LU1, the other light emitting elements 1021 may be referred to as second light emitting units LU2, and the first light emitting units LU1 may provide stronger brightness than the second light emitting units LU2. For example, the brightness of the first light emitting units LU1 may be made stronger by providing a larger voltage or current to the first light emitting units LU1, but not limited thereto. If the normal brightness of the first region Rt1 obtained during the measurement in the step S104 of FIG. 4 is smaller, the design of the backlight module 102 of FIG. 15 may be adopted. If the normal brightness of the second region Rt2 obtained during the measurement in the step S104 of FIG. 4 is smaller, the backlight module 102 may be designed to have a larger number of first light emitting units LU1 corresponding to the second region R2, and will not be described herein. In another embodiment, the light emitting elements 1021 may provide more than three kinds of brightness according to requirements, to provide different brightness for different regions of the display panel 104. The various embodiments of the present disclosure can be combined and varied with one another, and each structural parameter can be adjusted simultaneously or respectively to achieve the above-mentioned relationship.
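The FIG. 14 idea of thinning the lit units in the region that should receive less backlight can be illustrated with a small layout planner. The grid size, the thinning pattern, and the column split between regions are all hypothetical choices for demonstration.

```python
def plan_backlight(rows, cols, dim_region_cols, keep_every=2):
    """Mark each light emitting element as a lit unit ('LU') or a dummy
    unit ('LUD'). Columns in `dim_region_cols` (the region that should
    receive less backlight) keep only part of their elements lit,
    lowering the local distribution density of lit units.
    Hypothetical layout for illustration only."""
    grid = []
    for r in range(rows):
        row = []
        for c in range(cols):
            if c in dim_region_cols and (r + c) % keep_every:
                row.append("LUD")  # not lit during operation
            else:
                row.append("LU")   # provides backlight
        grid.append(row)
    return grid

# Region R1 -> columns 0..3 fully lit; region R2 -> columns 4..7 thinned.
for row in plan_backlight(4, 8, dim_region_cols=set(range(4, 8))):
    print(" ".join(row))
```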
Please refer to FIG. 16. FIG. 16 is a partial sectional-view schematic diagram of another embodiment of a display device. Besides the backlight module 102 and the display panel 104, the display device 100 in FIG. 16 may also include an optical compensation film 106 disposed on the light output side of the display panel 104, which is the outer surface of the second substrate 1042. The optical compensation film 106 may include a plurality of grooves 1061 on its surface. The partial light output intensity of the display panel 104 may be adjusted by the distribution density, the patterns, or the shapes of the grooves 1061, so as to achieve the effect of making the normal brightness more uniform. The cross-sectional shapes of the grooves 1061 in FIG. 16 are trapezoids as an example, but not limited thereto. For example, the cross-sectional shapes of the grooves 1061 may be rectangles, triangles, or any other suitable shapes. In addition, the grooves 1061 may have different designs in the first region R1 and the second region R2 according to the requirements, to partially adjust the light output amount and the light brightness in the normal angle in each region.

Please refer to FIG. 17. FIG. 17 is a sectional-view schematic diagram of other examples of an optical compensation film. As shown in an example (I) in FIG. 17, the grooves 1061 may respectively have different pitches SP3 and SP4 in the first region R1 and the second region R2. As shown in an example (II) in FIG. 17, the grooves 1061 may have rectangular cross-sectional shapes. As shown in an example (III) in FIG. 17, the grooves 1061 may have triangular cross-sectional shapes. As shown in an example (IV) in FIG. 17, the grooves 1061 may respectively have different cross-sectional shapes in the first region R1 and the second region R2. For example, the grooves 1061 in the first region R1 have trapezoidal cross-sectional shapes, and the grooves 1061 in the second region R2 have triangular cross-sectional shapes, but not limited thereto. The various shapes of the grooves 1061 may be combined, adjusted, and varied according to requirements, and the grooves 1061 may be filled with air or any filling material; as long as the refractive index of the grooves 1061 is different from the refractive index of the optical compensation film 106, the effect of adjusting the light output angle can be achieved. The various embodiments of the present disclosure can be combined and varied with one another, and each structural parameter can be adjusted simultaneously or respectively to achieve the above-mentioned relationship.
Please refer to FIG. 18 and FIG. 19. FIG. 18 is a flowchart of another embodiment of a manufacturing method of a display device. FIG. 19 is a manufacturing process schematic diagram of the method shown in FIG. 18. The manufacturing method of the display device includes the following steps:

Step S200: Providing a display device including a display panel. The structure of the display device may be similar to the liquid crystal device shown in FIG. 1, but the display device may also be another type of display device. In an initial state, the display panel 104u of the display device has a display surface of a plate shape, and each region on the display panel 104u has an exiting light type that is substantially uniform, as shown in a flow (F1) of FIG. 19.

Step S202: Bending the display panel 104u to make a first region R1 of the display panel 104u have a first curvature and a second region R2 of the display panel 104u have a second curvature. The second curvature is different from the first curvature. When the display panel 104u is in a bent state, since the curvatures of different regions of the display panel 104u are different, the incident light from the backlight module can have different light paths in the liquid crystal layer, or the emitting directions of the light emitting sources of the display panel itself may be different. Thus, the display panel can have different exiting light types in different regions, and the light can have different brightness or chromaticity at different angles, as shown in a flow (F2) of FIG. 19.

Step S204: Performing brightness measurement at each angle on the display panel that has been bent, to obtain an absolute value of a normal brightness difference between the first region R1 and the second region R2.

Step S206: Performing a brightness adjustment procedure to reduce the absolute value of the normal brightness difference between the first region R1 and the second region R2 of the display panel 104u, so as to accomplish a display device with more uniform normal brightness, such as the display device 100 shown in a flow (F3) of FIG. 19.

In the brightness adjustment procedure of the step S206, the structural parameters for adjusting the normal brightness uniformity include, for example, the adjustment of the partial brightness of the backlight source (such as the methods of FIG. 15 to FIG. 17) and the design of the optical compensation film, but not limited thereto. When measuring the brightness (or chromaticity) of the bent display device in the step S204, the measurement is mainly performed on the brightness (or chromaticity) of the exiting light in the direction of the normal view angle in different regions, and measurement may be selectively performed on the brightness (or chromaticity) of the exiting light in the direction of the oblique view angle, to ascertain the brightness difference of different regions in the direction of the normal view angle and the brightness difference of different regions in the direction of the oblique view angle. In the brightness adjustment procedure, the principle is mainly to reduce the normal brightness difference between different regions, and the oblique brightness difference may also be selectively reduced. If both cannot be achieved, the objective is to reduce the normal brightness difference between different regions first. Under this objective, after the brightness is adjusted to reduce the normal brightness difference, the brightness difference ratio in the oblique view angle may be increased, reduced, or left substantially unchanged. In addition, according to the present disclosure, in the step S200, the display device may also, according to the later predetermined bent shape, be designed differently for different regions by using one or more other structural parameters in advance, so as to perform a pre-compensation design inside the display device. For example, different regions may be designed with different liquid crystal layer thicknesses, angles of electrodes of sub-pixels, pattern designs and aperture ratios of the light shielding layer, distribution densities and sizes of the spacers, aperture ratios of each color sub-pixel, and thicknesses of color filters of different colors.
Thus, the traveling path lengths of the light in the liquid crystal layer are more uniform after bending, but not limited thereto. From the above description, the display device manufactured by the manufacturing method of the display device according to the present disclosure may have a display surface with a curved surface. Across the regions with different curvatures, the brightness difference ratio in the direction of the normal view angle is smaller, for example, smaller than the oblique brightness difference ratio in the direction of the oblique view angle. Thus, users in the direction of the normal view angle may obtain images with more uniform brightness. When the display device of the present disclosure is applied to conditions with a small number of users, or where the user is directly in front of the display device, for example, in a vehicle display device, the position of the main user (such as the driver for a vehicle display device) is fixed, so an optimized design for the specific view angle may be made for the users, for example, enabling the user directly in front of the display device to observe better images. Those skilled in the art will readily observe that numerous modifications and alterations of the device and method may be made while retaining the teachings of the disclosure. Accordingly, the above disclosure should be construed as limited only by the metes and bounds of the appended claims. | 63,004 |
11862115 | DESCRIPTION OF EXEMPLARY EMBODIMENTS

Hereinafter, a preferred embodiment according to the present disclosure will be described in detail. The present embodiment to be described below does not unduly limit the contents described in the claims, and not all configurations described in the present embodiment are necessarily essential constituent elements.

1. Display Apparatus

FIG. 1 shows a configuration example of a display apparatus 40 including a circuit device 100 according to the present embodiment. The display apparatus 40 includes the circuit device 100 and a display unit 300. A processing device 200 transmits image data to the circuit device 100 of the display apparatus 40. The processing device 200 is a so-called SoC, and is, for example, a processor such as a CPU or a microcomputer. The SoC is an abbreviation for a system on chip. The CPU is an abbreviation for a central processing unit. The circuit device 100 acquires failure information on the light sources of a backlight, and adjusts the backlight and performs color correction on the image data based on the failure information. The circuit device 100 transmits the color-corrected image data to the display unit 300. The circuit device 100 is, for example, an integrated circuit device in which a plurality of circuit elements are integrated on a semiconductor substrate. The display unit 300 includes a display panel and the backlight, and displays the color-corrected image data from the circuit device 100 on the display panel. When the backlight emits light to the display panel and the light transmitted through the display panel is incident on the eyes of a user, an image displayed on the display panel is visually recognized by the user. The display apparatus 40 may be any apparatus as long as it presents an image to the user based on the image data. As an example, the display apparatus 40 is an in-vehicle cluster panel, a television apparatus, a monitor of an information processing terminal, a projector, a head-up display apparatus, or the like. An example of the head-up display apparatus will be described later.

2. Detailed Configuration Example of Display Unit and First Detailed Configuration Example of Circuit Device

FIG. 2 shows a detailed configuration example of the display unit 300. The display unit 300 includes a processing device 310, a light source driver 320, a backlight 330, and a display panel 340. The processing device 310 performs conversion between a communication format used by a light source interface circuit 192 of the circuit device 100 and a communication format used by the light source driver 320. The processing device 310 is, for example, a processor such as a CPU or a microcomputer. The processing device 310 may be omitted, and the light source interface circuit 192 and the light source driver 320 may directly communicate with each other. The backlight 330 includes a plurality of light sources two-dimensionally disposed in a plan view. Each light source is, for example, a light emitting element such as an LED. The LED is an abbreviation for a light emitting diode. The backlight 330 overlaps the display panel 340 such that the side on which the plurality of light sources are disposed faces the display panel 340 in a plan view. Accordingly, emitted light from the plurality of two-dimensionally disposed light sources is incident on the display panel 340. The two-dimensional arrangement of the light sources is, for example, a matrix arrangement, but is not limited thereto, and may be, for example, a staggered arrangement.
The staggered arrangement is, for example, an arrangement in which the light sources are disposed in odd-numbered columns in odd-numbered rows and in even-numbered columns in even-numbered rows. The light source driver 320 drives the light sources of the backlight 330 based on light source control data from the light source interface circuit 192. Further, the light source driver 320 detects a failure of each light source of the backlight 330, and transmits failure information thereon to the light source interface circuit 192. The light source driver 320 includes a first driver DR1 to an n-th driver DRn, where n is an integer of 1 or more. Each driver is configured with, for example, an integrated circuit device. Specifically, the first driver DR1 drives some light sources among the plurality of light sources of the backlight 330. The first driver DR1 independently turns on or off the light sources in its charge. Further, the first driver DR1 causes the light sources in its charge to emit light with a light amount set by the circuit device 100. The light amount can be independently set for each light source. The same applies to the second driver DR2 to the n-th driver DRn. The first driver DR1 detects a failure of each light source in its charge. A failure of a light source is a state in which the driver cannot control the turning-on, turning-off, or light amount of the light source. A failure of a light source is, for example, an open circuit or a short circuit of a light emitting element. The open circuit of the light emitting element is a state in which the light emitting element is turned off, or cannot be controlled and remains at a low light amount, due to disconnection. The first driver DR1 detects the open circuit of the light emitting element by, for example, comparing an anode voltage of the light emitting element with an open circuit detection threshold voltage. The short circuit of the light emitting element is a state in which the light emitting element is turned on, or cannot be controlled and remains at a high light amount, due to a short circuit of a power supply or the like. The first driver DR1 detects the short circuit by, for example, comparing the anode voltage of the light emitting element with a short circuit detection threshold voltage. Alternatively, a failure of the light source may be a light amount abnormality of the light emitting element. The light amount abnormality is a state where the light amount is lower or higher than the light amount in a normal state. The first driver DR1 detects the light amount abnormality of the light emitting element by detecting a current that flows through the light emitting element, by an optical sensor, or the like.
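For illustration only, the comparator-based detection described above can be sketched as a classification of anode voltage readings. Whether a given fault drives the anode voltage above or below a threshold depends on the driver topology, so the comparison directions and the threshold values below are assumptions, not a description of any specific driver.

```python
def classify_led(anode_v, v_open_th=0.3, v_short_th=4.5):
    """Classify one LED channel from its anode voltage, mirroring the
    threshold comparisons described above. Thresholds and comparison
    directions are hypothetical; real drivers define their own limits."""
    if anode_v < v_open_th:
        return "open"    # assumed: disconnection collapses the voltage
    if anode_v > v_short_th:
        return "short"   # assumed: supply short pins the voltage high
    return "normal"

# Hypothetical anode readings for one driver's channels.
readings = [2.9, 0.1, 3.1, 4.8]
print([classify_led(v) for v in readings])
# -> ['normal', 'open', 'normal', 'short']
```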
The first driver DR1 to the n-th driver DRn are connected for cascade communication. That is, the first driver DR1 receives input data SDI, such as the light source control data, from the processing device 310 and transmits the input data SDI to the second driver DR2, which is repeated until the n-th driver DRn, so that the input data SDI is transmitted to the first driver DR1 to the n-th driver DRn. Further, the first driver DR1 transmits output data, such as the failure information on the light sources, to the second driver DR2. The second driver DR2 adds its own transmission data to the transmission data from the first driver DR1 and transmits the result to the third driver DR3, which is repeated until the n-th driver DRn. The n-th driver DRn transmits output data SDO, including the output data of the first driver DR1 to the n-th driver DRn, to the processing device 310. The communication connection method between the processing device 310 and the first driver DR1 to the n-th driver DRn is not limited to the above, and connection methods of various communication methods may be adopted.

The display panel 340 is, for example, a liquid crystal display panel. The liquid crystal display panel may be of a transmissive type or a reflective type. The display unit 300 includes a display controller and a display driver, which are not shown. The display controller outputs, to the display driver, output image data IMB from an output circuit 130 and a timing control signal for controlling a display timing. The display driver drives the display panel 340 based on the output image data IMB and the timing control signal, and causes the display panel 340 to display an image based on the output image data IMB. The function of the display controller may be incorporated in the circuit device 100.

FIG. 3 shows a first detailed configuration example of the circuit device 100. The circuit device 100 includes an input circuit 105, a color correction circuit 115, a luminance analysis circuit 125, an output circuit 130, a dimming circuit 135, a light amount abnormality detection circuit 145, a light source control circuit 180, and the light source interface circuit 192. The input circuit 105 receives input image data IMA from the processing device 200. The input circuit 105 may be a reception circuit for various communication interfaces, and is, for example, a reception circuit for an LVDS, a DVI, a display port, a GMSL, a GVIF, or the like. The LVDS is an abbreviation for low voltage differential signaling, the DVI is an abbreviation for digital visual interface, the GMSL is an abbreviation for gigabit multimedia serial link, and the GVIF is an abbreviation for gigabit video interface. The light source interface circuit 192 communicates with the light source driver 320 via the processing device 310 of the display unit 300. The light source interface circuit 192 may be one of various communication interfaces used for communication between circuit devices, and is, for example, an SPI or an I2C interface. The SPI is an abbreviation for serial peripheral interface. The I2C is an abbreviation for inter integrated circuit. The light source interface circuit 192 and a host interface circuit 191 are not limited to separately provided interface circuits, and may be one common interface circuit. The light amount abnormality detection circuit 145 acquires failure information LE on each light source of the backlight 330 from the light source driver 320 via the processing device 310 and the light source interface circuit 192. The failure information LE includes information indicating the position of each light source of the backlight 330 and information indicating whether each light source is normal, is in an open circuit state, is in a short circuit state, or has an abnormal light amount. The light amount abnormality detection circuit 145 detects a light amount abnormality based on the failure information LE, and outputs a detection result LDET thereof to the dimming circuit 135.
That is, when receiving the failure information LE including information on an abnormal light source, the light amount abnormality detection circuit 145 outputs information indicating the position of the abnormal light source and information indicating whether the abnormal light source is in the open circuit state, is in the short circuit state, or has the light amount abnormality. The luminance analysis circuit 125 analyzes the luminance of the input image data IMA, and outputs an analysis result thereof as luminance information YA. An example of the luminance information YA is a luminance image indicating the luminance value of each pixel, the luminance value of each area illuminated by each light source of the backlight 330, or the like. The dimming circuit 135 dims the light sources of the backlight 330 based on the luminance information YA. For example, the dimming circuit 135 turns off a light source corresponding to an area of black data in the luminance information YA. Alternatively, the dimming circuit 135 may perform local dimming control for adjusting the light amount of each light source based on the luminance information YA of the area illuminated by each light source. Further, the dimming circuit 135 compensates for an insufficient light amount or an excessive light amount of the area illuminated by an abnormal light source by adjusting the light amounts of the normal light sources around the abnormal light source based on the detection result LDET of the light amount abnormality. The dimming circuit 135 outputs light amount information DIM of the light sources determined by the dimming and the light amount compensation. The light source control circuit 180 transmits light source control data based on the light amount information DIM of the light sources to the light source driver 320 via the light source interface circuit 192 and the processing device 310. The light source control data is data for controlling the turning-on, turning-off, or light amounts of the light sources of the backlight 330. The color correction circuit 115 performs color correction on the input image data IMA based on the light amount information DIM of the light sources. The color correction is to correct the color data of the RGB colors. The color correction based on the light amount information DIM is mainly to correct the luminance value of each pixel of the input image data IMA based on the light amount information DIM. However, when the color balance changes according to the light amount, color correction for canceling the change may be performed. The color correction circuit 115 performs the color correction on the input image data IMA such that the display image of the display unit 300 does not substantially change even when the light amounts of the light sources are adjusted. That is, the color correction is performed such that the appearance when the light emission of the backlight 330 is uniform and the input image data IMA is displayed as it is, is substantially the same as the appearance when the light amounts of the light sources of the backlight 330 are adjusted and the image data is color-corrected. The output circuit 130 transmits the image data from the color correction circuit 115 to the display unit 300 as the output image data IMB. The output circuit 130 may be a transmission circuit for various communication interfaces, and is, for example, a transmission circuit for an LVDS, a DVI, a display port, a GMSL, a GVIF, or the like.
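As a generic illustration of the local dimming control mentioned above (not the circuit device's actual algorithm, which is not specified at this level of detail), the sketch below splits a luminance image into per-light-source areas and drives each source from its area's peak luminance; the area grid, floor value, and image are hypothetical.

```python
import numpy as np

def local_dimming(luma, areas_x=4, areas_y=3, floor=0.05):
    """Minimal local-dimming sketch: split a luminance image (values in
    0..1) into the areas illuminated by each light source and drive each
    source with the area's maximum luminance. All-black areas are
    switched off; other areas never fall below `floor`. Parameters are
    illustrative only."""
    h, w = luma.shape
    duties = np.zeros((areas_y, areas_x))
    for j in range(areas_y):
        for i in range(areas_x):
            area = luma[j * h // areas_y:(j + 1) * h // areas_y,
                        i * w // areas_x:(i + 1) * w // areas_x]
            peak = float(area.max())
            duties[j, i] = 0.0 if peak == 0.0 else max(peak, floor)
    return duties  # light amount per source, 0..1

img = np.zeros((90, 160)); img[10:40, 20:60] = 0.8  # one bright patch
print(local_dimming(img))
```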
The color correction circuit 115, the luminance analysis circuit 125, the dimming circuit 135, the light amount abnormality detection circuit 145, and the light source control circuit 180 are logic circuits. Each of these circuits may be configured as an individual logic circuit, or they may be configured as a logic circuit integrated by automatic arrangement wiring or the like. Further, some or all of these circuits may be implemented by a processor such as a DSP. The DSP is an abbreviation for digital signal processor. In this case, a program or an instruction set in which the function of each circuit is described is stored in a memory, and the function of each circuit is implemented by the processor executing the program or the instruction set. In the above description, although an example in which the circuit device 100 performs dimming control such as local dimming and handles a light amount abnormality has been described, the circuit device 100 may handle the light amount abnormality without performing the dimming control such as the local dimming. In this case, the luminance analysis circuit 125 may be omitted.

3. Detailed Example of Processing Performed by Circuit Device of First Detailed Configuration Example

Hereinafter, a detailed example of processing performed by the circuit device 100 of the first detailed configuration example will be described. A case where a plurality of light sources 332 of the backlight are disposed in a matrix will be described as an example.

FIG. 4 is a diagram illustrating the correspondence between the failure information and a light source position on the display panel. As shown in the left diagram, the column number of the light source matrix is set as i, the row number is set as j, and a light source position on the backlight is indicated by (i, j), where i and j are integers of 1 or more. FIG. 4 shows an example in which the light source at (3, 2) is an abnormal light source. The failure information acquired by the light amount abnormality detection circuit 145 includes the position (3, 2) of the abnormal light source and a flag indicating whether the abnormal light source is in an open circuit state, is in a short circuit state, or has a light amount abnormality. There may be two or more abnormal light sources. As shown in the right diagram of FIG. 4, the backlight overlaps the back surface of the display panel in a plan view of the display panel. The light source 332 illuminates an area 333 on the display panel. The size of the area 333 may be fixed or may be changed according to the light amount of the light source 332. The right diagram of FIG. 4 shows only one area 333, but there are areas corresponding to all the light sources 332. Pixel coordinates of the display panel are indicated by (x, y), where x indicates a coordinate in the horizontal scanning direction and y indicates a coordinate in the perpendicular scanning direction. It is assumed that the horizontal scanning direction is parallel to a row of the light source matrix. At this time, the light source position (i, j) on the backlight corresponds to the pixel coordinates (x, y) on the display panel in a plan view. Based on the correspondence, a position information acquisition circuit 160 converts the position (3, 2) of the abnormal light source into the light source position on the display panel.
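A minimal sketch of that (i, j) to (x, y) correspondence, assuming a uniform grid of non-overlapping areas; the panel resolution and matrix dimensions are hypothetical, and as noted above, real areas may also overlap.

```python
def light_source_area(i, j, panel_w=1920, panel_h=720, cols=16, rows=6):
    """Map a light source position (i, j) -- 1-based column i and row j,
    with rows parallel to the horizontal scanning direction -- to the
    pixel rectangle (x0, y0, x1, y1) of the panel area it illuminates.
    Panel and matrix dimensions are hypothetical; a uniform,
    non-overlapping grid is assumed."""
    x0, x1 = (i - 1) * panel_w // cols, i * panel_w // cols
    y0, y1 = (j - 1) * panel_h // rows, j * panel_h // rows
    return x0, y0, x1, y1

# The abnormal light source (3, 2) from the example above:
print(light_source_area(3, 2))  # -> (240, 120, 360, 240)
```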
FIG. 5 shows a first example of the processing performed by the circuit device 100 of the first detailed configuration example when a light amount abnormality occurs. Here, a processing example in a case where the light emitting element is turned off due to an open circuit of the light emitting element will be described. In the case of an abnormality in which the light emitting element becomes darker than usual, the abnormality can also be handled in a similar way. The upper left part shows an example of the failure information. Each circle indicates a light source, and the number in the circle indicates an open-circuit failure flag: "0" indicates that the light source is normal, and "1" indicates that the light source has the open circuit. The upper middle part shows an example of the light amount compensation. The dimming circuit 135 increases the light amounts of the eight light sources around the abnormal light source. The light sources whose light amounts are adjusted are referred to as adjustment target light sources. When causing the backlight 330 to emit flat light, the dimming circuit 135 increases the light amounts of the adjustment target light sources with reference to the light amount of the backlight 330. Alternatively, when dimming such as the local dimming is performed, the dimming circuit 135 increases the light amounts of the adjustment target light sources with reference to the light amounts determined by the dimming. Not only the eight light sources around the abnormal light source but also light sources further around those eight light sources may be included in the adjustment target light sources. The lower middle part shows the illumination luminance of the display panel at a cross section AA′ of the upper middle part. The cross section AA′ is a cross section along the x coordinate direction of the display panel. BF1 indicates the luminance distribution before the light amount adjustment. In the area corresponding to the abnormal light source turned off due to the open circuit failure, the illumination luminance of the display panel decreases. AF1 indicates the luminance distribution after the light amount adjustment. Since the light amounts of the adjustment target light sources are increased, the illumination luminance of the display panel increases in the area corresponding to the abnormal light source and the areas corresponding to the adjustment target light sources. The upper right part shows an example of the color correction. Each cell indicated by a dotted line indicates an area on the display panel illuminated by a light source. The areas of the light sources are drawn as not overlapping with one another in FIG. 5, but the areas of the light sources may overlap with one another. The color correction circuit 115 performs color correction that increases luminance on the image data of the area corresponding to the abnormal light source turned off due to the open circuit failure. Further, the color correction circuit 115 performs color correction that decreases luminance on the image data of the areas corresponding to the adjustment target light sources. When the illumination luminance is sufficiently compensated in the area corresponding to the abnormal light source, the color correction circuit 115 may leave the luminance of the image data of that area uncorrected. Color correction that increases or decreases the luminance of the image data when dimming such as the local dimming is performed means an increase or a decrease in luminance with reference to the image data after color correction according to the dimming.
Color correction that increases or decreases the luminance of the image data when dimming such as local dimming is performed means an increase or a decrease in luminance with reference to the image data after color correction according to the dimming. Since the dimming circuit 135 outputs the light amount information DIM in which the dimming and the light amount compensation are combined, the color correction circuit 115 can perform color correction according to both the dimming and the light amount compensation based on the light amount information DIM. Of the combined dimming and light amount compensation, FIG. 5 shows only the part of the color correction corresponding to the light amount compensation.

The lower right part shows the luminance of the image data after the luminance adjustment at a cross section BB′ of the upper right part. The cross section BB′ runs along the x coordinate direction of the display panel. As shown in the lower middle and lower right parts, the luminance of the image data is increased in the area in which the illumination luminance is decreased, and the luminance of the image data is decreased in the areas in which the illumination luminance is increased. Since what the user visually recognizes is the combination of the image displayed on the display panel 340 based on the image data and the illumination performed by the backlight 330, the result of the color correction and the result of the light amount compensation cancel each other. Accordingly, even if an abnormality occurs in a light source, it is possible to provide a natural display image as if no abnormality had occurred.
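This cancellation can be sketched under a simple linear model in which the perceived luminance is the product of the panel image luminance and the backlight illumination; the model, the 8-bit ranges, and the function name below are assumptions for illustration only.

    /* A sketch of the cancellation between light amount compensation
     * and color correction. Perceived luminance is modeled as
     * (panel image luminance) x (backlight illumination), so scaling
     * the image data by the inverse of the local backlight gain keeps
     * the product constant. The linear model and 8-bit ranges are
     * assumptions for illustration. */
    #include <stdint.h>

    /* corrected = original * nominal_light / adjusted_light, i.e. the
     * inverse of the backlight gain applied in that area. */
    static uint8_t color_correct_pixel(uint8_t y, uint8_t nominal,
                                       uint8_t adjusted)
    {
        if (adjusted == 0) return y;        /* avoid division by zero */
        int v = (int)y * nominal / adjusted;
        return (uint8_t)(v > 255 ? 255 : v);
    }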
FIG. 6 shows a second example of the processing performed by the circuit device 100 of the first detailed configuration example when a light amount abnormality occurs. Here, a processing example is shown for a case where a light source remains lit and cannot be controlled due to a short circuit of the light emitting element. An abnormality in which the light emitting element becomes brighter than usual can be handled in a similar way.

The upper left part shows an example of the failure information. Each circle indicates a light source, and the number in the circle is a short-circuit failure flag: "0" indicates that the light source is normal, and "1" indicates that the light source has a short circuit.

The upper middle part shows an example of the light amount compensation. The dimming circuit 135 decreases the light amounts of the eight adjustment target light sources around the abnormal light source. When causing the backlight 330 to emit flat light, the dimming circuit 135 decreases the light amounts of the adjustment target light sources with reference to the light amount of the backlight 330. Alternatively, when performing dimming such as local dimming, the dimming circuit 135 decreases the light amounts of the adjustment target light sources with reference to the light amounts determined by the dimming.

The lower middle part shows the illumination luminance on the display panel at a cross section CC′ of the upper middle part. The cross section CC′ runs along the x coordinate direction of the display panel. BF2 indicates the luminance distribution before the light amount adjustment: in the area corresponding to the abnormal light source that remains turned on due to the short-circuit failure, the illumination luminance on the display panel is increased. AF2 indicates the luminance distribution after the light amount adjustment: since the light amounts of the adjustment target light sources are decreased, the illumination luminance on the display panel is decreased in the area corresponding to the abnormal light source and in the areas corresponding to the adjustment target light sources.

The upper right part shows an example of the color correction. Each cell indicated by a dotted line is an area on the display panel illuminated by one light source. Although FIG. 6 depicts the areas of the light sources as not overlapping one another, the areas may overlap. The color correction circuit 115 performs color correction that decreases luminance on the image data of the area corresponding to the abnormal light source turned on by the short-circuit failure. Further, the color correction circuit 115 performs color correction that increases luminance on the image data of the areas corresponding to the adjustment target light sources. When the illumination luminance is sufficiently compensated in the area corresponding to the abnormal light source, the color correction circuit 115 need not correct the luminance of the image data of that area.

The lower right part shows the luminance of the image data after the luminance adjustment at a cross section DD′ of the upper right part. The cross section DD′ runs along the x coordinate direction of the display panel. As shown in the lower middle and lower right parts, the luminance of the image data is decreased in the area in which the illumination luminance is increased, and the luminance of the image data is increased in the areas in which the illumination luminance is decreased. Since what the user visually recognizes is the combination of the image displayed on the display panel 340 based on the image data and the illumination performed by the backlight 330, the result of the color correction and the result of the light amount compensation cancel each other. Accordingly, even if an abnormality occurs in a light source, it is possible to provide a natural display image as if no abnormality had occurred.

In the present embodiment described above, the circuit device 100 is used in the display apparatus 40. The display apparatus 40 includes the display panel 340 and the backlight 330 including the plurality of light sources. The plurality of light sources 332 are provided corresponding to a plurality of areas 333 of the display panel 340, respectively. The circuit device 100 includes the light amount abnormality detection circuit 145, the dimming circuit 135, and the color correction circuit 115. The light amount abnormality detection circuit 145 detects a light amount abnormality of each light source 332. The dimming circuit 135 performs light amount compensation processing that compensates for the light amount of the area corresponding to the abnormal light source, i.e., the light source in which the light amount abnormality is detected, by adjusting the light amounts of light sources other than the abnormal light source. The color correction circuit 115 performs color correction according to the adjusted light amounts on the image data of the areas corresponding to the adjustment target light sources, i.e., the light sources whose light amounts are adjusted.
According to the present embodiment, the light amount of the area corresponding to the abnormal light source is compensated, and color correction according to the adjusted light amounts is performed on the image data of the areas corresponding to the adjustment target light sources. Since the result of the color correction and the result of the light amount compensation cancel each other, even if an abnormality occurs in a light source, it is possible to provide a natural display image as if no abnormality had occurred.

In the present embodiment, the dimming circuit 135 performs the light amount compensation processing by setting the light sources around the abnormal light source, among the plurality of light sources, as the adjustment target light sources. On the display panel 340, the area illuminated by a given light source and the areas illuminated by the light sources around it normally overlap each other. Therefore, the light amount of the area corresponding to the abnormal light source can be compensated by adjusting the light amounts of the light sources around the abnormal light source.

In the present embodiment, the color correction circuit 115 performs color correction that decreases the luminance of the image data of the areas corresponding to the adjustment target light sources when the dimming circuit 135 increases the light amounts of the adjustment target light sources. The increase in light amount due to the light amount compensation and the decrease in image data luminance due to the color correction cancel each other in those areas. Accordingly, even if an open-circuit failure or an abnormality in which the light amount decreases occurs in a light source, it is possible to provide a natural display image.

In the present embodiment, the color correction circuit 115 performs color correction that increases the luminance of the image data of the areas corresponding to the adjustment target light sources when the dimming circuit 135 decreases the light amounts of the adjustment target light sources. The decrease in light amount due to the light amount compensation and the increase in image data luminance due to the color correction cancel each other in those areas. Accordingly, even if a short-circuit failure or an abnormality in which the light amount increases occurs in a light source, it is possible to provide a natural display image.

In the present embodiment, the dimming circuit 135 performs dimming control that controls the light amount of each light source 332 based on the image data of each area 333, and the color correction circuit 115 performs color correction on the image data of each area based on the light amount controlled by the dimming control. This makes dimming control such as local dimming possible. The light amount compensation, and the color correction accompanying it, work by the same mechanism as the light amount control in the dimming control and the color correction of the image data according to that control.
Therefore, the dimming circuit 135 used for the dimming control and the color correction circuit 115 can also be used, in combination, for the light amount compensation and the color correction accompanying the light amount compensation.

In the present embodiment, the circuit device 100 includes the light source interface circuit 192. The light source interface circuit 192 performs interface processing with the light source driver 320 that drives the plurality of light sources. The light amount abnormality detection circuit 145 acquires the failure information LE on the light sources 332 from the light source driver 320 via the light source interface circuit 192, and detects a light amount abnormality based on the failure information LE. That is, the light amount abnormality detection circuit 145 can acquire the failure information LE detected by the light source driver 320 via the light source interface circuit 192 and detect the light amount abnormality based on it.

In the present embodiment, the failure information LE includes at least one of open-circuit information and short-circuit information on the light emitting element of each light source 332. The light amount abnormality detection circuit 145 can therefore determine whether the light emitting element of a light source in which an abnormality has occurred has an open circuit or a short circuit, and the dimming circuit 135 can execute the light amount compensation according to the content of the abnormality.

4. Head-Up Display Apparatus

FIG. 7 shows a configuration example of a head-up display apparatus 50 as an example of a display apparatus including the circuit device 100 according to the present embodiment. The head-up display apparatus 50 includes the circuit device 100, the display unit 300, and a projection optical system 52. Description of parts similar to those in the configuration example of FIG. 1 is omitted.

The circuit device 100 performs distortion correction on image data received from the processing device 200, and transmits the image data after the distortion correction to the display unit 300. The distortion correction is image correction for performing HUD (head-up display) display with no or reduced distortion by applying to an image the inverse of the image distortion that occurs when an image displayed on a display panel is projected. The image distortion due to projection includes image distortion due to a curved surface of a screen, image distortion due to the projection optical system 52, or both.

The display unit 300 displays the image data after the distortion correction from the circuit device 100 on the display panel, and the backlight emits light to the display panel. The projection optical system 52 includes a reflection plate and the like. The reflection plate reflects light transmitted through the display panel toward the screen 20, and the light reflected by the screen 20 is incident on the eyes 10 of a user. Accordingly, a virtual image corresponding to an image displayed on the display panel is projected into the field of view of the user. The screen 20 transmits light from the real space that forms the background of the HUD display, so the virtual image created by the HUD appears, from the eyes 10 of the user, to overlap the real space. The screen 20 is, for example, a windscreen of a moving object on which the head-up display apparatus 50 is mounted.
Although FIGS. 5 and 6 show an example in which there is one abnormal light source, the present embodiment can also be applied to a case where there are a plurality of abnormal light sources. For example, when the abnormal light sources are scattered, the light amount compensation may be performed using the light sources around each abnormal light source, and the image data may be color-corrected in the areas in which the light amount compensation has been performed. Alternatively, when the abnormal light sources are adjacent to one another, the light amount compensation may be performed using the light sources around the group of abnormal light sources, and the image data may be color-corrected in the areas in which the light amount compensation has been performed. For example, when the light sources in one vertical column fail, the light amount compensation may be performed using the light sources in the two columns on both sides of that column, and the image data may be color-corrected in the areas in which the light amount compensation has been performed.

5. Second Detailed Configuration Example of Circuit Device

FIG. 8 shows a second detailed configuration example of the circuit device 100 that can be applied to the head-up display apparatus 50 or the like. The circuit device 100 includes the input circuit 105, a distortion correction circuit 110, the color correction circuit 115, the luminance analysis circuit 125, the output circuit 130, the dimming circuit 135, the light amount abnormality detection circuit 145, the light source control circuit 180, and the light source interface circuit 192. Description of parts similar to those in the configuration example of FIG. 3 is omitted.

The distortion correction circuit 110 performs the distortion correction on the input image data IMA by using a coordinate conversion between the pixel coordinates of the input image data IMA and the pixel coordinates of the distortion-corrected image data IMC, and outputs the result as the distortion-corrected image data IMC. The distortion correction circuit 110 corresponds to a reverse warp engine or a forward warp engine. Reverse warp is warp processing that converts the pixel coordinates of the distortion-corrected image data IMC into corresponding reference coordinates, and obtains the pixel data of the distortion-corrected image data IMC from the pixel data of the input image data IMA at those reference coordinates. Forward warp is warp processing that converts the pixel coordinates of the input image data IMA into corresponding movement destination coordinates, and obtains the pixel data of the distortion-corrected image data IMC at the movement destination coordinates from the pixel data of the input image data IMA at the original pixel coordinates. The coordinate conversions in the reverse warp and the forward warp are defined by a warp parameter. The warp parameter is, for example, a table in which coordinates of the input image data IMA and coordinates of the distortion-corrected image data IMC are associated with each other, a table indicating the movement amount between those coordinates, or the coefficients of a polynomial associating those coordinates.
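As a minimal sketch of the reverse warp described above, the following C fragment uses a per-pixel table of reference coordinates with nearest-neighbor sampling; a practical warp engine would typically store a sparser grid and interpolate, and the table layout here is an assumption, not the embodiment's warp parameter format.

    /* A minimal reverse-warp sketch in the spirit of the distortion
     * correction circuit 110: for each output pixel (x, y), fetch the
     * input pixel at the reference coordinates given by the table. */
    #include <stdint.h>

    typedef struct { int16_t u, v; } RefCoord; /* coords in the input */

    static void reverse_warp(const uint8_t *in, int in_w, int in_h,
                             uint8_t *out, int out_w, int out_h,
                             const RefCoord *table /* out_w*out_h */)
    {
        for (int y = 0; y < out_h; y++) {
            for (int x = 0; x < out_w; x++) {
                RefCoord rc = table[y * out_w + x];
                uint8_t px = 0;            /* outside the input: black */
                if (rc.u >= 0 && rc.u < in_w && rc.v >= 0 && rc.v < in_h)
                    px = in[rc.v * in_w + rc.u];
                out[y * out_w + x] = px;
            }
        }
    }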
The luminance analysis circuit 125 analyzes the luminance of the distortion-corrected image data IMC, and outputs the analysis result as the luminance information YA. The dimming circuit 135 performs the dimming and the light amount compensation based on the luminance information YA and the detection result LDET of the light amount abnormality, and outputs the light amount information DIM of the light sources. The color correction circuit 115 performs the color correction on the distortion-corrected image data IMC based on the light amount information DIM. The output circuit 130 transmits the image data from the color correction circuit 115 to the display unit 300 as the output image data IMB.

FIG. 9 shows an example of the processing performed by the circuit device 100 of the second detailed configuration example when a light amount abnormality occurs, for a case where the light emitting element is turned off due to an open circuit of the light emitting element. As shown in the upper left diagram, the pixel coordinates of the input image data IMA are denoted by (u, v). As shown in the upper middle diagram, the pixel coordinates of the distortion-corrected image data IMC are denoted by (x, y). u and x are coordinates in the horizontal scanning direction, and v and y are coordinates in the vertical scanning direction. The distortion correction circuit 110 performs the coordinate conversion between the coordinates (u, v) of the input image data IMA and the coordinates (x, y) of the distortion-corrected image data IMC, and maps the input image data IMA to the distortion-corrected image data IMC based on the result. As shown in the lower diagram, the dimming circuit 135 compensates for the light amount of an abnormal light source by using the surrounding light sources, based on the failure flag. As shown in the upper right diagram, the color correction circuit 115 performs the color correction on the distortion-corrected image data IMC, and the output circuit 130 outputs the image data from the color correction circuit 115 as the output image data IMB.

Since the color correction is performed after the distortion correction, the area corresponding to each light source in the distortion-corrected image data IMC may be regarded as the same as the area corresponding to each light source in the output image data IMB. The same applies to dimming such as local dimming based on a luminance analysis result: since the luminance analysis target is the distortion-corrected image data IMC, the area corresponding to each light source in the luminance information may be regarded as the same as the area corresponding to each light source in the output image data IMB.

In the present embodiment described above, the circuit device 100 includes the distortion correction circuit 110. The distortion correction circuit 110 performs the distortion correction on the input image data IMA, and outputs the distortion-corrected image data IMC. The color correction circuit 115 receives the distortion-corrected image data IMC as its image data, and performs the color correction on the distortion-corrected image data IMC. Since the color correction is performed after the distortion correction, the area corresponding to each light source in the distortion-corrected image data IMC and the area corresponding to each light source on the display panel can be regarded as the same.
Accordingly, it is not necessary to consider the distortion correction in the color correction.

6. Third Detailed Configuration Example of Circuit Device

FIG. 10 shows a third detailed configuration example of the circuit device 100 that can be applied to the head-up display apparatus 50 or the like. The circuit device 100 includes the input circuit 105, the distortion correction circuit 110, the color correction circuit 115, the luminance analysis circuit 125, the output circuit 130, the dimming circuit 135, the light amount abnormality detection circuit 145, the light source control circuit 180, and the light source interface circuit 192. Description of parts similar to those in the configuration examples of FIG. 3 or FIG. 8 is omitted.

The luminance analysis circuit 125 analyzes the luminance of the input image data IMA, and outputs the analysis result as the luminance information YA. The dimming circuit 135 performs the dimming and the light amount compensation based on the luminance information YA and the detection result LDET of the light amount abnormality, and outputs the light amount information DIM of the light sources. The color correction circuit 115 performs the color correction on the input image data IMA based on the light amount information DIM, and outputs the color-corrected image data IMD. The distortion correction circuit 110 performs the distortion correction on the color-corrected image data IMD by using a coordinate conversion between the pixel coordinates of the color-corrected image data IMD and the pixel coordinates of the output image data IMB. The output circuit 130 transmits the image data from the distortion correction circuit 110 to the display unit 300 as the output image data IMB.

FIG. 11 shows an example of the processing performed by the circuit device 100 of the third detailed configuration example when a light amount abnormality occurs, for a case where the light emitting element is turned off due to an open circuit of the light emitting element. As shown in the lower diagram, the dimming circuit 135 compensates for the light amount of an abnormal light source by using the surrounding light sources, based on the failure flag. As shown in the upper left and middle diagrams, the color correction circuit 115 performs the color correction on the input image data IMA and outputs the color-corrected image data IMD. The pixel coordinates of the color-corrected image data IMD are denoted by (u, v). As shown in the upper right diagram, the pixel coordinates of the output image data IMB are denoted by (x, y). The distortion correction circuit 110 performs the coordinate conversion between the coordinates (u, v) of the color-corrected image data IMD and the coordinates (x, y) of the output image data IMB, and maps the color-corrected image data IMD to the output image data IMB based on the result.

As shown in the upper left to right diagrams, since the color correction is performed before the distortion correction, the area corresponding to each light source in the input image data IMA, which is the color correction target, differs from the area corresponding to each light source in the output image data IMB. The area corresponding to each light source in the input image data IMA is referred to as an input-image-side area. The color correction circuit 115 determines the input-image-side area based on the correspondence between (u, v) and (x, y) in the distortion correction.
The color correction circuit 115 acquires, for example, correspondence information between (u, v) and (x, y) from the distortion correction circuit 110. Alternatively, a storage circuit (not shown) may store table information indicating the correspondence between (u, v) and (x, y), and the color correction circuit 115 may determine the input-image-side area based on the table information. The same applies to dimming such as local dimming based on a luminance analysis result: since the luminance analysis target is the input image data IMA, the area corresponding to each light source in the luminance information differs from the area corresponding to each light source in the output image data IMB, and the dimming circuit 135 determines the area corresponding to each light source in the luminance information based on the correspondence between (u, v) and (x, y) in the distortion correction.

In the present embodiment, the circuit device 100 includes the distortion correction circuit 110. The distortion correction circuit 110 performs the distortion correction on the color-corrected image data IMD output from the color correction circuit 115, and outputs the distortion-corrected image data. The input image data IMA is input as the image data to the color correction circuit 115. The areas corresponding to the adjustment target light sources in the distortion-corrected image data and the input-image-side areas of the input image data IMA correspond to each other through the distortion correction. The color correction circuit 115 therefore performs the color correction on the input image data IMA of the input-image-side areas, and outputs the color-corrected image data IMD to the distortion correction circuit 110.

Since the color correction is performed before the distortion correction, the input-image-side area corresponding to each light source in the color-corrected image data IMD and the area corresponding to each light source on the display panel differ from each other. Since the coordinates of the color-corrected image data IMD and the coordinates of the display panel are associated with each other by the distortion correction, the color correction circuit 115 can determine the correspondence between the input-image-side area and the area corresponding to each light source on the display panel.

In the example of FIG. 10, the distortion-corrected image data corresponds to the output image data IMB and is output from the output circuit 130 to the outside of the circuit device 100. However, the configuration is not limited to that of FIG. 10, and a circuit that performs some kind of image processing may be further provided between the distortion correction circuit 110 and the output circuit 130.
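How the input-image-side area described above can be determined from the warp correspondence may be sketched as follows, assuming the same kind of per-pixel coordinate table as in the reverse-warp sketch earlier; computing a bounding box in (u, v) of all mapped output pixels is one plausible realization, not necessarily the one used by the embodiment.

    /* A sketch of determining the input-image-side area: the bounding
     * box, in input coordinates (u, v), of all output pixels (x, y) of
     * one light source area, using the same warp correspondence table
     * as the distortion correction. Table layout is an assumption. */
    #include <stdint.h>

    typedef struct { int16_t u, v; } RefCoord;
    typedef struct { int u0, v0, u1, v1; } InputArea;

    static InputArea input_side_area(const RefCoord *table, int out_w,
                                     int x0, int y0, int x1, int y1)
    {
        InputArea a = { INT16_MAX, INT16_MAX, -1, -1 };
        for (int y = y0; y <= y1; y++) {
            for (int x = x0; x <= x1; x++) {
                RefCoord rc = table[y * out_w + x];
                if (rc.u < 0 || rc.v < 0) continue; /* unmapped pixel */
                if (rc.u < a.u0) a.u0 = rc.u;
                if (rc.v < a.v0) a.v0 = rc.v;
                if (rc.u > a.u1) a.u1 = rc.u;
                if (rc.v > a.v1) a.v1 = rc.v;
            }
        }
        return a;
    }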
7. Fourth Detailed Configuration Example of Circuit Device

FIG. 12 shows a fourth detailed configuration example of the circuit device 100. The circuit device 100 includes the input circuit 105, the color correction circuit 115, the luminance analysis circuit 125, the output circuit 130, the dimming circuit 135, the light amount abnormality detection circuit 145, an insufficient compensation detection circuit 155, the light source control circuit 180, the host interface circuit 191, and the light source interface circuit 192. Description of parts similar to those in the configuration example of FIG. 3 is omitted. FIG. 12 shows an example in which the insufficient compensation detection circuit 155 and the host interface circuit 191 are added to the first detailed configuration example, but they may also be added to the second or third detailed configuration example.

The insufficient compensation detection circuit 155 detects insufficient compensation of a light amount based on information from the dimming circuit 135. As an example, the insufficient compensation detection circuit 155 determines that a light amount is insufficiently compensated when the light amounts of the light sources around an abnormal light source cannot be increased because they are close to the maximum light amount, or cannot be decreased because they are close to the minimum light amount.

The host interface circuit 191 communicates with the processing device 200, which is the host of the circuit device 100. The host interface circuit 191 may be any of various communication interfaces used for communication between circuit devices, for example SPI or I2C. The host interface circuit 191 and the light source interface circuit 192 are not limited to separately provided interface circuits, and may be one common interface circuit. When the insufficient compensation detection circuit 155 detects insufficient compensation of the light amount, the host interface circuit 191 notifies the processing device 200 of this information. When notified of the insufficient compensation, the processing device 200 may transmit to the circuit device 100, in addition to the input image data IMA, display content notifying the occurrence of an abnormality. Alternatively, when the insufficient compensation detection circuit 155 detects insufficient compensation of the light amount, the output circuit 130 of the circuit device 100 may transmit to the display apparatus 40, in addition to the output image data IMB, display content notifying the occurrence of the abnormality.

In the present embodiment described above, the circuit device 100 includes the host interface circuit 191. When the light amount compensation of the area corresponding to the abnormal light source is insufficient even though the light amount compensation processing is performed, the host interface circuit 191 outputs an error signal to the host, so that the host can execute processing to handle the insufficient compensation. In the example of FIG. 12, the host corresponds to the processing device 200.
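The insufficiency check of the insufficient compensation detection circuit 155 can be sketched as below, assuming 8-bit light amounts and an illustrative headroom threshold; the embodiment states only the qualitative condition (neighbors near the maximum or minimum light amount).

    /* A sketch of the insufficient compensation check: compensation is
     * flagged as insufficient when every surrounding source is already
     * close to its limit in the direction the compensation needs. The
     * HEADROOM margin is an illustrative assumption. */
    #include <stdbool.h>
    #include <stdint.h>

    #define HEADROOM 8 /* assumed margin near the 0..255 limits */

    static bool compensation_insufficient(const uint8_t *neighbors, int n,
                                          bool need_increase)
    {
        for (int k = 0; k < n; k++) {
            if (need_increase && neighbors[k] < 255 - HEADROOM)
                return false;  /* this neighbor can still be raised */
            if (!need_increase && neighbors[k] > HEADROOM)
                return false;  /* this neighbor can still be lowered */
        }
        return true;           /* every neighbor is pinned at a limit */
    }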
The circuit device according to the present embodiment described above is used in a display apparatus. The display apparatus includes the display panel and the backlight including the plurality of light sources, the plurality of light sources being provided corresponding to the plurality of areas of the display panel, respectively. The circuit device includes the light amount abnormality detection circuit, the dimming circuit, and the color correction circuit. The light amount abnormality detection circuit detects a light amount abnormality of each light source. The dimming circuit performs light amount compensation processing that compensates for the light amount of the area corresponding to the abnormal light source, i.e., the light source in which the light amount abnormality is detected, by adjusting the light amounts of light sources other than the abnormal light source. The color correction circuit performs color correction according to the adjusted light amounts on the image data of the areas corresponding to the adjustment target light sources, i.e., the light sources whose light amounts are adjusted.

According to the present embodiment, the light amount of the area corresponding to the abnormal light source is compensated, and color correction according to the adjusted light amounts is performed on the image data of the areas corresponding to the adjustment target light sources. Since the result of the color correction and the result of the light amount compensation cancel each other, even if an abnormality occurs in a light source, it is possible to provide a natural display image as if no abnormality had occurred.

In the present embodiment, the dimming circuit may perform the light amount compensation processing by setting the light sources around the abnormal light source, among the plurality of light sources, as the adjustment target light sources. On the display panel, the area illuminated by a given light source and the areas illuminated by the light sources around it normally overlap each other. Therefore, the light amount of the area corresponding to the abnormal light source can be compensated by adjusting the light amounts of the light sources around the abnormal light source.

In the present embodiment, when the dimming circuit increases the light amounts of the adjustment target light sources, the color correction circuit may perform color correction that decreases the luminance of the image data of the areas corresponding to the adjustment target light sources. The increase in light amount due to the light amount compensation and the decrease in image data luminance due to the color correction then cancel each other, so that a natural display image can be provided even if an open-circuit failure or an abnormality in which the light amount decreases occurs in a light source.

In the present embodiment, when the dimming circuit decreases the light amounts of the adjustment target light sources, the color correction circuit may perform color correction that increases the luminance of the image data of the areas corresponding to the adjustment target light sources. The decrease in light amount due to the light amount compensation and the increase in image data luminance due to the color correction then cancel each other, so that a natural display image can be provided even if a short-circuit failure or an abnormality in which the light amount increases occurs in a light source.

In the present embodiment, the dimming circuit may perform dimming control that controls the light amount of each light source based on the image data of each area, and the color correction circuit may perform the color correction on the image data of each area based on the light amount controlled by the dimming control.
This makes dimming control such as local dimming possible. The light amount compensation, and the color correction accompanying it, work by a mechanism similar to the light amount control in the dimming control and the color correction of the image data according to that control. Therefore, the dimming circuit used for the dimming control and the color correction circuit can also be used, in combination, for the light amount compensation and the color correction accompanying the light amount compensation.

In the present embodiment, the circuit device may include the distortion correction circuit. The distortion correction circuit may perform the distortion correction on the input image data and output the distortion-corrected image data, and the color correction circuit may receive the distortion-corrected image data as its image data and perform the color correction on it. Since the color correction is then performed after the distortion correction, the area corresponding to each light source in the distortion-corrected image data and the area corresponding to each light source on the display panel can be regarded as the same, and it is not necessary to consider the distortion correction in the color correction.

In the present embodiment, the circuit device may include the distortion correction circuit, and the distortion correction circuit may perform the distortion correction on the color-corrected image data output from the color correction circuit and output the distortion-corrected image data. The color correction circuit may receive the input image data as its image data. When the areas corresponding to the adjustment target light sources in the distortion-corrected image data and the input-image-side areas of the input image data correspond to each other through the distortion correction, the color correction circuit may perform the color correction on the input image data of the input-image-side areas and output the color-corrected image data to the distortion correction circuit. Since the color correction is then performed before the distortion correction, the input-image-side area corresponding to each light source in the color-corrected image data and the area corresponding to each light source on the display panel differ from each other; since the coordinates of the color-corrected image data and the coordinates of the display panel are associated with each other by the distortion correction, the color correction circuit can determine the correspondence between the input-image-side area and the area corresponding to each light source on the display panel.

In the present embodiment, the circuit device may include the light source interface circuit. The light source interface circuit may perform the interface processing with the light source driver that drives the plurality of light sources. The light amount abnormality detection circuit may acquire the failure information on the light sources from the light source driver via the light source interface circuit, and detect the light amount abnormality based on the failure information. In this way, the light amount abnormality detection circuit can acquire, via the light source interface circuit, the failure information detected by the light source driver.
The light amount abnormality detection circuit can then detect the light amount abnormality based on the failure information.

In the present embodiment, the failure information may include at least one of open-circuit information and short-circuit information on the light emitting element of each light source. The light amount abnormality detection circuit can then determine whether the light emitting element of the light source in which the abnormality occurs has an open circuit or a short circuit, and the dimming circuit can execute the light amount compensation according to the content of the abnormality.

In the present embodiment, the circuit device may include the host interface circuit. When the light amount compensation of the area corresponding to the abnormal light source is insufficient even though the light amount compensation processing is performed, the host interface circuit may output the error signal to the host, so that the host can execute processing to handle the insufficient compensation.

The display apparatus according to the present embodiment includes the circuit device described in any one of the above, the display panel that displays an image based on the image data, and the backlight.

Although the present embodiment has been described in detail as above, it will be readily apparent to those skilled in the art that many modifications may be made without departing substantially from the novel matters and effects of the present disclosure. All such modifications are therefore intended to be included within the scope of the present disclosure. For example, a term that appears at least once in the description or the drawings together with a different term having a broader or the same meaning can be replaced with that different term anywhere in the description or the drawings. All combinations of the present embodiment and the modifications are also included in the scope of the present disclosure. Further, the configurations, operations, and the like of the circuit device, the display unit, the processing device, the display apparatus, the head-up display apparatus, and the like are not limited to those described in the present embodiment, and various modifications can be made.
DETAILED DESCRIPTION

Exemplary embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure can be embodied in various forms and should not be limited by the embodiments set forth herein. Rather, these embodiments are provided so that the present disclosure can be more thoroughly understood, and so that the scope of the present disclosure can be fully conveyed to those skilled in the art.

The handwriting reading device includes an electromagnetic board, a System on Chip (SOC), a display controller, and an ink screen, which are electrically connected in sequence. The electromagnetic board is provided with an electromagnetic film, and a handwriting operation instruction generated by an electromagnetic stylus on the ink screen is detected by the electromagnetic film.

In the prior art, after the electromagnetic film of the handwriting reading device detects an electromagnetic signal triggered by a handwriting stylus, the electromagnetic signal is converted into report point data and transmitted to the core layer (also referred to as the Kernel) of the SOC. The Kernel transmits the report point data to the upper application layer, the application layer generates a handwriting trajectory image to be displayed based on the report point data and converts it into a grayscale image, and the application layer then transmits the grayscale image back to the Kernel. The SOC transmits the grayscale image to a display controller such as an Electronic Paper Display Controller (EPDC) or a Timer Control Register (TCON). The EPDC or TCON obtains the waveforms for driving the respective pixel points based on a grayscale Look-Up-Table (LUT) entry for each pixel point of the grayscale image, and drives the electronic paper display (EPD) with those waveforms. It can be seen that in this process, the report point data travels from the Kernel of the SOC to its application layer, back to the Kernel, and then to the EPDC or TCON, which increases the time consumed by the handwriting display process. The hardware structure of the existing handwriting reading device is shown in FIG. 1, and the transmission process of the report point data is shown in FIG. 2.

FIG. 3 shows a schematic flowchart of a method for processing report point data provided by an embodiment of the present disclosure, which is applied in the handwriting reading device. As shown in FIG. 3, the method includes the following steps.

Step 110: transmitting report point data associated with received handwriting to a display controller by a System on Chip.

After the electromagnetic film of the handwriting reading device detects the electromagnetic signal triggered by the handwriting stylus, the electromagnetic signal is converted into the report point data and transmitted to the System on Chip. The System on Chip directly transmits the report point data associated with the handwriting to the display controller at the hardware layer. Specifically, the core layer of the System on Chip directly transmits the report point data to the display controller.
In this way, the path of the report point data from the core layer of the System on Chip to its application layer and back from the application layer to the core layer is eliminated, which saves transmission time for the report point data, shortens the time consumed in drawing the handwriting, and improves the speed of drawing the handwriting.

The report point data of the handwriting generally includes pressure-sensitive data in addition to the coordinate data of the handwriting. In the prior art, the application layer of the System on Chip realizes different stroke effects based on the pressure-sensitive data when generating a handwriting image, and the application layer also synthesizes the image based on attribute information such as the handwriting color and line thickness preselected by the user. The core layer of the System on Chip, however, does not have the handwriting-effect synthesis ability of the application layer. Therefore, after the report point data is directly transmitted by the core layer to the display controller, the display screen can only be driven to display a black line, without stroke, thickness, or other effects.

Since the pressure-sensitive data is not needed in the solution in which the core layer of the System on Chip directly transmits the report point data to the display controller, in some embodiments of the present disclosure the System on Chip may remove the pressure-sensitive data from the report point data associated with the handwriting so that only the coordinate data remains, and transmit only the coordinate data to the display controller. In this way, the amount of data transmitted can be reduced, the transmission time of the report point data can be further shortened, and the speed of drawing the handwriting can be improved.

Step 120: looking up an LUT and acquiring a waveform for driving an ink screen based on the report point data by the display controller.

After the display controller receives the report point data, it looks up a Look-Up-Table (LUT) based on the report point data to acquire the waveform for driving the ink screen. Specifically, as shown in FIG. 4, step 120 can be implemented in the following way:

Step 121: receiving the report point data by the display controller;

Step 122: establishing a layer cache by the display controller;

Step 123: generating an image of the handwriting in the layer cache based on the report point data by the display controller;

Step 124: looking up the LUT and acquiring the waveform for driving the ink screen based on the image of the handwriting by the display controller.

Because the display controller does not have the handwriting-effect synthesis ability of the application layer, after the report point data is directly transmitted from the core layer to the display controller, the image generated by the display controller in step 123 does not include effects such as the color, thickness, and stroke of the handwriting; the display controller can only drive the display screen to display black lines, without stroke, thickness, or other line effects.

Step 130: driving the ink screen to display the handwriting using the waveform by the display controller.
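Electrophoretic drive waveforms are commonly indexed by the pair of a pixel's current and target gray levels; the following C fragment sketches such a lookup for steps 120-124, but the 16-level depth, the table layout, and the types are assumptions for illustration and are not taken from the disclosure.

    /* A sketch of the grayscale-LUT-to-waveform step performed by the
     * EPDC or TCON. The table contents here are illustrative
     * assumptions, not vendor data. */
    #include <stdint.h>

    #define GRAY_LEVELS 16

    typedef struct {
        const int8_t *phases;   /* drive polarity per frame phase */
        int           n_phases; /* number of frames in the waveform */
    } Waveform;

    /* lut[old][new] selects the waveform that moves a pixel from its
     * current gray level to the target gray level. */
    static Waveform lookup_waveform(
        const Waveform lut[GRAY_LEVELS][GRAY_LEVELS],
        uint8_t old_gray, uint8_t new_gray)
    {
        return lut[old_gray % GRAY_LEVELS][new_gray % GRAY_LEVELS];
    }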
In the embodiment of the present disclosure, the report point data is transmitted directly from the System on Chip to the display controller at the hardware layer. The path of the report point data from the core layer of the System on Chip to its application layer and back from the application layer to the core layer is eliminated, which saves transmission time for the report point data, shortens the time consumed in drawing the handwriting, and improves the speed of drawing the handwriting.

When it is desired to obtain both a fast handwriting response and a good display effect, another embodiment of the present disclosure provides an implementation that takes both the speed and the effect into account. In this method, the report point data from the electromagnetic film is transmitted along two paths. On one path, the report point data is transmitted from the System on Chip directly to the display controller for display first; on the other path, it is transmitted by the core layer of the System on Chip to its application layer at the same time or later. The application layer performs image synthesis (adding color, thickness, stroke, and other effects to the handwriting line) to form a grayscale image and returns the grayscale image to the core layer, and the core layer sends the grayscale image to the display controller.

FIG. 5 shows a schematic flowchart of a method for processing report point data provided by another embodiment of the present disclosure, which is applied in the handwriting reading device. As shown in FIG. 5, the method includes the following steps.

Step 501: receiving report point data associated with the handwriting by a System on Chip.

After the electromagnetic film of the handwriting reading device detects the electromagnetic signal triggered by the handwriting stylus, the electromagnetic signal is converted into the report point data and transmitted to the System on Chip.

Step 502A: transmitting the report point data associated with the handwriting to a display controller by the System on Chip.

The System on Chip directly transmits the report point data associated with the handwriting to the display controller at the hardware layer. Specifically, the core layer of the System on Chip directly transmits the report point data to the display controller. In this way, the path of the report point data from the core layer of the System on Chip to its application layer and back from the application layer to the core layer is eliminated, which saves transmission time for the report point data, shortens the time consumed in drawing the handwriting, and improves the speed of drawing the handwriting.

The report point data of the handwriting generally includes pressure-sensitive data in addition to the coordinate data of the handwriting. In the prior art, the application layer of the System on Chip realizes different stroke effects based on the pressure-sensitive data when generating a handwriting image, and the application layer also synthesizes the image based on attribute information such as the handwriting color and line thickness preselected by the user. The core layer of the System on Chip, however, does not have the handwriting-effect synthesis ability of the application layer.
Therefore, after the report point data is directly transmitted by the core layer to the display controller, the display screen can only be driven to display a black line, without stroke, thickness, or other effects. Since the pressure-sensitive data is not needed in the solution in which the core layer of the System on Chip directly transmits the report point data to the display controller, in some embodiments of the present disclosure the System on Chip may remove the pressure-sensitive data from the report point data associated with the handwriting so that only the coordinate data remains, and transmit only the coordinate data to the display controller. In this way, the amount of data transmitted can be reduced, the transmission time of the report point data can be further shortened, and the speed of drawing the handwriting can be improved.

Step 503A: looking up an LUT and acquiring a first waveform for driving the ink screen based on the report point data by the display controller.

After the display controller receives the report point data, it looks up the Look-Up-Table based on the report point data to acquire the first waveform for driving the ink screen. Specifically, as shown in FIG. 6, step 503A can be implemented in the following way:

Step a1: receiving the report point data by the display controller;

Step a2: establishing a layer cache by the display controller;

Step a3: generating a first image of the handwriting in the layer cache based on the report point data by the display controller;

Step a4: looking up the LUT and acquiring the first waveform for driving the ink screen based on the first image of the handwriting by the display controller.

Because the display controller does not have the handwriting-effect synthesis ability of the application layer, after the report point data is directly transmitted by the core layer to the display controller, the image generated by the display controller in step 503A does not include effects such as the color, thickness, and stroke of the handwriting; the display controller can only drive the display screen to display black lines, without stroke, thickness, or other line effects.

Step 504A: driving the ink screen to display the handwriting using the first waveform by the display controller.

Step 502B: transmitting the report point data associated with the handwriting to an application layer of the System on Chip by a core layer of the System on Chip. It should be noted that step 502B is performed at the same time as or after step 502A.

Step 503B: generating a second image of the handwriting based on the report point data by the application layer. The second image includes one or more of the color, thickness, and stroke of the handwriting.

Step 504B: converting the second image into a grayscale image by the application layer.

Step 505B: transmitting the grayscale image to the core layer by the application layer.

Step 506B: transmitting the grayscale image to the display controller by the core layer.
In one implementation, the image of a second frame can be refreshed when the user finishes writing a stroke of a handwriting line and lifts the pen. The visual effect presented at this time is: first, a black line with no effects is displayed along with the handwriting; then, when the stylus is lifted, the screen is refreshed, which turns the original black line with no effects into a line with effects. Since the coordinate data used to draw the handwriting in succession is the same, it appears visually that the latter line "replaces" the former line, although the latter line is actually a redrawn line. Specifically, as shown in FIG. 7, step 506B further includes the following steps:

Step b1: receiving the grayscale image transmitted from the application layer by the core layer;

Step b2: transmitting the grayscale image to the display controller by the core layer after the tip of the handwriting stylus leaves the ink screen.

The tip of the handwriting stylus is determined to have left the ink screen when reception of the report point data stops. When the user lifts the stylus, the electromagnetic film can no longer detect new report point data, and the core layer of the System on Chip stops receiving report point data. Therefore, by determining whether reception of the report point data has stopped, it can be determined whether the tip of the handwriting stylus has left the ink screen.

Through the above method, the System on Chip transmits the report point data along two paths. When the core layer receives the grayscale image returned by the application layer, it does not transmit the grayscale image to the display controller immediately; it transmits it only after receiving a "stylus lifted" signal reported by the electromagnetic film. In this way, the line of the first path is drawn first, and then the line of the second path, with effects, is drawn.

Step 507B: looking up the LUT and acquiring a second waveform for driving the ink screen based on the grayscale image by the display controller.

Step 508B: driving the ink screen to display the handwriting using the second waveform by the display controller.

In the above steps, in steps 502A-504A, the report point data is transmitted directly at the hardware layer from the System on Chip to the display controller, and the display controller drives the ink screen to display; in steps 502B-508B, at the same time as or after the report point data is transmitted at the hardware layer from the System on Chip to the display controller, it is transmitted from the core layer of the System on Chip to its application layer, the application layer performs image synthesis, the grayscale image is then transmitted to the display controller through the core layer, and the display controller drives the ink screen to display.

The report point data transmitted at the hardware layer by the System on Chip is processed by the display controller before the grayscale image synthesized by the application layer is processed. The display effect observed by the user is therefore: a black line with no thickness or stroke effects is displayed on the screen first, and the screen is then refreshed once to turn the original black line into a line with the various effects, so that the handwriting is displayed quickly while the display effect of the handwriting is also taken into account.
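The two-path dispatch and the pen-lift gating can be sketched together as follows; the sink functions, the millisecond clock parameter, and the timeout value are hypothetical placeholders rather than an actual kernel API.

    /* A sketch of the two-path handling in the core layer, combining
     * the dispatch of steps 502A/502B with the pen-lift gating of
     * step b2. All names and the timeout are illustrative. */
    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    #define LIFT_TIMEOUT_MS 50 /* assumed: no reports for 50 ms = lifted */

    typedef struct { int16_t x, y; uint16_t pressure; } ReportPoint;

    static uint64_t last_report_ms;

    /* Placeholder sinks standing in for the two transmission paths. */
    static void to_display_controller(int16_t x, int16_t y)
    { printf("fast path: (%d,%d)\n", x, y); }
    static void to_application_layer(const ReportPoint *r)
    { printf("slow path: (%d,%d) p=%u\n", r->x, r->y, r->pressure); }

    /* Called for every report point from the electromagnetic film. */
    void on_report_point(const ReportPoint *r, uint64_t now_ms)
    {
        last_report_ms = now_ms;
        to_display_controller(r->x, r->y); /* path 1: coordinates only */
        to_application_layer(r);           /* path 2: full report */
    }

    /* Step b2: the synthesized grayscale image is forwarded only once
     * report points have stopped arriving, i.e. the stylus has lifted. */
    bool may_forward_grayscale(uint64_t now_ms)
    {
        return (now_ms - last_report_ms) > LIFT_TIMEOUT_MS;
    }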
An embodiment of the present disclosure provides a non-volatile computer-readable storage medium in which at least one executable instruction is stored, the executable instruction causing a System on Chip and a display controller to perform the following operations:
transmitting report point data associated with received handwriting to a display controller by a System on Chip;
looking up an LUT and acquiring a first waveform for driving an ink screen based on the report point data by the display controller; and
driving the ink screen to display the handwriting using the first waveform by the display controller.

In an optional way, the looking up an LUT and acquiring the first waveform for driving the ink screen based on the report point data by the display controller further includes:
receiving the report point data by the display controller;
establishing a layer cache by the display controller;
generating a first image of the handwriting in the layer cache based on the report point data by the display controller; and
looking up the LUT and acquiring the first waveform for driving the ink screen based on the first image of the handwriting by the display controller.

In an optional way, the transmitting the report point data associated with received handwriting to the display controller by the System on Chip further includes:
removing pressure-sensitive data from the report point data associated with the received handwriting by the System on Chip so that only coordinate data remains in the report point data; and
transmitting only the coordinate data in the report point data to the display controller by the System on Chip.

In an optional way, the executable instruction further causes the System on Chip and the display controller to perform the following operations:
transmitting the report point data associated with the received handwriting to an application layer of the System on Chip by a core layer of the System on Chip;
generating a second image of the handwriting based on the report point data by the application layer;
converting the second image into a grayscale image by the application layer;
transmitting the grayscale image to the core layer by the application layer;
transmitting the grayscale image to the display controller by the core layer;
looking up the LUT and acquiring a second waveform for driving the ink screen based on the grayscale image by the display controller; and
driving the ink screen to display the handwriting using the second waveform by the display controller.

In an optional way, the transmitting the report point data associated with the received handwriting to the application layer of the System on Chip by the core layer of the System on Chip is performed at the same time as or after the transmitting the report point data associated with received handwriting to the display controller by the System on Chip.
In an optional way, the transmitting the grayscale image to the display controller by the core layer further includes: receiving, by the core layer, the grayscale image transmitted from the application layer; and transmitting the grayscale image to the display controller by the core layer after a tip of a handwriting stylus leaves the ink screen. In an optional way, the tip of the handwriting stylus is determined to have left the ink screen if reception of the report point data has stopped. In an optional way, the second image of the handwriting includes one or more of color, thickness and stroke of the handwriting. FIG. 8 shows a structural schematic diagram of a handwriting reading device provided by an embodiment of the present disclosure. As shown in FIG. 8, the handwriting reading device 800 includes an electromagnetic board 81, a System on Chip 82, a display controller 83 and an ink screen 84, which are electrically connected in sequence. The electromagnetic board 81 is configured to detect handwriting operation instructions generated by the electromagnetic stylus on the ink screen. Main control devices such as a central processing unit (CPU) are integrated in the System on Chip 82, which is the main chip of the reader. The display controller 83 is an Electronic Paper Display Controller (EPDC) or a timing controller (TCON). The TCON is also referred to as a logic board, a screen driver board, or a central control board. The System on Chip 82 outputs a grayscale image Look-Up Table (LUT) to the display controller 83, and the display controller 83 is configured to obtain a waveform based on the grayscale image Look-Up Table, and to drive the ink particles on the ink screen 84 to move based on the waveform, so as to realize imaging. The System on Chip 82 is provided with a communication interface, the System on Chip 82 is electrically connected with the display controller 83 through the communication interface, and the System on Chip 82 is configured to send image data to be displayed to the display controller 83 through the communication interface. The display controller 83 is electrically connected with the ink screen 84, and the display controller 83 is configured to convert the image data into the waveform, and to drive the ink screen 84 to display the contents of the image data based on the waveform. The System on Chip 82 and the display controller 83 are respectively configured to store at least one executable instruction 85, which causes the System on Chip 82 and the display controller 83 to perform the following operations: transmitting report point data associated with received handwriting to a display controller by a System on Chip; looking up a LUT table and acquiring a first waveform of driving an ink screen based on the report point data by the display controller; and driving the ink screen to display the handwriting using the first waveform by the display controller. In an optional way, the looking up the LUT table and acquiring the first waveform of driving the ink screen based on the report point data by the display controller further includes: receiving the report point data by the display controller; establishing a layer cache by the display controller; generating a first image of the handwriting in the layer cache based on the report point data by the display controller; and looking up the LUT table and acquiring the first waveform of driving the ink screen based on the first image of the handwriting by the display controller.
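The LUT lookup that turns grayscale values into drive waveforms can be sketched as follows. This is a schematic C illustration under assumed table dimensions; the actual EPDC waveform format and table contents are not specified by the disclosure.

```c
#include <stdint.h>

#define GRAY_LEVELS     16  /* assumed number of gray levels              */
#define WAVEFORM_PHASES 32  /* assumed number of drive phases per update  */

/* Assumed LUT: for each (from, to) gray-level pair, a sequence of drive
 * values that moves the ink particles; contents would be loaded from the
 * panel's waveform data in a real device. */
static const int8_t lut[GRAY_LEVELS][GRAY_LEVELS][WAVEFORM_PHASES];

/* Look up the waveform for driving one pixel from its current gray level
 * to the target gray level taken from the grayscale image. */
const int8_t *acquire_waveform(uint8_t from_gray, uint8_t to_gray) {
    return lut[from_gray & (GRAY_LEVELS - 1)][to_gray & (GRAY_LEVELS - 1)];
}
```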
In an optional way, the transmitting the report point data associated with received handwriting to the display controller by the System on Chip further includes: removing pressure-sensitive data from the report point data associated with the received handwriting by the System on Chip so that only coordinate data remains in the report point data; and transmitting only the coordinate data in the report point data to the display controller by the System on Chip. In an optional way, the executable instruction further causes the System on Chip and the display controller to perform the following operations: transmitting the report point data associated with the received handwriting to an application layer of the System on Chip by a core layer of the System on Chip; generating a second image of the handwriting based on the report point data by the application layer; converting the second image into a grayscale image by the application layer; transmitting the grayscale image to the core layer by the application layer; transmitting the grayscale image to the display controller by the core layer; looking up the LUT table and acquiring a second waveform of driving the ink screen based on the grayscale image by the display controller; and driving the ink screen to display the handwriting using the second waveform by the display controller. In an optional way, the transmitting the report point data associated with the received handwriting to an application layer of the System on Chip by a core layer of the System on Chip is performed at the same time as or after the transmitting the report point data associated with received handwriting to the display controller by a System on Chip. In an optional way, the transmitting the grayscale image to the display controller by the core layer further includes: receiving, by the core layer, the grayscale image transmitted from the application layer; and transmitting the grayscale image to the display controller by the core layer after a tip of a handwriting stylus leaves the ink screen. In an optional way, the tip of the handwriting stylus is determined to have left the ink screen if reception of the report point data has stopped. In an optional way, the second image includes one or more of color, thickness and stroke of the handwriting. Another embodiment of the present disclosure provides a handwriting reading device. FIG. 9 shows a structural schematic diagram of the handwriting reading device provided by this embodiment. As shown in FIG. 9, a handwriting reading device 100 includes a System on Chip 10, an ink screen 20, a display controller 30, a handwriting board 40 and a switching component 50. Main control devices such as a central processing unit (CPU) and the like are integrated in the System on Chip 10, which is the main chip of the reader. The display controller 30 is an Electronic Paper Display Controller or a timing controller. The handwriting board 40 is configured to detect the report point data, and may be an electromagnetic board, a capacitive board or a resistive board.
The System on Chip 10 is electrically connected with the display controller 30, the display controller 30 is electrically connected with the ink screen 20, and the switching component 50 is electrically connected with the handwriting board 40, the System on Chip 10, and the display controller 30, respectively, and is configured to switch between a connection of the handwriting board 40 and the System on Chip 10 and another connection of the handwriting board 40 and the display controller 30. When the handwriting board 40 and the System on Chip 10 are connected, the report point data is transmitted to the System on Chip 10 through the switching component 50. When the handwriting board 40 and the display controller 30 are connected, the report point data is transmitted to the display controller 30 through the switching component 50. At this time, the report point data does not need to be processed layer by layer by the System on Chip 10, and the display controller 30 directly looks up the LUT table based on the report point data to acquire the waveform of driving the ink screen 20 and to drive the ink screen 20 to display. In the present embodiment, the switching component is provided to switch between a connection of the handwriting board and the System on Chip and another connection of the handwriting board and the display controller. When the connection of the handwriting board and the display controller is switched on, direct transmission of the report point data between the handwriting board and the display controller is realized, and the display controller receives the report point data detected by the handwriting board and drives the ink screen to display. Compared to the prior-art path in which the report point data travels from the core layer of the System on Chip to its application layer and then back from the application layer to the core layer, the present disclosure saves transmission time for the report point data, shortens the time consumed in drawing the handwriting, and improves the speed of drawing the handwriting. Yet another embodiment of the present disclosure provides a handwriting reading device. FIG. 10 shows a structural schematic diagram of the handwriting reading device provided by this embodiment. As shown in FIG. 10, a handwriting reading device 100 includes a System on Chip 10, an ink screen 20, a display controller 30, an electromagnetic board 41 and a switching component 50. Main control devices such as a central processing unit (CPU) and the like are integrated in the System on Chip 10, which is the main chip of the reader. The display controller 30 is an Electronic Paper Display Controller or a timing controller. The electromagnetic board 41 detects handwriting operation instructions generated by the electromagnetic stylus on the ink screen, and after the electromagnetic signal triggered by the electromagnetic pen is detected, the electromagnetic signal is converted into report point data. The System on Chip 10 is electrically connected with the display controller 30, the display controller 30 is electrically connected with the ink screen 20, and the switching component 50 is electrically connected with the electromagnetic board 41, the System on Chip 10, and the display controller 30, respectively, and is configured to switch between a connection of the electromagnetic board 41 and the System on Chip 10 and another connection of the electromagnetic board 41 and the display controller 30.
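The routing behavior of the switching component just described can be summarized in a few lines of C. This is a hedged model only: the real switching component is circuitry rather than software, and the enum and function names are invented for illustration.

```c
#include <stdint.h>
#include <stddef.h>

typedef enum { ROUTE_TO_SOC, ROUTE_TO_DISPLAY_CONTROLLER } Route;

/* Assumed sinks for the report point data. */
extern void soc_receive(const uint8_t *data, size_t len);                /* layer-by-layer path */
extern void display_controller_receive(const uint8_t *data, size_t len); /* direct LUT path     */

/* Model of the switching component: report point data from the board is
 * delivered to exactly one of the two connected devices. */
void switching_component(Route route, const uint8_t *report, size_t len) {
    if (route == ROUTE_TO_DISPLAY_CONTROLLER)
        display_controller_receive(report, len); /* bypasses the System on Chip */
    else
        soc_receive(report, len);
}
```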
The switching component 50 has a first end 51, a second end 52 and a third end 53; the first end 51 is electrically connected with the electromagnetic board 41 for receiving the report point data, the second end 52 is electrically connected with the System on Chip 10, and the third end 53 is electrically connected with the display controller 30. The first end 51 is configured to be connected to either the second end 52 or the third end 53. When the first end 51 is connected to the second end 52, the report point data is transmitted to the System on Chip 10 through the switching component 50, and when the first end 51 is connected to the third end 53, the report point data is transmitted to the display controller 30 through the switching component 50. At this time, the report point data does not need to be processed layer by layer by the System on Chip 10, and the display controller 30 directly looks up the LUT table based on the report point data to acquire the waveform of driving the ink screen 20 and to drive the ink screen 20 to display. In the present embodiment, three ends are provided in the switching component: a first end that receives the report point data of the electromagnetic board, a second end that is connected with the System on Chip, and a third end that is connected with the display controller. By controlling connection and disconnection of the first end and the third end, when the first end and the third end are connected, direct transmission of the report point data between the electromagnetic board and the display controller is realized, and the display controller can receive the report point data detected by the electromagnetic board and drive the ink screen to display. Compared to the prior-art path in which the report point data travels from the core layer of the System on Chip to its application layer and then back from the application layer to the core layer, the present disclosure saves transmission time for the report point data, shortens the time consumed in drawing the handwriting, and improves the speed of drawing the handwriting. Yet another embodiment of the present disclosure provides a handwriting reading device. FIG. 11 shows a structural schematic diagram of the handwriting reading device provided by this embodiment. As shown in FIG. 11, a handwriting reading device 100 includes a System on Chip 10, an ink screen 20, a display controller 30, an electromagnetic board 41 and a switching component 50. Main control devices such as a central processing unit (CPU) and the like are integrated in the System on Chip 10, which is the main chip of the reader. The display controller 30 is an Electronic Paper Display Controller or a timing controller. The electromagnetic board 41 detects handwriting operation instructions generated by the electromagnetic stylus on the ink screen, and after the electromagnetic signal triggered by the electromagnetic pen is detected, the electromagnetic signal is converted into the report point data. The System on Chip 10 is electrically connected with the display controller 30, the display controller 30 is electrically connected with the ink screen 20, and the switching component 50 is electrically connected with the electromagnetic board 41, the System on Chip 10, and the display controller 30, respectively, and is configured to switch between a connection of the electromagnetic board 41 and the System on Chip 10 and another connection of the electromagnetic board 41 and the display controller 30. The switching component 50 has a first end 51, a second end 52, a third end 53 and a button 54.
The first end 51 is electrically connected with the electromagnetic board 41 for receiving the report point data, the second end 52 is electrically connected with the System on Chip 10, and the third end 53 is electrically connected with the display controller 30. One end of the button 54 is fixedly connected with the first end 51 of the switching component 50, and the other end of the button 54 is switchable. When the other end of the button 54 is switched to be electrically connected with the second end 52 of the switching component 50, the first end 51 and the second end 52 of the switching component 50 are connected, and the report point data is transmitted to the System on Chip 10 through the switching component 50; when the other end of the button 54 is switched to be electrically connected with the third end 53 of the switching component 50, the first end 51 and the third end 53 of the switching component 50 are connected, and the report point data is transmitted to the display controller through the switching component. At this time, the report point data does not need to be processed layer by layer by the System on Chip 10, and the display controller 30 directly looks up the LUT table based on the report point data to acquire the waveform of driving the ink screen 20 and to drive the ink screen 20 to display. The button 54 may be a toggle button or a push button. In the present embodiment, three ends are provided in the switching component: a first end that receives the report point data of the electromagnetic board, a second end that is connected with the System on Chip, and a third end that is connected with the display controller, and connection and disconnection of the first end and the third end are controlled by the button. When the first end and the third end are connected, direct transmission of the report point data between the electromagnetic board and the display controller is realized, and the display controller receives the report point data detected by the electromagnetic board and drives the ink screen to display. Compared to the prior-art path in which the report point data travels from the core layer of the System on Chip to its application layer and then back from the application layer to the core layer, the present disclosure saves transmission time for the report point data, shortens the time consumed in drawing the handwriting, and improves the speed of drawing the handwriting. Yet another embodiment of the present disclosure provides a handwriting reading device. FIG. 12 shows a structural schematic diagram of the handwriting reading device provided by this embodiment. As shown in FIG. 12, a handwriting reading device 100 includes a System on Chip 10, an ink screen 20, a display controller 30, an electromagnetic board 41, a switching component 50 and an MCU 60. Main control devices such as a central processing unit (CPU) and the like are integrated in the System on Chip 10, which is the main chip of the reader. The display controller 30 is an Electronic Paper Display Controller or a timing controller. The electromagnetic board 41 detects handwriting operation instructions generated by the electromagnetic stylus on the ink screen, and after the electromagnetic signal triggered by the electromagnetic pen is detected, the electromagnetic signal is converted into the report point data.
The System on Chip 10 is electrically connected with the display controller 30, the display controller 30 is electrically connected with the ink screen 20, and the switching component 50 is electrically connected with the electromagnetic board 41, the System on Chip 10, and the display controller 30, respectively, and is configured to switch between a connection of the electromagnetic board 41 and the System on Chip 10 and another connection of the electromagnetic board 41 and the display controller 30. The switching component 50 has a first end 51, a second end 52, a third end 53 and a control end 55; the first end 51 is electrically connected with the electromagnetic board 41 for receiving the report point data, the second end 52 is electrically connected with the System on Chip 10, and the third end 53 is electrically connected with the display controller 30. The MCU 60 is electrically connected with the control end 55 of the switching component 50 to switch between a connection of the first end 51 and the second end 52 and another connection of the first end 51 and the third end 53 of the switching component 50. When the MCU 60 controls the first end 51 and the second end 52 of the switching component 50 to be connected, the report point data is transmitted to the System on Chip 10 through the switching component 50; when the MCU 60 controls the first end 51 and the third end 53 of the switching component 50 to be connected, the report point data is transmitted to the display controller 30 through the switching component 50. At this time, the report point data does not need to be processed layer by layer by the System on Chip 10, and the display controller 30 directly looks up the LUT table based on the report point data to acquire the waveform of driving the ink screen 20 and to drive the ink screen 20 to display. In the present embodiment, three ends are provided in the switching component: a first end that receives the report point data of the electromagnetic board, a second end that is connected with the System on Chip, and a third end that is connected with the display controller, and connection and disconnection of the first end and the third end are controlled by an MCU. When the first end and the third end are connected, direct transmission of the report point data between the electromagnetic board and the display controller is realized, and the display controller receives the report point data detected by the electromagnetic board and drives the ink screen to display. Compared to the prior-art path in which the report point data travels from the core layer of the System on Chip to the application layer and then back from the application layer to the core layer, the present disclosure saves transmission time for the report point data, shortens the time consumed in drawing the handwriting, and improves the speed of drawing the handwriting. Yet another embodiment of the present disclosure provides a handwriting reading device. FIG. 13 shows a structural schematic diagram of the handwriting reading device provided by this embodiment. As shown in FIG. 13, a handwriting reading device 100 includes a System on Chip 10, an ink screen 20, a display controller 30, an electromagnetic board 41 and a switching component 50. Main control devices such as a central processing unit (CPU) and the like are integrated in the System on Chip 10, which is the main chip of the reader. The display controller 30 is an Electronic Paper Display Controller or a timing controller.
The electromagnetic board 41 detects handwriting operation instructions generated by the electromagnetic stylus on the ink screen, and after the electromagnetic signal triggered by the electromagnetic pen is detected, the electromagnetic signal is converted into report point data. The System on Chip 10 is electrically connected with the display controller 30, the display controller 30 is electrically connected with the ink screen 20, and the switching component 50 is electrically connected with the electromagnetic board 41, the System on Chip 10, and the display controller 30, respectively, and is configured to switch between a connection of the electromagnetic board 41 and the System on Chip 10 and another connection of the electromagnetic board 41 and the display controller 30. The switching component 50 has a first end 51, a second end 52 and a third end 53. The first end 51 is electrically connected with the electromagnetic board 41 for receiving the report point data, the second end 52 is electrically connected with the System on Chip 10, and the third end 53 is electrically connected with the display controller 30. When the first end 51 is connected to the second end 52, the report point data is transmitted to the System on Chip 10 through the switching component 50. The first end 51 is electrically connected to the third end 53 through a switching unit 56, and when the first end 51 and the third end 53 are connected by the switching unit 56, the report point data is also transmitted to the display controller 30 through the switching component 50. At this time, the report point data does not need to be processed layer by layer by the System on Chip 10, and the display controller 30 directly looks up the LUT table based on the report point data to acquire the waveform of driving the ink screen 20 and to drive the ink screen 20 to display; when the first end 51 and the third end 53 are disconnected by the switching unit 56, the report point data is not transmitted to the display controller 30 through the switching component 50. The switching unit 56 is a button switch or a combination of a button switch and a switching diode. As shown in FIG. 14, in one embodiment, a button switch S is used as the switching unit. As shown in FIG. 15, in another embodiment, a combination of a button switch S and a switching diode D is used as the switching unit. One end of the switching diode D is electrically connected to the first end 51 of the switching component 50, and that end of the switching diode D is also connected to the power source through the button switch S, while the other end of the switching diode D is electrically connected to the third end 53 of the switching component 50. When the button switch S is switched on, the switching diode D is turned on, and the first end 51 and the third end 53 of the switching component 50 are connected; when the button switch S is switched off, the switching diode D is turned off, and the first end 51 and the third end 53 of the switching component 50 are disconnected. In the present embodiment, three ends are provided in the switching component: a first end that receives the report point data of the electromagnetic board, a second end that is connected with the System on Chip, and a third end that is connected with the display controller, the first end and the second end being kept connected, and connection and disconnection of the first end and the third end being controlled by a switching unit. When the first end and the third end are connected, transmission of the report point data in two paths is realized.
In one path, the report point data is directly transmitted from the electromagnetic board to the display controller, and the display controller drives the ink screen to display, which saves transmission time of the report point data, shortens the time consumed in drawing the handwriting, and improves the speed of drawing the handwriting. In the other path, the report point data is transmitted to the display controller after being processed by the System on Chip: the report point data is transmitted by the System on Chip from the core layer to the application layer, the application layer performs image synthesis, the resulting grayscale image is transmitted to the display controller through the core layer, and the display controller drives the ink screen to display. After the display controller completes processing of the report point data previously transmitted directly by the electromagnetic board, the grayscale image synthesized by the application layer is processed. The displaying effect observed by the user is: a black line without thickness or stroke effects is displayed on the screen first, and then the screen refreshes once to turn the original black line into a line with the various effects, thereby not only displaying the handwriting quickly but also preserving the displaying effect of the handwriting. Yet another embodiment of the present disclosure provides a handwriting reading device. FIG. 16 shows a structural schematic diagram of the handwriting reading device provided by this embodiment. As shown in FIG. 16, a handwriting reading device 100 includes a System on Chip 10, an ink screen 20, a display controller 30, an electromagnetic board 41, a switching component 50 and an MCU 60. Main control devices such as a central processing unit (CPU) and the like are integrated in the System on Chip 10, which is the main chip of the reader. The display controller 30 is an Electronic Paper Display Controller or a timing controller. The electromagnetic board 41 detects handwriting operation instructions generated by the electromagnetic stylus on the ink screen, and after the electromagnetic signal triggered by the electromagnetic pen is detected, the electromagnetic signal is converted into report point data. The System on Chip 10 is electrically connected with the display controller 30, the display controller 30 is electrically connected with the ink screen 20, and the switching component 50 is electrically connected with the electromagnetic board 41, the System on Chip 10, and the display controller 30, respectively, and is configured to switch between a connection of the electromagnetic board 41 and the System on Chip 10 and another connection of the electromagnetic board 41 and the display controller 30. The switching component 50 has a first end 51, a second end 52, a third end 53 and a control end 55; the first end 51 is electrically connected with the electromagnetic board 41 for receiving the report point data, the second end 52 is electrically connected with the System on Chip 10, and the third end 53 is electrically connected with the display controller 30. The first end 51 and the second end 52 are continuously connected, and the report point data is transmitted to the System on Chip 10 through the switching component 50. The MCU 60 is electrically connected with the control end 55 of the switching component 50 to control connection and disconnection between the first end 51 and the third end 53 of the switching component 50.
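Where an MCU drives the control end 55, the path selection reduces to setting a single control line. The sketch below is a hypothetical GPIO-style fragment (the pin number, encoding, and gpio_write helper are assumptions) showing how the MCU could connect the first end to either the second end or the third end.

```c
#include <stdbool.h>

/* Assumed control-line encoding for the switching component's control end 55:
 * low  -> first end 51 connected to second end 52 (System on Chip path only)
 * high -> first end 51 also connected to third end 53 (direct path enabled) */
#define CTRL_PIN_SWITCH 7

extern void gpio_write(int pin, bool level); /* assumed MCU GPIO helper */

void enable_direct_path(bool direct_to_display_controller) {
    gpio_write(CTRL_PIN_SWITCH, direct_to_display_controller);
}
```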
When the first end 51 and the third end 53 of the switching component 50 are connected by the MCU 60, the report point data is also transmitted to the display controller 30 through the switching component 50. At this time, the report point data does not need to be processed layer by layer by the System on Chip 10, and the display controller 30 directly looks up the LUT table based on the report point data to acquire the waveform of driving the ink screen 20 and to drive the ink screen 20 to display. When the first end 51 and the third end 53 of the switching component 50 are disconnected by the MCU 60, the report point data is not transmitted to the display controller 30 through the switching component 50. At this time, the report point data is transmitted to the display controller 30 only after being processed by the System on Chip 10. In the present embodiment, three ends are provided in the switching component: a first end that receives the report point data of the electromagnetic board, a second end that is connected with the System on Chip, and a third end that is connected with the display controller, the first end and the second end being kept connected, and connection and disconnection between the first end and the third end being controlled by an MCU. When the first end and the third end are connected, transmission of the report point data in two paths is realized. In one path, the report point data is directly transmitted from the electromagnetic board to the display controller, and the display controller drives the ink screen to display, which saves transmission time of the report point data, shortens the time consumed in drawing the handwriting, and improves the speed of drawing the handwriting. In the other path, the report point data is transmitted to the display controller after being processed by the System on Chip. In this path, the report point data is transmitted by the System on Chip from the core layer to the application layer, the application layer performs image synthesis, the grayscale image is transmitted to the display controller through the core layer, and the display controller drives the ink screen to display. After the display controller completes processing of the report point data previously transmitted directly by the electromagnetic board, the grayscale image synthesized by the application layer is processed. The displaying effect observed by the user is: a black line without thickness or stroke effects is displayed on the screen first, and then the screen refreshes once to turn the original black line into a line with the various effects, thereby not only displaying the handwriting quickly but also preserving the displaying effect of the handwriting. In the above embodiments, the first end 51, the second end 52 and the third end 53 of the switching component 50 may all use data interfaces, such as a Digital Peripheral Interface (DPI), a Serial Peripheral Interface (SPI) or an I2C (Inter-Integrated Circuit) interface. The algorithms and displays provided herein are not inherently related to any particular computer, virtual system, or other device. Various general-purpose systems can also be used with the teachings herein. The structure required to construct such a system is apparent from the above description. The present disclosure is not directed to any particular programming language.
It should be understood that various programming languages may be used to implement the disclosures described herein, and the descriptions of specific languages above are provided to disclose preferred embodiments of the disclosure. In the description provided herein, numerous specific details are set forth. It will be understood, however, that embodiments of the present disclosure may be practiced without these specific details. In some instances, well-known methods, structures and techniques have not been shown in detail in order not to obscure an understanding of this description. Similarly, it is to be understood that in the above description of exemplary embodiments of the present disclosure, various features of the present disclosure are sometimes grouped together into a single embodiment, figure, or description thereof in order to simplify the present disclosure and to aid in the understanding of one or more of the various disclosed aspects. However, this method of disclosure should not be interpreted as reflecting an intention that the claimed disclosure requires more features than are expressly recited in each claim. Rather, as the following claims reflect, disclosed aspects lie in less than all features of a single foregoing disclosed embodiment. Thus, the claims following the Detailed Description are hereby expressly incorporated into this Detailed Description, with each claim standing on its own as a separate embodiment of the present disclosure. Those skilled in the art will understand that the modules in the device in an embodiment can be adaptively changed and arranged in one or more devices different from the embodiment. The modules or units or components in the embodiments may be combined into one module or unit or component, and they may further be divided into multiple sub-modules or sub-units or sub-assemblies. All features disclosed in this specification (including the accompanying claims, abstract and drawings), and all processes or units of any method or apparatus so disclosed, may be combined in any combination, unless at least some of such features and/or processes or units are mutually exclusive. Each feature disclosed in this specification (including the accompanying claims, abstract and drawings) may be replaced by alternative features serving the same, equivalent or similar purpose, unless expressly stated otherwise. Furthermore, those skilled in the art will understand that although some embodiments described herein include certain features that are included in other embodiments rather than other features, combinations of features of different embodiments are intended to be within the scope of the present disclosure and to form different embodiments. For example, in the following claims, any of the claimed embodiments may be used in any combination. It should be noted that the above-described embodiments illustrate rather than limit the disclosure, and that alternative embodiments may be devised by those skilled in the art without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The present disclosure may be implemented by means of hardware comprising several different elements and by means of a suitably programmed computer.
In a unit claim enumerating several means, several of these means may be embodied by one and the same item of hardware. The use of the words first, second, third, and so on does not denote any order; these words may be interpreted as names.
DETAILED DESCRIPTION Decompression circuits may be utilized to transfer data on devices. Such data may be video data, image data, audio data, text data, numerical data, etc. In some examples, data decompression circuits are used to transfer data on a display device. The data may be transferred from a display device transceiver side including a transceiver device to a display device receiver side including a parallel register. The transceiver device may capture and prepare the data to be transferred to the parallel register. The parallel register may store data to be utilized by a memory operator to perform memory operations for transferring the data from the parallel register to a memory. Data may include groups of similar data such as, for example, similar data regions, rows, columns, etc. In one example, the data is image data including similar adjacent rows, such as dark sections. The transceiver device may exploit these data similarities by implementing compression algorithms (e.g., lossless compression algorithms) to decrease the time and energy needed to transmit the data from the transceiver side to the receiver side of the display device. In some examples, pipelining is utilized to transfer data from the transceiver side to the receiver side of the display device. The pipelining may be synchronous and include multiple pipeline stages. In cases where the pipelining is synchronous, the pipeline stages receive the same timing by being on the same clock (e.g., the pipeline stages receive the same clock signal). The pipeline stages may include synchronous logic utilizing a register-based implementation. In one example, each pipeline stage is a shift register, which inserts a clock cycle. These shift registers may be synchronous registers, causing a synchronous delay (e.g., stalls) between each shift register. The shift registers may move data and accept data every clock cycle. However, in some cases, the input data and the output data may not be ready every clock cycle, which can lead to additional stalls. The pipeline stalls may require complex compression hardware for the transceiver device, which increases the risk of bugs that stop performance of the lossless compression algorithms. Example approaches disclosed herein implement a decompression circuit including buffers to transfer data (e.g., compressed data) on a display device such as, for example, from a transceiver side to a receiver side of the display device. The buffers load the data to data elements. As used herein, a data element refers to a portion of a bus including data transferred in a single internal clock cycle. In one example, the data element is a 64-bit data element. The buffers are controlled by clock signals including clock events. The clock events cause the buffers to load data to the data elements. The buffers are matched utilizing matching techniques. The buffers may be matched in quantity (e.g., determining a number of buffers on the display device) and layout (e.g., determining locations and routing of buffers on the display device) to affect the timing of data arriving at the buffers. Further, the clock signals are matched to prevent delays in loading data to the data elements. The buffers and clock signals are matched to maintain relationships between the timing of data and clock events arriving at the buffers, such as a timing margin.
The timing margin is the required time difference between data and a clock event arriving at a buffer for the decompression circuit to function correctly (e.g., for the correct data to load to the data element). For example, data arrives at a buffer at a first time, and a clock event to a data element arrives at a second time. The time difference between the first time and the second time is to be matched based on the timing margin. In one example, the data elements are loaded with the compressed data including data bits for a data row and/or a data column. The data elements may be loaded at different times, so long as all data bits are loaded to the data elements before a parallel shift clock event included in a parallel shift clock signal. The parallel shift clock event causes data from the data elements to be decompressed and transferred to the parallel register. As a result, asynchronous delay across loading the data elements is removed and the data is loaded to the parallel register in a single internal clock cycle (e.g., a clock cycle of the parallel shift clock signal). FIG. 1 is a block diagram of an example display device 100. The display device 100 may be utilized in any display system such as, for example, a projector system, a video wall, a multi-view monitor, a stereoscopic display, a monitor with multiple display surfaces, a multi-focal plane display, a near eye display (e.g., 3D glasses), a headset, a vehicle headlight, etc. The display device 100 may be any display device such as, for example, a digital micromirror device (DMD), a liquid crystal display, a magneto-optic spatial light modulator, a liquid crystal on silicon (LcOS) display, a microLED display, a phase light modulator (PLM), etc. In the example of FIG. 1, the display device 100 includes a transceiver device 110, a decompression circuit 120, clocks 130, a parallel register 140, a parallel register clock 150, a memory operator 160, and a display memory 170. The display device 100 may obtain input data 105 containing display data (e.g., image data and/or video data) of any format, resolution, etc. from an interface 107. The display device 100 may be in communication with the interface 107 using a wired or wireless communication interface. The interface 107 may be any interface including the input data 105. In one example, the interface 107 is a camera that captures the input data 105. In another example, the interface 107 is a game server that generates the input data 105 from video games. In another example, the interface 107 is a content server that generates the input data 105 from media files. In another example, the interface 107 is a memory such as, for example, at least one memory including cache(s), random-access memory(s), hard disk drive(s), flash memory(s), read-only memory(s), compact disk(s), digital versatile disk(s), etc. In another example, the interface 107 is one or more storage devices and/or computing devices (e.g., servers) located at the same or different locations of a network or collection of networks (e.g., in the cloud, in edge devices, etc.). In some examples, the input data 105 loaded to the transceiver device 110 includes groups of similar data such as, for example, similar data regions, rows, columns, etc. In one example, the input data 105 includes similar adjacent rows, such as dark sections. The transceiver device 110 and the memory operator 160 may be implemented by hardware, such as a processor.
However, any other type of circuitry may additionally or alternatively be used such as, for example, one or more analog or digital circuit(s), power management integrated circuits (PMIC(s)), logic circuits, programmable processor(s), programmable controller(s), graphics processing unit(s) (GPU(s)), digital signal processor(s) (DSP(s)), application specific integrated circuit(s) (ASIC(s)), programmable logic device(s) (PLD(s)), field programmable logic device(s) (FPLD(s)) (such as field programmable gate arrays (FPGAs)), etc. The decompression circuit 120, the clocks 130, and the parallel register clock 150 may be implemented by logic circuits. However, any other type of circuitry may additionally or alternatively be used such as, for example, one or more analog or digital circuit(s), PMIC(s), programmable processor(s), programmable controller(s), GPU(s), DSP(s), ASIC(s), PLD(s), FPLD(s) (such as FPGAs), etc. The display memory 170 may be any memory such as, for example, at least one memory including cache(s), random-access memory(s), hard disk drive(s), flash memory(s), read-only memory(s), compact disk(s), digital versatile disk(s), etc. The transceiver device 110 receives the input data 105 and compresses the input data 105 or a portion of the input data 105 to form compressed data 175 by implementing one or more compression algorithms. The input data 105 may be a row or a column of data. In some examples, the compression algorithms are lossless compression algorithms that take advantage of similar data included in the input data 105. The similar data may be similar data regions, rows, columns, etc. In one example, the similar data includes similar adjacent rows or columns, such as dark sections. The compressed data 175 may be sent to the decompression circuit 120. The decompression circuit 120 includes a transmission circuit 180, a compressed data memory 185, and compression override logic circuits 187. The compressed data 175 is transmitted from a transceiver side 190 (e.g., the transceiver device 110) to a receiver side 193 (e.g., the compressed data memory 185) of the display device 100 via the transmission circuit 180. Transmitting compressed data 175 may decrease the time and energy needed for transmission compared to transmitting less-compressed data (e.g., the input data 105). The compressed data 175 may include compression addressing bits and/or data bits. For example, compression addressing bits include a packet control word (PCW), a compression control word (CCW), etc. The PCW indicates the operation (e.g., a row or a column) to be written to in a memory. The CCW indicates compression is being applied. The data bits may be the data from the row or column of the input data 105. The compression override logic circuits 187 decompress the compressed data 175 from the compressed data memory 185 to form decompressed data 195. The decompressed data 195 is stored in the parallel register 140. The transmission circuit 180 may be controlled by the clocks 130. For example, as shown in FIG. 3, the clocks 130 produce clock signals indicating to elements of the transmission circuit 180 to transfer portions of the compressed data 175. The transmission circuit 180 stores the compressed data 175 in the compressed data memory 185. In some examples, the physical distance between the transceiver side 190 and the receiver side 193 is a large physical distance (e.g., 10 millimeters to 20 millimeters) relative to sizes of display device circuits.
For example, the display device circuits are decompression circuits implemented in DMDs, liquid crystal displays, magneto-optic spatial light modulators, LcOS displays, microLED displays, PLMs, etc. The transmission circuit 180 transmitting the compressed data 175 to the compressed data memory 185 may delay the rate at which subsequent compressed data 175 can be sent by the transceiver device 110 because the transmission circuit 180 is not ready to receive data. For example, in cases where the transmission circuit 180 cannot transfer the compressed data 175 to the compressed data memory 185 as fast as the transceiver device 110 is able to send compressed data 175 to the transmission circuit 180, the process of transmitting data to the parallel register 140 slows down. Decompression overhead is the amount of extra data added to the compressed data 175 to facilitate decompression of the compressed data 175. For example, the parallel register 140 obtains a data stream including data bits from the compressed data memory 185 and the extra data. The extra data may include stall bits (e.g., idle bits) indicating stalls for clock cycles associated with the parallel register clock 150. Further, a stall bit causes no meaningful data to be transferred from the compressed data memory 185 to the parallel register 140 via the compression override logic circuits 187 for a clock cycle duration. The parallel register 140 may be controlled by the parallel register clock 150. For example, the parallel register clock 150 produces signals indicating to transfer decompressed data 195 from the decompression circuit 120 to the parallel register 140. The delay in decompressing the compressed data 175 may be caused by a delay in transmitting the compressed data 175 to the compressed data memory 185 via the transmission circuit 180. Increasing the decompression overhead may lead to a decreased data compression ratio, which is the ratio between the uncompressed data size and the compressed data size. For example, a data compression ratio is the number of uncompressed bits sent compared to the number of compressed bits sent for the same data. Thus, as the stall bits increase, the compressed data size increases, leading to a decreased data compression ratio. The memory operator 160 may perform memory operations to store the decompressed data 195 from the parallel register 140 to the display memory 170. In some examples, these memory operations are completed before new decompressed data is available in the parallel register 140, which leads to the stall bits discussed above. For example, the new decompressed data is still being formed by the decompression circuit 120 at the time the memory operations are complete. As a result, the memory operator 160 stalls because no new decompressed data is available in the parallel register 140 for performing memory operations. As the memory operator 160 is performing memory operations on data loaded to the parallel register 140, new compressed data may be transmitted and loaded to the compressed data memory 185. The display memory 170 may be an array of memory elements to configure the display device 100. The array of memory elements may be embedded on a semiconductor substrate. For example, the memory operator 160 loads data to the array of memory elements to store a configuration of the display device 100. In one example, the display device 100 is a DMD including an array of mirrors. The array of memory elements may store tilt states for each of the mirrors.
For example, a tilt state corresponds to a mirror tilted to a degree value relative to the semiconductor substrate (e.g., +10 degrees). The configuration of the array of the mirrors (e.g., tilting of the mirrors) is based on data indicating tilt states loaded to the array of the memory elements. In another example, the display device 100 is a PLM including an array of micromirrors. The array of memory elements may store vertical states for each of the mirrors. For example, a vertical state corresponds to a mirror vertically displaced relative to the semiconductor substrate (e.g., moving towards or away from the semiconductor substrate). The configuration of the array of the mirrors (e.g., vertical displacement of the mirrors) is based on data indicating vertical states loaded to the array of the memory elements. While an example manner of implementing the display device 100 is illustrated in FIG. 1, one or more of the elements, processes and/or devices illustrated in FIG. 1 may be combined, divided, re-arranged, omitted, eliminated and/or implemented in any other way. Further, the transceiver device 110, the decompression circuit 120, the clocks 130, the parallel register 140, the parallel register clock 150, the memory operator 160, and/or, more generally, the display device 100 may be implemented by hardware, software, firmware and/or any combination of hardware, software and/or firmware. Thus, for example, any of the transceiver device 110, the decompression circuit 120, the clocks 130, the parallel register 140, the parallel register clock 150, the memory operator 160 and/or, more generally, the display device 100 could be implemented by one or more analog or digital circuit(s), PMIC(s), logic circuits, programmable processor(s), programmable controller(s), GPU(s), DSP(s), ASIC(s), PLD(s) and/or FPLD(s). When reading any of the apparatus or system claims of this patent to cover a purely software and/or firmware implementation, at least one of the example transceiver device 110, the decompression circuit 120, the clocks 130, the parallel register 140, the parallel register clock 150, the memory operator 160, and/or, more generally, the display device 100 is/are hereby expressly defined to include a non-transitory computer readable storage device or storage disk such as a memory, a digital versatile disk (DVD), a compact disk (CD), a Blu-ray disk, etc. including the software and/or firmware. Further still, the display device 100 of FIG. 1 may include one or more elements, processes and/or devices in addition to, or instead of, those illustrated in FIG. 1, and/or may include more than one of any or all of the illustrated elements, processes and devices. As used herein, the phrase "in communication," including variations thereof, encompasses direct communication and/or indirect communication through one or more intermediary components, and does not require direct physical (e.g., wired) communication and/or constant communication, but rather additionally includes selective communication at periodic intervals, scheduled intervals, aperiodic intervals, and/or one-time events. FIG. 2 is an illustration of an example matched buffer transmission circuit 200. For example, the matched buffer transmission circuit 200 of FIG. 2 is an example of the transmission circuit 180 of FIG. 1 that transfers data over a large physical distance between a transceiver device 202 and data elements 204. The data elements 204 are abbreviated herein as "DE" in FIG. 2.
For instance, the transceiver device 202 of FIG. 2 is an example of the transceiver device 110 of FIG. 1, and the data elements 204 are examples of the compressed data memory 185 of FIG. 1. For example, the data elements 204 are portions of a bus. In one example, the data elements 204 are 64-bit data elements. The matched buffer transmission circuit 200 utilizes buffers to transfer data between the transceiver device 202 and the data elements 204. The buffers are controlled by clock signals including clock events (e.g., a rising edge or a falling edge). The clock events cause the buffers to load data to the data elements 204. The buffers are matched in quantity and/or layout based on matching techniques to affect the timing of data being transferred via the buffers. For example, the number of buffers included in the matched buffer transmission circuit 200 affects the time it takes for data to travel from the transceiver device 202 to a given buffer because the buffers provide asynchronous delays. The number of buffers may be matched in quantity based on the distance between the transceiver device 202 and the data elements 204. In one example, a buffer needs to be placed every 1 millimeter between the transceiver device 202 and the data elements 204 (see the sketch after this paragraph). Additionally, the location of the buffers included in the matched buffer transmission circuit 200 affects the time it takes for data to travel from the transceiver device 202 to a given buffer because the buffers provide asynchronous delays. The location of the buffers may be matched in layout based on the physical placement of the buffers and the routing of wires between the buffers. The buffers may be matched in quantity (e.g., a number of buffers on the display device) and layout (e.g., locations and routing of buffers on the display device) based on a timing margin. The timing margin is the time difference between data and a clock event arriving at a buffer that is required for the matched buffer transmission circuit 200 to function correctly. The clock event may cause the buffer to load data to one of the data elements 204. In cases where the matched buffer transmission circuit 200 does not function correctly, incorrect data may be loaded to the data element. As described above, the buffers are matched in quantity and/or layout to affect the timing of data arriving at the buffers. Additionally, the clock signals including the clock events (e.g., the clock events causing buffers to load data to the data elements 204) are matched so as not to cause delays in loading data to the data elements. The buffers and clock signals are matched to maintain relationships between the timing of data and clock events arriving at the buffers. For example, data arrives at a buffer at a first time, and a clock event to a data element arrives at a second time. The time between the first time and the second time is matched based on the timing margin. The first time is based on the quantity and/or layout of the buffers. The second time is based on the clock signal. If the time difference between the first time and the second time is less than the timing margin, setup time violations may occur. A setup time violation may cause incorrect data to be written and/or loaded to the data element. Additionally, the buffers may be matched based on avoiding skew between data bits to be transferred from the buffers to the data elements. For example, the buffers are matched in quantity and/or layout to introduce little to no skew between data bits loaded in a data element. In one example, the buffers include buffers 205, 210.
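The quantity-matching example above (one buffer per millimeter) can be captured in a small helper. The C fragment below is illustrative only; the one-buffer-per-millimeter spacing is taken from the example in the text and is not a general rule.

```c
#include <math.h>

#define MM_PER_BUFFER 1.0  /* example spacing from the text: one buffer per mm */

/* Number of buffers to place along a route of the given length so that the
 * asynchronous delay between repeaters stays within the matched budget. */
int buffers_required(double route_length_mm) {
    return (int)ceil(route_length_mm / MM_PER_BUFFER);
}
/* e.g., a 10 mm to 20 mm route would need roughly 10 to 20 buffers. */
```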
Alternatively, more or fewer buffers than the two buffers 205, 210 may be included. The timing of data bits delivered to the data elements 204 is matched by matching clock signals 220, 225 associated with the buffers 205, 210. In one example, the clock signals 220, 225 are produced by the clocks 130 of FIG. 1. The first buffer 205 is controlled by the first clock signal 220. The second buffer 210 is controlled by the second clock signal 225. The buffers 205, 210 and the clock signals 220, 225 are matched to maintain relationships between the timing of data and clock events arriving at the buffers, as described above. For example, first data arrives at the first buffer 205 at a first time. Further, a first clock event associated with the first clock signal 220 may occur at a second time, which causes the first buffer 205 to load the first data to a first data element from the data elements 204. The time difference between the first time and the second time may be matched to the timing margin. In one example, the timing margin is 1 nanosecond. As described above, the time difference may be matched by matching the first clock signal 220 and matching the buffers 205, 210 in quantity and/or layout. In some examples, the buffers 205, 210 transmitting data bits to the data elements 204 are delayed relative to the clock signals 220, 225 driving the buffers 205, 210. For example, the data bits of a data element from the data elements 204 take more than one clock cycle associated with a clock signal to be delivered to the data element. However, a clock event corresponding to the completion of the clock cycle occurring before the data bits are ready to be written and/or loaded to a data element may cause a setup time violation. The frequency of the clock signal may be reduced (lengthening the clock cycle) to introduce a skew on the clock signal, which aligns the clock signal with the data bits. As a result, the data bits are delivered to the data element in one clock cycle, and the clock signals are matched. If the time for data bits to be delivered to a data element (e.g., variable delays) increases, the frequency of the clock signal is reduced. In one example, a first data element is routed a physical distance closer to the transceiver device 202 than a second data element. As a result, the first clock signal 220 associated with the first data element may produce a clock event sooner than the second clock signal 225 associated with the second data element, because data takes a shorter time to reach the first data element than to reach the second data element. Essentially, timing skew is being introduced to the clock signals 220, 225 to match the timing skew of the data being transferred. The timing skew may be the difference between data and clock events being delivered to components. The timing skew is introduced to avoid decreasing the timing margin for a given buffer. In this example, the timing margin is the time difference between the data and a clock event arriving at the given buffer. If the timing is the same for both the first clock signal and the second clock signal, the timing margin is greater for the first data element compared to the second data element. Further, if the clock event occurs before the data is ready to be written and/or loaded to a data element, this may cause a setup time violation causing incorrect data to be written and/or loaded to the data element.
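The constraint just described, that the clock period must cover the worst-case data delivery time while preserving the timing margin, can be made concrete. The following C fragment is an illustrative sketch with assumed nanosecond units and invented names; it is not part of the disclosed circuit.

```c
#include <stdbool.h>

/* True if data arriving at data_ns is latched safely by a clock event at
 * clock_ns, i.e., the gap meets the required timing margin (e.g., 1 ns). */
bool setup_ok(double data_ns, double clock_ns, double margin_ns) {
    return (clock_ns - data_ns) >= margin_ns;
}

/* Highest usable clock frequency (MHz) when data bits need worst_delay_ns
 * to reach the data element: the period must cover the delay plus the
 * margin, so a growing delay forces a lower frequency. */
double max_clock_mhz(double worst_delay_ns, double margin_ns) {
    return 1000.0 / (worst_delay_ns + margin_ns); /* 1000 ns per us => MHz */
}
/* Example: 9 ns worst-case delay + 1 ns margin => 10 ns period => 100 MHz. */
```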
In one example, the transceiver device202initiates a transfer of compressed data via one or more of the buffers205,210in response to a first clock event of the second clock signal225(e.g., beginning of a first clock cycle). The compressed data is loaded to four data elements235associated with the second buffer210during the first clock cycle. As a result, the compressed data may be written to the four data elements235before a second clock event of the second clock signal225(e.g., completion of the first clock cycle). In some examples, the compressed data includes a CCW addressing bit indicating only one data element needs to be explicitly written to and the other data elements may be specified as compressed (e.g., all 0s, all 1s). Therefore, one of the four data elements235includes explicit data. As shown inFIG.2, a data element240includes explicit data and the other three data elements of the four data elements235are specified as compressed. FIG.3is an illustration of an example matched buffer decompression circuit300. For example, the matched buffer decompression circuit300is an example of the decompression circuit120ofFIG.1. The matched buffer decompression circuit300includes a matched buffer transmission circuit302, data elements304, and compression override logic circuits306. The matched buffer transmission circuit302is an example of the transmission circuit180ofFIG.1and/or the matched buffer transmission circuit200ofFIG.2. The data elements304are examples of the compressed data memory185ofFIG.1and/or the data elements204ofFIG.2. The compression override logic circuits306are an example of the compression override logic circuits187ofFIG.1. The buffers and clock signals are matched to maintain relationships between the timing of data and clock events arriving at buffers, such as a timing margin. The matched buffer transmission circuit302includes buffers308,310,312and clocks314,316,318for transferring data from a transceiver device320to the data elements304. The transceiver device320is an example of the transceiver device110ofFIG.1and/or the transceiver device202ofFIG.2. The clocks314,316,318are coupled to the buffers308,310,312. Further, the buffers308,310,312are coupled to the data elements304. The clocks314,316,318produce clock signals to drive the buffers308,310,312to load data from the transceiver device320to the data elements304. The data elements304are coupled to the compression override logic circuits306. Further, a parallel register322may be coupled to the compression override logic circuits306. The parallel register322may be coupled to a parallel register clock324which drives the parallel register322. In some examples, the clocks314,316,318are implemented by the clocks130ofFIG.1. The buffers308,310,312are examples of the buffers205,210ofFIG.2. The data elements304are examples of the data elements204ofFIG.2. The parallel register322is an example of the parallel register140ofFIG.1. Further, the parallel register clock324is an example of the parallel register clock150ofFIG.1. The data elements304may include data elements326,328,330loaded with explicit data, whereas the other data elements may be loaded with data that is specified as compressed (e.g., all 0s, all 1s). For example, four data elements331from the data elements304include compressed data written by the first buffer308. The four data elements331may include a data element326with explicit data and the other three data elements with compressed bits (e.g., all 0s, all 1s).
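As a rough illustration of the group just described (one explicitly written element plus three elements specified as compressed), the sketch below expands such a group in Python. The group size of four, the argument names, and the expand_group function are hypothetical; the patent does not specify this exact encoding.

```python
GROUP_SIZE = 4  # four data elements per CCW-addressed group, as in the example above

def expand_group(ccw_index: int, fill_ones: bool, explicit_word: int) -> list:
    """Return four 64-bit data elements: one explicit, three compressed."""
    fill = 0xFFFF_FFFF_FFFF_FFFF if fill_ones else 0x0  # all 1s or all 0s
    elements = [fill] * GROUP_SIZE
    elements[ccw_index] = explicit_word  # only this element is explicitly written
    return elements

# One element (cf. data element240) holds explicit data; the rest are all 0s.
print([hex(e) for e in expand_group(1, False, 0x1234_5678_9ABC_DEF0)])
```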
The compression override logic circuits306may be configured by the CCW addressing bit included in the compressed data, as described in connection with the compressed data memory185ofFIG.1. Therefore, the compression override logic circuits306identify the data elements304that include compressed bits (e.g., all 0s, all 1s). As a result, the compression override logic circuits306may decompress the data elements304by modifying the compressed bits to the explicit data written to the data elements304. For example, the three data elements with compressed bits (e.g., all 0s, all 1s) from the four data elements331are modified to explicit data in the data element326. The compression override logic circuits306may store the decompressed data to the parallel register322. For example, a first data element clock signal332is produced by the first clock314to instruct the first buffer308to load data to a first set of the data elements304(e.g., write explicit data to the first data element326); a second data element clock signal334is produced by the second clock316to instruct the second buffer310to load data to a second set of the data elements304(e.g., write explicit data to a second data element328); and a third data element clock signal336is produced by the third clock318to instruct the third buffer312to load data to a third set of the data elements304(e.g., write explicit data to the third data element330). In one example, a clock event (e.g., a rising edge) of the first data element clock signal332causes the first buffer308to explicitly write first compressed data to the first data element326. Further, a clock event (e.g., a rising edge) of the second data element clock signal334causes the second buffer310to explicitly write second compressed data to the second data element328. Further, a clock event (e.g., a rising edge) of the third data element clock signal336causes the third buffer312to explicitly write third compressed data to the third data element330. The clock events of the first data element clock signal332, the second data element clock signal334, and the third data element clock signal336can have mismatched timing, so long as the data has been loaded to the data elements304before the initiation of a parallel shift by a parallel shift clock event338(e.g., a rising edge) of the parallel shift clock signal340. The parallel shift clock signal340indicates clock cycles. In response to the parallel register322receiving a parallel shift clock event338(e.g., a rising edge), the parallel register322parallel shifts the data from the data elements304to the parallel register322via the compression override logic circuits306. The parallel shift occurs after all of the data for a given data row or column is loaded in the data elements304. Thus, the data elements304are loaded to the parallel register322in a single internal clock cycle (e.g., a clock cycle of the parallel shift clock signal340) regardless of the location of the data elements304in a data stream. This parallel shift removes asynchronous delay across loading the data elements304. Loading all data from the transceiver device320in a single cycle eliminates stalls due to pipelines for compressed data, such as in register-based implementations. The absence of a register-based implementation reduces the complexity of the matched buffer decompression circuit300, transceiver device320, etc.
For example, the reduced complexity increases the bandwidth of a compression algorithm (e.g., compression algorithm to compress data), which reduces the overall energy usage of the display device100. Additionally, the reduced complexity leads to less expensive verification and/or reduced area of the matched buffer decompression circuit300and/or the transceiver device320. As a result, the risk of bugs stopping the compression algorithms is reduced. Additionally, the absence of a register-based implementation eliminates the synchronous delay (e.g., stalls) between pipeline stages (e.g., shift registers), which increases the data compression ratio. FIG.4is an illustration of example data transmitted via transmission circuits from a transceiver device to a compressed data memory. The first illustration400corresponds to the example data transmitted via shift registers from a transceiver device to a compressed data memory. For example, the shift registers are synchronous registers, causing a synchronous delay (e.g., stalls) between each shift register. The second illustration405corresponds to the example data transmitted via buffers from a transceiver device to a compressed data memory. For example, the buffers implement the transmission circuit180ofFIG.1, the matched buffer transmission circuit200ofFIG.2, and/or the matched buffer transmission circuit302ofFIG.3. The example data corresponds to data bits inFIG.4. As shown inFIG.4, the total number of bits to transfer the data of a first row410, a second row415, and a third row420is greater in the first illustration400than in the second illustration405. As previously described, the shift register transmission circuit causes pipeline stalls, which cause significant decompression overhead and poor data compression ratios. The decompression overhead may be the amount of extra data to be added to the compressed data for decompressing the compressed data. For example, the extra data may include stall bits (e.g., idle bits) provided to the data stream for the parallel register140ofFIG.1, among the data bits and compression addressing bits. The data bits and compression addressing bits include PCW bits, CCW bits, and data element (abbreviated herein as "DE" inFIG.4) bits. As shown inFIG.4, more idle bits are included in the first illustration400compared to the second illustration405. FIG.5is a flowchart representative of an example process500that may be performed using configured hardware and/or machine-readable instructions that may be executed by a processor to implement the display device100including a transmission circuit implemented by the transmission circuit180ofFIG.1, the matched buffer transmission circuit200ofFIG.2, and/or the matched buffer transmission circuit302ofFIG.3. The example process500ofFIG.5begins at block505, at which a transceiver device320transfers compressed data to buffers308,310,312via a matched buffer transmission circuit302. Alternatively, the transceiver device320may implement the transceiver device110ofFIG.1and/or the transceiver device202ofFIG.2; the buffers308,310,312may implement the buffers205,210ofFIG.2; and the matched buffer transmission circuit302may implement the transmission circuit180ofFIG.1and/or the matched buffer transmission circuit200ofFIG.2. The compressed data is described in connection with the compressed data175ofFIG.1. At block510, a buffer stores compressed data to a set of data elements304. Alternatively, the data elements304may implement the data elements204ofFIG.2.
The buffer may be the buffers308,310,312ofFIG.3and/or the buffers205,210ofFIG.2. In one example, the set of the data elements304may be loaded in response to a buffer from the buffers308,310,312receiving a clock event from one of the data element clock signals332,334,336corresponding to the set of the data elements304. At block515, the parallel register322determines whether all data elements304have been loaded. Alternatively, the parallel register322may implement the parallel register140ofFIG.1, and the data elements304may implement the data elements204ofFIG.2. In one example, the parallel register322determines whether all data elements304have been loaded in response to receiving clock events from a parallel shift clock signal340. For example, the parallel register322determines whether a row and/or column of compressed data is loaded to the data elements304. If the parallel register322determines all data elements304have not been loaded (e.g., block515returns a result of "NO"), the parallel register322returns to block510. If the parallel register322determines all data elements304have been loaded (e.g., block515returns a result of "YES"), the parallel register322continues to block520. At block520, the compression override logic circuits306decompress the compressed data loaded in the data elements304, as described above. At block525, the parallel register322stores decompressed data to the parallel register322. Alternatively, the parallel register322may implement the parallel register140ofFIG.1. For example, the decompressed data includes explicit data from the data elements304and compressed bits modified by the compression override logic circuits306. At block530, the matched buffer transmission circuit302determines whether new compressed data is to be transferred to the data elements304. Alternatively, the matched buffer transmission circuit302may implement the transmission circuit180ofFIG.1and/or the matched buffer transmission circuit200ofFIG.2, and the data elements304may implement the data elements204ofFIG.2. For example, the decompressed data stored to the parallel register322is associated with a first row of the input data105ofFIG.1. If the matched buffer transmission circuit302determines new compressed data is to be transferred to the data elements304(e.g., block530returns a result of "YES"), the matched buffer transmission circuit302returns to block505. For example, the new compressed data is a second row of input data105. As a result, the new compressed data may be transferred to the data elements304while a memory operator160ofFIG.1is performing memory operations in the parallel register140. If the matched buffer transmission circuit302determines new compressed data is not to be transferred to the data elements304(e.g., block530returns a result of "NO"), the example process500ofFIG.5terminates. FIG.6is a block diagram of an example processor platform600structured to execute and/or instantiate the machine readable instructions and/or operations ofFIG.5to implement the display device ofFIG.1. The processor platform600can be, for example, a server, a personal computer, a workstation, a self-learning machine (e.g., a neural network), a mobile device (e.g., a cell phone, a smart phone, a tablet), a personal digital assistant (PDA), an Internet appliance, a DVD player, a CD player, a digital video recorder, a Blu-ray player, a gaming console, a personal video recorder, a set top box, a headset (e.g., an augmented reality (AR) headset, a virtual reality (VR) headset, etc.) or other wearable device, or any other type of computing device. The processor platform600of the illustrated example includes processor circuitry612.
The processor circuitry612of the illustrated example is hardware. For example, the processor circuitry612can be implemented by one or more integrated circuits, logic circuits, FPGAs, microprocessors, central processing units (CPUs), GPUs, DSPs, and/or microcontrollers from any desired family or manufacturer. The processor circuitry612may be implemented by one or more semiconductor based (e.g., silicon based) devices. In this example, the processor circuitry612implements the transceiver device110, the clocks130, the parallel register140, the parallel register clock150, the memory operator160, the transmission circuit180, and the compression override logic circuits187. The processor circuitry612of the illustrated example includes a local memory613(e.g., a cache, registers, etc.). The processor circuitry612of the illustrated example is in communication with a main memory including a volatile memory614and a non-volatile memory616by a bus618. The volatile memory614may be implemented by Synchronous Dynamic Random Access Memory (SDRAM), Dynamic Random Access Memory (DRAM), RAMBUS® Dynamic Random Access Memory (RDRAM®), and/or any other type of RAM device. The non-volatile memory616may be implemented by flash memory and/or any other desired type of memory device. Access to the main memory614,616of the illustrated example is controlled by a memory controller617. The processor platform600of the illustrated example also includes interface circuitry620. The interface circuitry620may be implemented by hardware in accordance with any type of interface standard, such as an Ethernet interface, a universal serial bus (USB) interface, a Bluetooth® interface, a near field communication (NFC) interface, a peripheral component interconnect (PCI) interface, and/or a PCIe interface. In the illustrated example, one or more input devices622are connected to the interface circuitry620. The input device(s)622enable(s) a user to enter data and/or commands into the processor circuitry612. The input device(s)622can be implemented by, for example, an audio sensor, a microphone, a camera (still or video), a keyboard, a button, a mouse, a touchscreen, a track-pad, a trackball, an isopoint device, and/or a voice recognition system. One or more output devices624are also connected to the interface circuitry620of the illustrated example. The output devices624can be implemented, for example, by display devices (e.g., a light emitting diode (LED), an organic light emitting diode (OLED), a liquid crystal display (LCD), a cathode ray tube (CRT) display, an in-place switching (IPS) display, a touchscreen, etc.), a tactile output device, a printer, and/or speaker. The interface circuitry620of the illustrated example, thus, typically includes a graphics driver card, a graphics driver chip, and/or graphics processor circuitry such as a GPU. The interface circuitry620of the illustrated example also includes a communication device such as a transmitter, a receiver, a transceiver, a modem, a residential gateway, a wireless access point, and/or a network interface to facilitate exchange of data with external machines (e.g., computing devices of any kind) by a network626. The communication can be by, for example, an Ethernet connection, a digital subscriber line (DSL) connection, a telephone line connection, a coaxial cable system, a satellite system, a line-of-sight wireless system, a cellular telephone system, an optical connection, etc.
The processor platform600of the illustrated example also includes one or more mass storage devices628to store software and/or data. Examples of such mass storage devices628include magnetic storage devices, optical storage devices, floppy disk drives, HDDs, CDs, Blu-ray disk drives, redundant array of independent disks (RAID) systems, solid state storage devices such as flash memory devices, and DVD drives. The machine readable instructions632,634,636,638,640, which may implement the machine readable instructions ofFIG.5, may be stored in the mass storage device628, in the volatile memory614, in the non-volatile memory616, and/or on a removable non-transitory computer readable storage medium such as a CD or DVD. FIG.7is a block diagram of an example implementation of the processor circuitry612ofFIG.6. In this example, the processor circuitry612ofFIG.6is implemented by a microprocessor700. For example, the microprocessor700may implement multi-core hardware circuitry such as a CPU, a DSP, a GPU, an XPU (any type of processing unit), etc. Although it may include any number of example cores702(e.g., 1 core), the microprocessor700of this example is a multi-core semiconductor device including N cores. The cores702of the microprocessor700may operate independently or may cooperate to execute machine readable instructions. For example, machine code corresponding to a firmware program, an embedded software program, or a software program may be executed by one of the cores702or may be executed by multiple ones of the cores702at the same or different times. In some examples, the machine code corresponding to the firmware program, the embedded software program, or the software program is split into threads and executed in parallel by two or more of the cores702. The software program may correspond to a portion or all of the machine readable instructions and/or operations represented by the flowchart ofFIG.5. The cores702may communicate by an example bus704. In some examples, the bus704may implement a communication bus to effectuate communication associated with one(s) of the cores702. For example, the bus704may implement at least one of an Inter-Integrated Circuit (I2C) bus, a Serial Peripheral Interface (SPI) bus, a PCI bus, or a PCIe bus. Additionally or alternatively, the bus704may implement any other type of computing or electrical bus. The cores702may obtain data, instructions, and/or signals from one or more external devices by example interface circuitry706. The cores702may output data, instructions, and/or signals to the one or more external devices by the interface circuitry706. Although the cores702of this example include example local memory720(e.g., Level 1 (L1) cache that may be split into an L1 data cache and an L1 instruction cache), the microprocessor700also includes example shared memory710that may be shared by the cores (e.g., a Level 2 (L2) cache) for high-speed access to data and/or instructions. Data and/or instructions may be transferred (e.g., shared) by writing to and/or reading from the shared memory710. The local memory720of each of the cores702and the shared memory710may be part of a hierarchy of storage devices including multiple levels of cache memory and the main memory (e.g., the main memory614,616ofFIG.6). Typically, higher levels of memory in the hierarchy exhibit lower access time and have smaller storage capacity than lower levels of memory. Changes in the various levels of the cache hierarchy are managed (e.g., coordinated) by a cache coherency policy.
Each core702may be referred to as a CPU, DSP, GPU, etc., or any other type of hardware circuitry. Each core702includes control unit circuitry714, arithmetic and logic (AL) circuitry (sometimes referred to as an ALU)716, a plurality of registers718, the L1 cache720, and an example bus722. Other structures may be present. For example, each core702may include vector unit circuitry, single instruction multiple data (SIMD) unit circuitry, load/store unit (LSU) circuitry, branch/jump unit circuitry, floating-point unit (FPU) circuitry, etc. The control unit circuitry714includes semiconductor-based circuits structured to control (e.g., coordinate) data movement within the corresponding core702. The AL circuitry716includes semiconductor-based circuits structured to perform one or more arithmetic and/or logic operations on the data within the corresponding core702. The AL circuitry716of some examples performs integer based operations. In other examples, the AL circuitry716also performs floating point operations. In yet other examples, the AL circuitry716may include first AL circuitry that performs integer based operations and second AL circuitry that performs floating point operations. In some examples, the AL circuitry716may be referred to as an Arithmetic Logic Unit (ALU). The registers718are semiconductor-based structures to store data and/or instructions such as results of one or more of the operations performed by the AL circuitry716of the corresponding core702. For example, the registers718may include vector register(s), SIMD register(s), general purpose register(s), flag register(s), segment register(s), machine specific register(s), instruction pointer register(s), control register(s), debug register(s), memory management register(s), machine check register(s), etc. The registers718may be arranged in a bank as shown inFIG.7. Alternatively, the registers718may be organized in any other arrangement, format, or structure including distributed throughout the core702to shorten access time. The bus722may implement at least one of an I2C bus, a SPI bus, a PCI bus, or a PCIe bus. Each core702and/or, more generally, the microprocessor700may include additional and/or alternate structures to those shown and described above. For example, one or more clock circuits, one or more power supplies, one or more power gates, one or more cache home agents (CHAs), one or more converged/common mesh stops (CMSs), one or more shifters (e.g., barrel shifter(s)) and/or other circuitry may be present. The microprocessor700is a semiconductor device fabricated to include many transistors interconnected to implement the structures described above in one or more integrated circuits (ICs) contained in one or more packages. The processor circuitry may include and/or cooperate with one or more accelerators. In some examples, accelerators are implemented by logic circuitry to perform certain tasks more quickly and/or efficiently than can be done by a general purpose processor. Examples of accelerators include ASICs and FPGAs such as those discussed herein. A GPU or other programmable device can also be an accelerator. Accelerators may be on-board the processor circuitry, in the same chip package as the processor circuitry and/or in one or more separate packages from the processor circuitry. FIG.8is a block diagram of another example implementation of the processor circuitry612ofFIG.6. In this example, the processor circuitry612is implemented by FPGA circuitry800.
The FPGA circuitry800can be used, for example, to perform operations that could otherwise be performed by the example microprocessor700ofFIG.7executing corresponding machine readable instructions. However, once configured, the FPGA circuitry800instantiates the machine readable instructions in hardware and, thus, can often execute the operations faster than they could be performed by a general purpose microprocessor executing the corresponding software. More specifically, in contrast to the microprocessor700ofFIG.7described above (which is a general purpose device that may be programmed to execute some or all of the machine readable instructions represented by the flowchart ofFIG.5but whose interconnections and logic circuitry are fixed once fabricated), the FPGA circuitry800of the example ofFIG.8includes interconnections and logic circuitry that may be configured and/or interconnected in different ways after fabrication to instantiate, for example, some or all of the machine readable instructions represented by the flowchart ofFIG.5. In particular, the FPGA circuitry800may be thought of as an array of logic gates, interconnections, and switches. The switches can be programmed to change how the logic gates are interconnected by the interconnections, effectively forming one or more dedicated logic circuits (unless and until the FPGA circuitry800is reprogrammed). The configured logic circuits enable the logic gates to cooperate in different ways to perform different operations on data received by input circuitry. Those operations may correspond to some or all of the software represented by the flowchart ofFIG.5. As such, the FPGA circuitry800may be structured to effectively instantiate some or all of the machine readable instructions of the flowchart ofFIG.5as dedicated logic circuits to perform the operations corresponding to those software instructions in a dedicated manner analogous to an ASIC. Therefore, the FPGA circuitry800may perform the operations corresponding to some or all of the machine readable instructions ofFIG.5faster than the general purpose microprocessor can execute the same. In the example ofFIG.8, the FPGA circuitry800is structured to be programmed (and/or reprogrammed one or more times) by an end user by a hardware description language (HDL) such as Verilog. The FPGA circuitry800ofFIG.8includes example input/output (I/O) circuitry802to obtain and/or output data to/from example configuration circuitry804and/or external hardware (e.g., external hardware circuitry)806. For example, the configuration circuitry804may implement interface circuitry that may obtain machine readable instructions to configure the FPGA circuitry800, or portion(s) thereof. In some such examples, the configuration circuitry804may obtain the machine readable instructions from a user, a machine (e.g., hardware circuitry (e.g., programmed or dedicated circuitry) that may implement an Artificial Intelligence/Machine Learning (AI/ML) model to generate the instructions), etc. In some examples, the external hardware806may implement the microprocessor700ofFIG.7. The FPGA circuitry800also includes an array of example logic gate circuitry808, a plurality of example configurable interconnections810, and example storage circuitry812. The logic gate circuitry808and interconnections810are configurable to instantiate one or more operations that may correspond to at least some of the machine readable instructions ofFIG.5and/or other desired operations.
The logic gate circuitry808shown inFIG.8is fabricated in groups or blocks. Each block includes semiconductor-based electrical structures that may be configured into logic circuits. In some examples, the electrical structures include logic gates (e.g., AND gates, OR gates, NOR gates, etc.) that provide basic building blocks for logic circuits. Electrically controllable switches (e.g., transistors) are present within each of the logic gate circuitry808to enable configuration of the electrical structures and/or the logic gates to form circuits to perform desired operations. The logic gate circuitry808may include other electrical structures such as look-up tables (LUTs), registers (e.g., flip-flops or latches), multiplexers, etc. The interconnections810of the illustrated example are conductive pathways, traces, vias, or the like that may include electrically controllable switches (e.g., transistors) whose state can be changed by programming (e.g., using an HDL instruction language) to activate or deactivate one or more connections between one or more of the logic gate circuitry808to program desired logic circuits. The storage circuitry812of the illustrated example is structured to store result(s) of the one or more of the operations performed by corresponding logic gates. The storage circuitry812may be implemented by registers or the like. In the illustrated example, the storage circuitry812is distributed amongst the logic gate circuitry808to facilitate access and increase execution speed. The example FPGA circuitry800ofFIG.8also includes example dedicated operations circuitry814. In this example, the dedicated operations circuitry814includes special purpose circuitry816that may be invoked to implement commonly used functions to avoid the need to program those functions in the field. Examples of such special purpose circuitry816include memory (e.g., DRAM) controller circuitry, PCIe controller circuitry, clock circuitry, transceiver circuitry, memory, and multiplier-accumulator circuitry. Other types of special purpose circuitry may be present. In some examples, the FPGA circuitry800may also include example general purpose programmable circuitry818such as an example CPU820and/or an example DSP822. Other general purpose programmable circuitry818may additionally or alternatively be present such as a GPU, an XPU, etc., that can be programmed to perform other operations. AlthoughFIGS.7and8illustrate two example implementations of the processor circuitry612ofFIG.6, many other approaches are contemplated. For example, as mentioned above, modern FPGA circuitry may include an on-board CPU, such as one or more of the example CPU820ofFIG.8. Therefore, the processor circuitry612ofFIG.6may additionally be implemented by combining the example microprocessor700ofFIG.7and the example FPGA circuitry800ofFIG.8. In some such hybrid examples, a first portion of the machine readable instructions represented by the flowchart ofFIG.5may be executed by one or more of the cores702ofFIG.7and a second portion of the machine readable instructions represented by the flowchart ofFIG.5may be executed by the FPGA circuitry800ofFIG.8. In some examples, the processor circuitry612ofFIG.6may be in one or more packages. For example, the processor circuitry612ofFIG.6and/or the FPGA circuitry800ofFIG.8may be in one or more packages. In some examples, an XPU may be implemented by the processor circuitry612ofFIG.6, which may be in one or more packages. 
For example, the XPU may include a CPU in one package, a DSP in another package, a GPU in yet another package, and an FPGA in still yet another package. A block diagram illustrating an example software distribution platform905to distribute software such as the example machine readable instructions632,634,636,638,640ofFIG.6to hardware devices owned and/or operated by third parties is illustrated inFIG.9. The example software distribution platform905may be implemented by any computer server, data facility, cloud service, etc., capable of storing and transmitting software to other computing devices. The third parties may be customers of the entity owning and/or operating the software distribution platform905. For example, the entity that owns and/or operates the software distribution platform905may be a developer, a seller, and/or a licensor of software such as the example machine readable instructions632,634,636,638,640ofFIG.6. The third parties may be consumers, users, retailers, OEMs, etc., who purchase and/or license the software for use and/or re-sale and/or sub-licensing. In the illustrated example, the software distribution platform905includes one or more servers and one or more storage devices. The storage devices store the machine readable instructions632,634,636,638,640, which may correspond to the example process500ofFIG.5, as described above. The one or more servers of the example software distribution platform905are in communication with a network910, which may correspond to any one or more of the Internet and/or any of the example networks described above. In some examples, the one or more servers are responsive to requests to transmit the software to a requesting party as part of a commercial transaction. Payment for the delivery, sale, and/or license of the software may be handled by the one or more servers of the software distribution platform and/or by a third party payment entity. The servers enable purchasers and/or licensors to download the machine readable instructions632,634,636,638,640from the software distribution platform905. For example, the software, which may correspond to the example process500ofFIG.5, may be downloaded to the example processor platform600, which is to execute the machine readable instructions632,634,636,638,640to implement the display device100ofFIG.1. In some examples, one or more servers of the software distribution platform905periodically offer, transmit, and/or force updates to the software (e.g., the example machine readable instructions632,634,636,638,640ofFIG.6) to ensure improvements, patches, updates, etc., are distributed and applied to the software at the end user devices. From the foregoing, it will be appreciated that methods, apparatus and articles of manufacture have been disclosed that implement a decompression circuit including buffers to transfer data (e.g., compressed data) on a display device such as, for example, from a transceiver side to a receiver side of the display device. The buffers are matched utilizing matching techniques. The buffers may be matched in quantity (e.g., determining a number of buffers on the display device) and layout (e.g., determining locations and routing of buffers on the display device) to affect the timing of data arriving at buffers. Further, clock signals including the clock events (e.g., the clock events causing buffers to load data to data elements) are matched to not cause delays loading data to the data elements.
The buffers and clock signals are matched to maintain relationships between the timing of data and clock events arriving at buffers, such as a timing margin. The disclosed methods, apparatus and articles of manufacture reduce the overall energy usage of the display device and the risks of bugs. The disclosed methods, apparatus and articles of manufacture are accordingly directed to one or more improvement(s) in the functioning of a computer.
11862118 DESCRIPTION OF EMBODIMENTS Hereinafter, a display data processing device according to an embodiment of the present invention will be described with reference to the drawings.FIG.1is a diagram which shows a configuration example of an image display system using a display data processing device according to an embodiment of the present invention. As shown inFIG.1, an image display system1includes a video receiving unit11, a display data processing device12, and a liquid crystal panel13. The display data processing device12includes each of a first gamma correction unit121, a video gain correction unit122, and a second gamma correction unit123. The video receiving unit11is a High-Definition Multimedia Interface (HDMI: registered trademark)/Display Port (DP) Receiver (Rx), or the like. In addition, the video receiving unit11receives a video signal from an external device, performs waveform shaping and the like, and then outputs this video signal to the display data processing device12. The display data processing device12corrects a gradation value of a pixel of a frame in the video signal according to a user setting gamma characteristic (a first gamma characteristic) and a liquid crystal panel gamma characteristic (a second gamma characteristic) of the liquid crystal panel13(to be detailed below). In addition, the display data processing device12performs color adjustment (brightness adjustment, chromaticity adjustment, and the like) of each of the displayed color components R, G, and B. The liquid crystal panel13is an example of a display device. The display data processing device12according to the present embodiment can also be used with a display device such as a plasma display, a Cathode Ray Tube (CRT), a projector, and the like. In the display data processing device12, the first gamma correction unit121converts a first input gradation value (an input gradation value A) of each pixel of the color components R, G, and B of the video signal into a first corrected gradation value (an output gradation value C) corresponding to the user setting gamma characteristics (for example, the HDR standard characteristics). Here, the first gamma correction unit121includes, for example, a first gamma correction lookup table (LUT) referred to when the first input gradation value of the pixel of the video signal is converted into the first corrected gradation value. FIG.2is a conceptual diagram which describes gamma characteristics of the first gamma correction LUT. InFIG.2, a third quadrant QD3 indicates the gamma characteristics of the first gamma correction LUT. In addition, a first quadrant QD1 and a second quadrant QD2 indicate a method of generating the first gamma correction LUT. The first gamma correction LUT is a table which describes a correspondence between the input gradation value A and the output gradation value C, indicated by a gamma characteristic curve L3 shown in the third quadrant QD3. The input gradation value A is the gradation value input to the first gamma correction unit121. The gradation value of each pixel of the color components R, G, and B in each frame of a video signal supplied from the video receiving unit11serves as the input gradation value A. In addition, the first corrected gradation value is an output gradation value C corresponding to the input gradation value A on the gamma characteristic curve L3 inFIG.2.
In the third quadrant QD3, a graph of the gamma characteristic curve L3 shows the input gradation value A on the vertical axis and the output gradation value C on the horizontal axis. The gamma characteristic curve L3 is generated using a first gamma characteristic curve L1 indicating the user setting gamma characteristic in the first quadrant QD1 and a power curve L2 in the second quadrant QD2. The power curve L2 is the curve of a predetermined power function (for example, a square function). In the graph of the first gamma characteristic curve L1, the horizontal axis indicates the input gradation value A, and the vertical axis indicates a normalized brightness value B (a first normalized brightness value). The input gradation value A indicates gradation values of pixels in the frame of a video signal. The normalized brightness value B indicates a brightness value obtained by normalizing brightness values less than a maximum brightness value by setting the maximum brightness value of each of the color components R, G, and B to 1. Here, the maximum brightness value is a brightness value at the maximum gradation value of 255 when the color components R, G, and B have 256 gradations. The power curve L2 is shown by the following Equation (3). Y=(x/255)^n (3) In Equation (3) described above, Y is the normalized brightness value B and x is the output gradation value C described above. 255 is an example of the maximum gradation value that the liquid crystal panel13uses for display. In a graph of the power curve L2 of a predetermined power function, the horizontal axis indicates the output gradation value C (the power root of the power function), and the vertical axis indicates the normalized brightness value B (the first normalized brightness value), which is an output value of the power function. A power (exponent) n of the power function is preferably in a range of 1≤n≤3. Moreover, when the power n of the power function is 2 (n=2), a calculation load can be reduced. The gamma characteristic curve L3 of the first gamma correction LUT is generated by the following processing. By setting each of the gradation values 0 to 255 as the input gradation value A, the first normalized brightness value is obtained as the normalized brightness value B corresponding to each of the input gradation values A on the first gamma characteristic curve L1. Then, on the power curve L2, the output gradation value C is obtained as a power root corresponding to the normalized brightness value B described above. At this time, since the power curve L2 is a power function of Equation (3), the output gradation value C is obtained as Y^(1/n)*255, which is the power root x of the power function of Equation (3). If Y is the normalized brightness value B (the first normalized brightness value), the power root x is B^(1/n)*255. Here, the normalized brightness value B is a power value of the power function of Equation (3). In addition, the output gradation value C is obtained as the power root x of the normalized brightness value B. In addition, when n=2, Equation (3) becomes a square function and the square root (the power root) x is Y^(1/2)*255. If a squared value Y is the normalized brightness value B, the square root x is B^(1/2)*255. As described above, the normalized brightness value B (the first normalized brightness value) in the first gamma characteristic curve L1 of the user setting gamma characteristic is set to a power value on the power curve L2 of a power function of the same value.
Then, the output gradation value C is obtained as a power root corresponding to the power value of the power curve L2. As a result, when the input gradation value A is input, the gamma characteristic curve L3 is obtained as a curve that shows a relationship between the input gradation value A and the output gradation value C obtained by the processing described above. That is, the input gradation value A on the gamma characteristic curve L3 is converted into the output gradation value C as a power root of the power curve L2 corresponding to the normalized brightness value B. In this manner, since the output gradation value C is obtained as the power root of the power curve L2, the curve shape of the gamma characteristic curve L3 is set to a shape approximating a straight line. Here, when the curve shape of the power curve L2 is a square characteristic (a power of 2), the calculation for obtaining the output gradation value C corresponding to the normalized brightness value B on the power curve L2 becomes a calculation for obtaining a square root and becomes easier. Next, in the display data processing device12, the second gamma correction unit123inputs a video gain corrected gradation value of the pixels of a video signal supplied from the video gain correction unit122as the input gradation value D. Then, the second gamma correction unit123converts the input gradation value D into an output gradation value F (a second corrected gradation value) corresponding to the liquid crystal panel gamma characteristic of the liquid crystal panel13. Here, the second gamma correction unit123includes, for example, a second gamma correction LUT that is referred to when the input gradation value D is converted into the output gradation value F. The second gamma correction LUT is an LUT obtained based on the power curve L2 and the second gamma characteristic curve L4 of the liquid crystal panel gamma characteristic. The second gamma correction LUT shows a relationship between the input gradation value D and the output gradation value F (the second corrected gradation value). FIG.3is a conceptual diagram which describes gamma characteristics of the second gamma correction LUT. InFIG.3, a third quadrant QE3 indicates the gamma characteristics of the second gamma correction LUT. In addition, a first quadrant QE1 and a second quadrant QE2 are graphs for describing a method of generating the second gamma correction LUT. The second gamma characteristic curve L4 is a curve that indicates the liquid crystal panel gamma characteristic shown in the second quadrant QE2. In a graph of the second gamma characteristic curve L4, the horizontal axis indicates the output gradation value F, and the vertical axis indicates a normalized brightness value E (a second normalized brightness value). That is, the second gamma characteristic curve L4 is a curve that shows a brightness value (the normalized brightness value E) of a pixel displayed on the liquid crystal panel13with respect to the input gradation value D that is input. The gamma characteristic curve L5 is generated using the power curve L2 in the first quadrant QE1 and the second gamma characteristic curve L4 of the liquid crystal panel gamma characteristic in the second quadrant QE2. InFIG.3, the graph of the power curve L2 in the first quadrant QE1 shows the input gradation value D on the horizontal axis and the normalized brightness value E (the second normalized brightness value) on the vertical axis.
That is, the power curve L2 inFIG.3is a curve of the function represented by the same Equation (3) as the power curve L2 of the second quadrant QD2 inFIG.2. The input gradation value D indicates a gradation value supplied to the second gamma correction unit123. For example, a video gain corrected gradation value provided by the video gain correction unit122becomes the input gradation value D. The normalized brightness value E (the second normalized brightness value) indicates a brightness value obtained by normalizing brightness values less than the maximum brightness value by setting the maximum brightness value of each of the color components R, G, and B to 1. The maximum brightness value is a brightness value at the maximum gradation value 255 when the color components R, G, and B have 256 gradations. The normalized brightness value E is obtained as a brightness value corresponding to the input gradation value D on the power curve L2. x in Equation (3) is the input gradation value D of the pixel of a video signal supplied to the second gamma correction unit123, as described above. Here, by substituting the input gradation value D for the power root x in Equation (3), the normalized brightness value E (second normalized brightness value) is obtained as the power value of the power function. That is, on the first gamma characteristic curve L1 of the user setting gamma characteristic, the normalized brightness value E is obtained as a normalized brightness value that is a result of performing gamma correction on the input gradation value D. That is, by setting the input gradation value D as the power root of a power function of the power curve L2, the normalized brightness value E is obtained as the power value of a power function corresponding to this power root. As a result, on the power curve L2, the input gradation value D can be converted into the normalized brightness value E of the power value as the power root of the power function, and the normalized brightness value E can be easily obtained based on the input gradation value D. In the graph of the second gamma characteristic curve L4 in the second quadrant QE2, the horizontal axis indicates the output gradation value F, and the vertical axis indicates the normalized brightness value E (second normalized brightness value). The normalized brightness value E on the second gamma characteristic curve L4 is a measurement value obtained by measuring a brightness value when each output gradation value F is input using an optical measuring instrument in a configuration of only the liquid crystal panel13and in a state of no correction. As a result, the gamma characteristic curve L5 is obtained as a curve which shows a relationship between the input gradation value D and the output gradation value F obtained by the processing described above when the input gradation value D is input. That is, the output gradation value F corresponding to the same brightness value as the normalized brightness value E of the first gamma characteristic curve L1 is obtained from the graph of the second gamma characteristic curve L4. According to the processing described above, the liquid crystal panel13displays an image according to a display characteristic corresponding to the user setting gamma characteristic.
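A compact sketch of how the two LUTs could be built from these curves follows, assuming n = 2 and 8-bit gradations. The user_gamma and panel_gamma stand-ins below (simple power laws with exponents 2.4 and 2.2) are assumptions replacing the actual first gamma characteristic curve L1 and the measured second gamma characteristic curve L4, which the patent does not give in closed form.

```python
N = 2  # power of the power curve L2 (square characteristic)

def user_gamma(a: int) -> float:    # stand-in for curve L1 (assumed gamma 2.4)
    return (a / 255) ** 2.4

def panel_gamma(f: int) -> float:   # stand-in for measured curve L4 (assumed gamma 2.2)
    return (f / 255) ** 2.2

# First gamma correction LUT: A -> C, where C is the power root of the
# normalized brightness value B = L1(A), scaled back to 8 bits.
lut1 = [round(255 * user_gamma(a) ** (1 / N)) for a in range(256)]

# Second gamma correction LUT: D -> F, where E = (D/255)^n and F is the panel
# gradation whose measured brightness is closest to E.
lut2 = [min(range(256), key=lambda f, e=(d / 255) ** N: abs(panel_gamma(f) - e))
        for d in range(256)]
```

Because lut1 takes a power root and lut2 composes the power with the inverted panel curve, both tables come out close to straight lines in this sketch, which is the near-linear property the embodiment relies on.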
In the video gain correction unit122, correction of color temperature, adjustment of white point due to a change with time, adjustment of light emission unevenness, limitation of maximum brightness, and the like are performed by adjusting the gradation value in each pixel of the color components RGB. In the video gain correction, the processing of color adjustment is performed by, for example, setting an adjustment ratio k1 (0≤k1≤1), which is a ratio to the brightness value at the time of no correction, multiplying a gradation value x of a value with no correction by it, and calculating k1*x as a video gain corrected gradation value. In the present embodiment, when color adjustment is performed, the adjustment ratio k1 is used as a constant by which the output gradation value C supplied from the first gamma correction unit121is multiplied. Then, when the adjustment ratio k2 that adjusts a brightness of a screen is input, the following identity is established because the output gradation value C is the power root obtained from the power function. (k1*x)^n=(k1)^n*f(x)=k2*f(x) In the identity described above, since f(x) is a brightness value and the brightness value f(x) is multiplied by the adjustment ratio k2, an adjusted brightness value is k2*f(x). In addition, since the brightness value is obtained as the power value of the power function, the brightness value when the gradation value x is multiplied by the adjustment ratio k1 is obtained as a power value (k1*x)^n. Then, since the power function of the brightness value is f(x)=x^n, the brightness value is expressed as (k1*x)^n=(k1)^n*f(x). For this reason, when adjustment ratios k2r, k2g, and k2b are set as constants for adjusting the brightness (brightness of the screen) of the target color components R, G, and B by a user, when the video gain corrected gradation values of the color components R, G, and B are obtained, adjustment ratios k1r, k1g, and k1b by which the gradation values are multiplied are obtained by the following Equation (4). k1r=(k2r)^(1/n), k1g=(k2g)^(1/n), k1b=(k2b)^(1/n) (4) For example, if a user wants to set the brightness of a color component R to 50% of the brightness with no correction, the user inputs 0.5 as the adjustment ratio k2r. For this reason, when the power n of the power function is set to 2, an adjustment ratio used for video gain correction is obtained as k1r=(k2r)^(1/2)=(0.5)^(1/2)≈0.707 for the color component R according to Equation (4). Then, if the gradation characteristics after the video gain correction unit122are expressed as the function f(x), the brightness value Y of the screen after color adjustment is f(k1*x). Here, x is a gradation value and k1 is an adjustment ratio. In addition, since the gradation characteristics are a power function having power characteristics, the brightness value Y can be expressed as (k1*x)^n by Equation (3). As a result, the equation described above that represents the brightness value Y will be transformed to Y=(k1*x)^n=(k1)^n*(x)^n=(k1)^n*f(x). Therefore, there are constants k1 (=(k2)^(1/n)) and k2 that satisfy (k1*x)^n=k2*f(x), and the adjustment ratio k2 can be arbitrarily set by a user or the like as a constant. This maintains characteristics shown inFIG.9that each of the same gradation values in the color components RGB after color adjustment has the same screen brightness (a normalized brightness value).
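A minimal numeric check of Equation (4) and the identity above, assuming n = 2 as in the example; gain_ratio is a hypothetical helper name, not a function from the disclosure.

```python
N = 2  # power of the power function (square characteristic)

def gain_ratio(k2: float) -> float:
    """Equation (4): k1 = k2^(1/n), the ratio applied to the gradation value."""
    return k2 ** (1 / N)

k2r = 0.5                 # user requests 50% brightness on the color component R
k1r = gain_ratio(k2r)     # ~0.707
x = 200                   # any gradation value
# The identity (k1*x)^n = k2 * x^n holds for every x, so equal gradation values
# in R, G, and B keep equal normalized brightness after color adjustment.
assert abs((k1r * x) ** N - k2r * x ** N) < 1e-9
print(f"k1r = {k1r:.3f}")
```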
For this reason, in the present embodiment, when the brightness changes in gray color as in Patent Document 1 and each of the color components RGB has the same gradation value, coloring due to different screen brightness (normalized brightness values) as shown inFIG.10does not occur. In addition, in the present embodiment, a simple power function that is generally used in mathematics and programming is used as the power function of Equation (3). For this reason, it is possible to calculate, at high speed and by using an equation (simple power function) without using a gamma correction LUT or the like, the mutual conversion between the adjustment ratio k2 and the adjustment ratio k1 used for video gain correction, and a brightness value (normalized brightness value) indicating screen brightness characteristics when the adjustment ratio k1 is used. In addition, in the present embodiment, the second gamma correction unit123uses the video gain corrected gradation value input from the video gain correction unit122as the input gradation value D, and obtains the output gradation value F using the second gamma correction LUT. That is, the second gamma correction unit123inputs the video gain corrected gradation value (the output gradation value C multiplied by the adjustment ratio k1 (=(k2)^(1/n))) as the input gradation value D. As a result, the second gamma correction unit123uses the output gradation value C multiplied by the adjustment ratio k1 (=(k2)^(1/n)) as the input gradation value D, and obtains the output gradation value F by referring to the second gamma correction LUT. As a result, in the present embodiment, each time the adjustment ratio k2 is input, the video gain correction unit122calculates the adjustment ratio k1 based on the supplied adjustment ratio k2. However, in the present embodiment, by substituting the video gain corrected gradation value (k1*x) into Equation (3) as the input gradation value D, there is no need, as in Patent Document 2, to rewrite the second gamma correction LUT or the like each time the adjustment ratio k2 is input in order to obtain the normalized brightness value E. For this reason, in the present embodiment, since the gamma correction LUT is not rewritten, there is no need to reconfigure the gamma correction LUT as in Patent Document 2 each time color adjustment is performed, and a calculation load can be reduced. Therefore, in the present embodiment, a display image corresponding to the adjustment ratio k2 supplied for color adjustment can be produced by following an operation in color adjustment in real time. As a result, in the present embodiment, when color adjustment of the display screen is performed, it is not necessary to rewrite the gamma correction LUT each time the adjustment ratio k2 is supplied, and flickering of the screen or noise such as display of unnecessary colors does not occur on the display screen, unlike in Patent Document 2. In addition, in the present embodiment, it is possible to perform color adjustment (including white point adjustment) that follows an operation of a user for color adjustment in real time without delaying it, and to adjust colors to the colors desired by the user at high speed, compared to the conventional example. FIGS.4A through4Eare conceptual diagrams which describe gamma correction and color adjustment processing by the display data processing device12of the present embodiment.
FIG.4Ais a histogram which shows the number of pixels (or the frequency of occurrence) of gradation values in each frame of a video signal supplied to the video receiving unit11in the configuration shown inFIG.1. InFIG.4A, the horizontal axis indicates the gradation value, and the vertical axis indicates the number of pixels in a corresponding gradation. The histogram inFIG.4Ashows the shape of a normal distribution. FIG.4Bshows gamma characteristics (the gamma characteristic curve L3 inFIG.2) of the first gamma correction LUT used for gamma correction in the first gamma correction unit121. In the graph inFIG.4B, the horizontal axis indicates a gradation value that is input (the input gradation value A inFIG.2), and the vertical axis indicates a gradation value (the output gradation value C inFIG.2) that is corrected and output. In addition, since the user setting gamma characteristics such as the HDR standard characteristics are converted by the power characteristics (power function) of the power curve L2, the shape is closer to a straight line than the gamma characteristic curve inFIG.13B. That is, in the graph ofFIG.4B, the gamma characteristics (the gamma characteristic curve L3) of the first gamma correction LUT have the correspondence relationship between an input gradation value and an output gradation value in a more linear shape, compared to the gamma characteristic curve ofFIG.13B. For this reason, in the present embodiment, unlike the curve inFIG.13B, the gamma characteristic curve L3 of the first gamma correction LUT corrects the gradation values in all the gradation areas, including the gradation values in the dark area, at substantially the same ratio. FIG.4Cshows a histogram of the gradation values of a video signal corrected by the gamma characteristic curve L3 of the first gamma correction LUT inFIG.4B. InFIG.4C, the horizontal axis indicates a gradation value, and the vertical axis indicates the number of pixels (or the frequency of occurrence) in the gradation value. The histogram shown inFIG.4Cis a histogram which shows the number of pixels for each gradation value in the frame of the video signal supplied to the second gamma correction unit123inFIG.1. FIG.4Cis a histogram of a frame resulting from correction of the gradation values by the gamma characteristic curve (the gamma characteristic curve L3) ofFIG.4B. Compared to the histogram ofFIG.4A, the histogram ofFIG.4Chas a shape in which a center of the distribution shifts toward a lower gradation value as a whole, and is biased towards the dark area. However, the first gamma characteristic ofFIG.4Bhas a shape closer to a straight line than that ofFIG.13B. For this reason, the histogram ofFIG.4Cshows that each gradation value is maintained without the center of the distribution being biased toward the dark area as inFIG.13C. FIG.4Dshows the gamma characteristic of the second gamma correction LUT used by the second gamma correction unit123for gamma correction (the gamma characteristic curve L5 inFIG.3). In the graph inFIG.4D, the horizontal axis indicates an input gradation value that is input (the input gradation value D inFIG.3), and the vertical axis indicates an output gradation value that is corrected and output (output gradation value F inFIG.3).
Here, the gamma characteristic curve L5 of the second gamma correction LUT sets the normalized brightness value E of the gamma characteristic of the liquid crystal panel 13 (the gamma characteristic curve L4 in FIG. 3) as the power value of the power function of the power curve (the power curve L2 in FIG. 3). For this reason, the gamma characteristic curve L5 in FIG. 4D has a shape closer to a straight line than the inverse characteristic of the liquid crystal panel gamma characteristic in FIG. 13D. For this reason, unlike the curve in FIG. 13D, the gradation values in all gradation areas, including those in the dark area, are corrected at approximately the same ratio by the gamma characteristic curve L5 in FIG. 4D. FIG. 4E shows a histogram of the gradation values of a video signal corrected by the gamma correction curve L5 (the second gamma correction LUT) in FIG. 4D. The histogram of FIG. 4E shows the number of pixels (or the frequency of occurrence) of each gradation value in a frame of the video signal supplied to the liquid crystal panel 13 in the configuration shown in FIG. 1. In FIG. 4E, the horizontal axis indicates the gradation value, and the vertical axis indicates the number of pixels (or the frequency of occurrence). FIG. 4E is a histogram that, as a result of correction by the gamma correction curve L5 (the second gamma correction LUT) in FIG. 4D, approximates the histogram in FIG. 4A and has a shape closer to a normal distribution as a whole. As shown in FIG. 4B, the gamma characteristic (the gamma characteristic curve L3) of the first gamma correction LUT is close to a straight line. For this reason, compared to the histogram of FIG. 13C, the histogram of FIG. 4C has fewer pixels that are quantized and corrected to specific gradation values in the dark area, and there is no bias in which the center of the distribution shifts to the dark area. Similarly, as shown in FIG. 4D, the gamma characteristic (the gamma characteristic curve L5) of the second gamma correction LUT is also close to a straight line. For this reason, in the present embodiment, since the second gamma correction is close to a straight line, fewer gradation values are quantized to specific gradation values in the dark area than in the histogram of FIG. 13C of Patent Document 3. As a result, in the present embodiment, the gradation values of the dark area are not quantized to a specific gradation value as in Patent Document 3, so neither crushed shadows nor crushed gradation (skipped gradation) occurs in the gradation values of the dark area as shown in FIG. 12. As a result, according to the present embodiment, as shown in FIG. 11, even after gamma correction or color adjustment is performed, the gradation values in the dark area change as continuously as those in the other gradation areas. FIG. 5 is a flowchart describing the operations of the gamma correction and color adjustment processing performed by the display data processing device 12 of the present embodiment. In the following description, the power of the power function is set to 2; that is, the power function is the square function. Step S101: The video receiving unit 11 outputs the pixels of a frame of a video signal supplied from an external device to the display data processing device 12. The first gamma correction unit 121 receives the gradation values of the pixels of the video signal from the video receiving unit 11 as the input gradation value A.
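As an illustration (not part of the patent disclosure), the two LUTs described above can be sketched as follows. The concrete numbers are assumptions: 8-bit gradation values, a power exponent n = 2, a user-set gamma of 2.2, and a panel gamma of 2.4; the function names are likewise illustrative.

```python
import numpy as np

N = 2.0        # power of the power curve L2 (assumed; 2 in the example)
LEVELS = 256   # assumed 8-bit gradation range

def user_gamma(a):           # first gamma characteristic: A -> normalized brightness B
    return a ** 2.2

def panel_gamma_inverse(e):  # inverse of the panel gamma characteristic: E -> F
    return e ** (1.0 / 2.4)

x = np.linspace(0.0, 1.0, LEVELS)

# First LUT: A -> C, where B = user_gamma(A) and C = B^(1/N).
# Taking the N-th root flattens the curve toward a straight line.
first_lut = (user_gamma(x) ** (1.0 / N) * (LEVELS - 1)).round().astype(np.uint8)

# Second LUT: D -> F, where E = D^N and F is the panel drive value for E.
# Squaring undoes the earlier root, and the panel inverse is flattened too.
second_lut = (panel_gamma_inverse(x ** N) * (LEVELS - 1)).round().astype(np.uint8)
```

Because both tables are close to linear, few dark-area inputs collapse onto the same output code, which is the property the histograms of FIGS. 4C and 4E illustrate.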
Step S102: The first gamma correction unit 121 refers to the first gamma correction LUT written in its internal storage unit in advance. As already described, the first gamma correction LUT is generated based on the first gamma characteristic curve L1 of the user-set gamma characteristic and the power curve L2 of the square function. The first gamma correction unit 121 sets the gradation values (the color components R, G, and B) of the pixels in the frame of the video signal as the input gradation value A, and reads the output gradation value C corresponding to the input gradation value A from the first gamma correction LUT. That is, the output gradation value C is obtained by setting the normalized brightness value B (a first normalized brightness value) of the first gamma characteristic curve L1 corresponding to the input gradation value A as the power value of the square function, and taking from the first gamma correction LUT the power root of the square function corresponding to this power value. In the present embodiment, the processing of correcting the input gradation value A to the output gradation value C can thus be performed at high speed merely by referring to the first gamma correction LUT. Then, the first gamma correction unit 121 outputs the output gradation value C read from the first gamma correction LUT, corresponding to the input gradation value A, to the video gain correction unit 122 as the result of the first gamma correction. Step S103: The video gain correction unit 122 obtains the adjustment ratios k1r, k1g, and k1b of the color components R, G, and B by which the input output gradation value C is multiplied to obtain the video-gain-corrected gradation value. To do so, the video gain correction unit 122 calculates the square root of each of the adjustment ratios k2r, k2g, and k2b with respect to brightness supplied from the external device. Then, the video gain correction unit 122 performs color adjustment by multiplying the gradation value of each of the color components R, G, and B by the corresponding calculated adjustment ratio k1r (= (k2r)^(1/2)), k1g (= (k2g)^(1/2)), or k1b (= (k2b)^(1/2)). The video gain correction unit 122 outputs the color-adjusted gradation values of the color components R, G, and B to the second gamma correction unit 123 as video-gain-corrected gradation values. Step S104: The second gamma correction unit 123 acquires the video-gain-corrected gradation value from the video gain correction unit 122. The second gamma correction unit 123 uses the input video-gain-corrected gradation value as the input gradation value D, and refers to the second gamma correction LUT written in its internal storage unit in advance. The second gamma correction LUT is generated based on the power curve L2 of the square function and the second gamma characteristic curve L4 of the second gamma characteristic of the liquid crystal panel 13. That is, the second gamma correction LUT of the present embodiment is generated such that the input gradation value D is set as the power root of the square function, the power value corresponding to this power root is set as the normalized brightness value E (a second normalized brightness value), and the gradation value of the second gamma characteristic curve L4 corresponding to this normalized brightness value E is set as the output gradation value F.
Then, the second gamma correction unit 123 sets the video-gain-corrected gradation value (the gradation value of each pixel in the frame of the video signal on which video gain correction has been performed) as the input gradation value D, refers to the second gamma correction LUT, and reads the output gradation value F corresponding to the input gradation value D. As a result, in the present embodiment, the processing of setting the video-gain-corrected gradation value as the input gradation value D and correcting it to the output gradation value F corresponding to the gamma characteristic of the liquid crystal panel 13 can be performed at high speed by referring to the second gamma correction LUT. Step S105: The second gamma correction unit 123 outputs the output gradation value F read from the second gamma correction LUT to the liquid crystal panel 13 as the result of the second gamma correction. Here, the second gamma correction unit 123 acquires the output gradation value F as the gradation value of a pixel of the video signal displayed on the liquid crystal panel 13 according to the second gamma correction LUT, which includes the second gamma characteristic of the liquid crystal panel 13. Then, the liquid crystal panel 13 displays the pixels of the video signal as an image on the display screen according to each output gradation value F supplied from the second gamma correction unit 123. As a result, the liquid crystal panel 13 can display an image that corresponds to the user-set gamma characteristic and on which color adjustment such as white point adjustment has been performed. Step S106: The second gamma correction unit 123 determines whether to end the display data processing, for example, by determining whether the image display system 1 has been powered off. The second gamma correction unit 123 ends the processing when the display data processing is finished, and returns the processing to step S101 when it is not. FIG. 6 is a diagram describing a concept of the embodiment of the present invention. The display data processing device 12 includes the first gamma correction unit 121, the video gain correction unit 122, and the second gamma correction unit 123. Here, the first gamma correction unit 121 includes the first gamma correction LUT, and the second gamma correction unit 123 includes the second gamma correction LUT. The display data processing device 12 performs color adjustment on the gradation values of the pixels of a frame image of an input video signal, and outputs the adjusted gradation values to a display device, for example, a liquid crystal panel. The first gamma correction unit 121 sets a gradation value as a first input gradation value (the input gradation value A) and converts it into a first corrected gradation value (the output gradation value C), based on first composite correction information (the gamma characteristic in the first gamma correction LUT) according to a first gamma characteristic indicating a correspondence relationship between the first input gradation value and a first normalized brightness value (the normalized brightness value B) and a first power function indicating a correspondence relationship between the first normalized brightness value and the first corrected gradation value. The video gain correction unit 122 performs color adjustment on the first corrected gradation value according to video gain correction and outputs the result as a video-gain-corrected gradation value.
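As an illustration (not part of the patent disclosure), the per-pixel flow of steps S101 through S105 can be sketched as follows, reusing the first_lut and second_lut arrays from the previous sketch. The per-channel ratios and helper names are assumptions made for illustration.

```python
# Minimal sketch of steps S101-S105 for one pixel, building on the
# first_lut / second_lut arrays defined in the previous sketch.

def process_pixel(rgb, k2=(1.0, 1.0, 1.0)):
    """rgb: 8-bit (R, G, B) gradation values; k2: per-channel brightness
    adjustment ratios (k2r, k2g, k2b). Returns the panel drive values."""
    out = []
    for a, k2c in zip(rgb, k2):
        c = first_lut[a]                     # S102: first gamma correction
        k1c = k2c ** 0.5                     # S103: k1 = sqrt(k2)
        d = min(int(round(c * k1c)), 255)    # S103: video gain correction
        out.append(int(second_lut[d]))       # S104: second gamma correction
    return tuple(out)                        # S105: output to the panel

# Example: warm the white point by dimming blue to 80 % brightness.
print(process_pixel((200, 200, 200), k2=(1.0, 1.0, 0.8)))
```

Note that only the multiplication in S103 depends on the adjustment ratios; both LUTs stay fixed, which is the point the embodiment makes against Patent Document 2.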
The second gamma correction unit 123 sets the video-gain-corrected gradation value as a second input gradation value (the input gradation value D) and converts it into a second corrected gradation value (a second output gradation value), based on second composite correction information according to a second power function having the same power as the first power function, which indicates a correspondence relationship between the second input gradation value and a second normalized brightness value (the normalized brightness value E), and a second gamma characteristic indicating a correspondence relationship between the second normalized brightness value and the second corrected gradation value. With the configuration of FIG. 6, it is possible to provide a display data processing device 12 that changes the color of the display screen smoothly and prevents image quality deterioration on the display screen, by increasing the speed of color adjustment of the display screen when color adjustment such as white point adjustment is performed. In addition, in the image display system 1 of FIG. 6, the display data processing device 12 is installed as an independent computer system, but it may instead be included in either the video receiving unit 11 or the liquid crystal panel 13 shown in FIG. 1. Control may then be performed to cause the display data processing device 12 to realize the functions of the first gamma correction, the video gain correction, and the second gamma correction. The "computer system" herein is assumed to include an OS and hardware such as peripheral devices. Although the embodiment of the present invention has been described in detail with reference to the drawings, the specific configuration is not limited to this embodiment, and includes designs within a range not departing from the gist of the present invention. INDUSTRIAL APPLICABILITY The image display system and image display method described above are effective in realizing a configuration that suppresses the display image deterioration of Patent Documents 1, 2, and 3 when the first gamma correction and the second gamma correction are performed not only on liquid crystal panels but also on display devices such as CRTs, plasma displays, and projectors. REFERENCE SIGNS LIST 1 Image display system; 11 Video receiving unit; 12 Display data processing device; 13 Liquid crystal panel; 121 First gamma correction unit; 122 Video gain correction unit; 123 Second gamma correction unit | 36,840
11862119 | DETAILED DESCRIPTION Hereinafter, embodiments of the present disclosure will be described in detail with reference to the accompanying drawings. The disclosure is merely an example, and the present disclosure naturally encompasses appropriate modifications that maintain the gist of the invention and are easily conceivable by those skilled in the art. To further clarify the description, the width, thickness, shape, and the like of each component may be illustrated schematically in the drawings as compared with the actual aspect. However, this is merely an example, and the interpretation of the present disclosure is not limited thereto. The same elements as those described with reference to previous drawings are denoted by the same reference numerals throughout the present specification and the drawings, and their detailed description may be omitted as appropriate. In this disclosure, when an element is described as being "on" another element, the element can be directly on the other element, or there can be one or more elements between the element and the other element. First Embodiment FIG. 1 is a schematic circuit diagram illustrating the main configuration of a display device 100. The display device 100 includes a display panel module DPM and an image signal controller 70. The display panel module DPM includes a display panel P and a light source device L. The display panel P includes a display area 7, a signal output circuit 8, a scanning circuit 9, a VCOM drive circuit 10, a timing controller 13, and a power supply circuit 14. Hereinafter, the surface of the display panel P corresponding to the display area 7 is referred to as the display surface, and the other surface is referred to as the rear surface. When an object is described as being located on a lateral side of the display device 100, the object is located in a direction intersecting with (for example, orthogonal to) the facing direction in which the display surface and the rear surface face each other. In the display area 7, a plurality of pixels Pix are disposed in a matrix (row-column configuration). Each of the pixels Pix includes a switching element 1 and two electrodes. In FIG. 1 and FIG. 2, which will be described below, a pixel electrode 2 and a common electrode 6 are illustrated as the two electrodes. FIG. 2 is a schematic sectional view of the display panel P. The display panel P includes two substrates facing each other and liquid crystals 3 sealed between the two substrates. Hereinafter, one of the two substrates is referred to as a first substrate 30, and the other substrate is referred to as a second substrate 20. The first substrate 30 includes a light-transmitting glass substrate 35, the pixel electrodes 2 layered on the second substrate 20 side of the glass substrate 35, and an insulation layer 55 layered on the second substrate 20 side so as to cover the pixel electrodes 2. A pixel electrode 2 is individually provided for each pixel Pix. The second substrate 20 includes a light-transmitting glass substrate 21, the common electrode 6 layered on the first substrate 30 side of the glass substrate 21, and an insulation layer 56 layered on the first substrate 30 side so as to cover the common electrode 6. The common electrode 6 is shared by the pixels Pix and is formed in a plate shape or a film shape. The liquid crystals 3 in the first embodiment are polymer-dispersed liquid crystals. More specifically, the liquid crystals 3 include bulk 51 and fine particles 52.
The orientation of the fine particles 52 changes in accordance with the potential difference between the pixel electrode 2 and the common electrode 6 in the bulk 51. By controlling the potential of the pixel electrode 2 individually for each pixel Pix, at least one of the degree of light transmission and the degree of dispersion is controlled for each pixel Pix. In the first embodiment described with reference to FIG. 2, the pixel electrodes 2 face the common electrode 6 with the liquid crystals 3 interposed therebetween. However, the display panel P may instead be configured such that the pixel electrodes 2 and the common electrode 6 are provided on a single substrate, and the orientations of the liquid crystals 3 are controlled by the electric field generated between the pixel electrodes 2 and the common electrode 6. Next, a mechanism for controlling the potentials of the pixel electrode 2 and the common electrode 6 will be described. As illustrated in FIG. 1, the switching element 1 is, for example, a switching element using a semiconductor, such as a thin film transistor (TFT). One of the source and the drain of the switching element 1 is coupled to one of the two electrodes (the pixel electrode 2). The other of the source and the drain of the switching element 1 is coupled to a signal line 4. The gate of the switching element 1 is coupled to a scanning line 5. Under the control of the scanning circuit 9, the scanning line 5 supplies a potential for opening and closing the path between the source and the drain of the switching element 1. The scanning circuit 9 controls this potential. In the example illustrated in FIG. 1, a plurality of signal lines 4 are aligned in one alignment direction (the row direction) of the pixels Pix. The signal lines 4 extend along the other alignment direction (the column direction) of the pixels Pix. Each of the signal lines 4 is shared by the switching elements 1 of the pixels Pix aligned in the column direction. A plurality of scanning lines 5 are aligned along the column direction. The scanning lines 5 extend along the row direction. Each of the scanning lines 5 is shared by the switching elements 1 of the pixels Pix aligned in the row direction. In the description of the first embodiment, the extending direction of the scanning lines 5 is referred to as the X direction, and the direction in which the scanning lines 5 are aligned is referred to as the Y direction. In FIG. 1, among the scanning lines 5, a scanning line 5a is disposed at one end in the Y direction, and a scanning line 5b is disposed at the other end. The common electrode 6 is coupled to the VCOM drive circuit 10. The VCOM drive circuit 10 applies a potential that functions as a common potential to the common electrode 6. At the timing when the scanning circuit 9 applies a potential that functions as a drive signal to the scanning line 5, the signal output circuit 8 outputs a gradation signal, which will be described below, to the signal line 4. Thus, the liquid crystal (the fine particles 52), serving as a storage capacitor and a capacitive load formed between the pixel electrode 2 and the common electrode 6, is charged. Consequently, the voltage between the pixel Pix and the common electrode 6 is set to a voltage corresponding to the gradation signal. When the drive signal is no longer supplied, the liquid crystal (the fine particles 52) serving as the storage capacitor and the capacitive load holds the gradation signal. The scattering degree of the liquid crystal (the fine particles 52) is controlled in accordance with the voltage of each pixel Pix and the voltage of the common electrode 6.
For example, the liquid crystals 3 may be polymer-dispersed liquid crystals in which the scattering degree increases as the voltage between each pixel Pix and the common electrode 6 increases, or polymer-dispersed liquid crystals in which the scattering degree increases as that voltage decreases. As illustrated in FIG. 2, the light source device L is disposed on the lateral side of the display panel P. The light source device L includes a light source 11 and a light source drive circuit 12. The light source 11 includes a first light source 11R that emits red (R) light, a second light source 11G that emits green (G) light, and a third light source 11B that emits blue (B) light. Each of the first light source 11R, the second light source 11G, and the third light source 11B emits light under the control of the light source drive circuit 12. For example, the first light source 11R, the second light source 11G, and the third light source 11B in the first embodiment are light sources using a light emitting element such as a light emitting diode (LED). However, they are not limited thereto, and any light source is applicable as long as its light emission timing can be controlled. Under the control of the timing controller 13, the light source drive circuit 12 controls the light emission timing of the first light source 11R, the second light source 11G, and the third light source 11B. In the first embodiment, red (R) is the first primary color, green (G) is the second primary color, and blue (B) is the third primary color. When light is emitted from the light source 11, the display area 7 is illuminated by the light emitted from one side surface side in the Y direction. The pixels Pix transmit or scatter the light emitted from the one side surface side in the Y direction. The scattering degree depends on the state of the liquid crystals 3 controlled in accordance with the gradation signals. The timing controller 13 is a circuit that controls the operation timings of the signal output circuit 8, the scanning circuit 9, the VCOM drive circuit 10, and the light source drive circuit 12. In the first embodiment, the timing controller 13 operates based on a signal input via the image signal controller 70. The image signal controller 70 outputs a signal, based on an input signal I (see FIG. 4) from the outside of the display device 100, to the signal output circuit 8 and the timing controller 13. When a pixel signal is a signal indicating the gradation values of RGB assigned to a certain pixel Pix, the input signal I that is input to the image signal controller 70 to output a frame image is a set of pixel signals corresponding to the pixels Pix provided in the display area 7. The image signal controller 70 may be provided on one of the substrates forming the display panel P, may be implemented on a flexible printed substrate provided with wiring extending from the display panel P or the like, or may be provided outside the display panel P. FIG. 3 is a time chart illustrating an example of FSC control. As illustrated in FIG. 3, the first embodiment employs the time-division field sequential color (FSC) method, in which each frame period F, such as the frame periods Fn and F(n+1), includes subframe periods SF1, SF2, . . . , SFm, and light of a different color is emitted in the lighting period Br of each of the subframe periods SF1, SF2, . . . , SFm. Hereinafter, the frame periods Fn, F(n+1), . . .
are collectively referred to as the frame period F when they are not distinguished from one another. Each of the frame periods Fn, F(n+1), . . . is a period during which one frame image is displayed. The frame period F(n+1) is the frame period subsequent to the frame period Fn, where n is a natural number. The subframe periods SF1, SF2, . . . , SFm are collectively referred to as the subframe period SF when they are not distinguished from one another, where m is a natural number of 4 or more. More specifically, in the first embodiment, a gradation signal corresponding to the lighting period Br is written in each of the subframe periods SF1, SF2, . . . , SFm included in the frame period F. Assume that the color component reproduced by a signal supplied to one pixel Pix in the frame period Fn is (R, G, B) = (r0, g0, b0) when expressed by gradation values of RGB. The value r0 represents the gradation value of red (R) in an input signal including information on the gradation values of RGB and functions as the red (R) component of an image to be displayed in the display area 7. The value g0 represents the gradation value of green (G) in such an input signal and functions as the green (G) component of the image to be displayed in the display area 7. The value b0 represents the gradation value of blue (B) in such an input signal and functions as the blue (B) component of the image to be displayed in the display area 7. In this example, the value r0 can be divided into m components such that r0 = r1 + r2 + . . . + rm. The value g0 can be divided into m components such that g0 = g1 + g2 + . . . + gm. The value b0 can be divided into m components such that b0 = b1 + b2 + . . . + bm. Thus, for the one pixel Pix, a pixel signal that can be expressed as (R, G, B) = (r1, g1, b1) is supplied in the subframe period SF1. For the one pixel Pix, a pixel signal that can be expressed as (R, G, B) = (r(m−k), g(m−k), b(m−k)) is supplied in the subframe period SF(m−k), where k is an integer less than m. For example, when m = 4, the cases k = 3, k = 2, k = 1, and k = 0 are provided sequentially; the case k = 3 corresponds to the subframe period SF1 described above. For the one pixel Pix, a pixel signal that can be expressed as (R, G, B) = (rm, gm, bm) is supplied in the subframe period SFm. Consequently, a pixel signal corresponding to the same color components as (R, G, B) = (r0, g0, b0) can be given to the one pixel Pix in the frame period Fn. In this example, the case where m = 4, that is, the case where there are four subframe periods, will be described. In the case where m = 4, (R, G, B) = (r0, g0, b0) can be divided into (R, G, B) = (r1, g1, b1) to be supplied in the subframe period SF1, (R, G, B) = (r2, g2, b2) to be supplied in the subframe period SF2, (R, G, B) = (r3, g3, b3) to be supplied in the subframe period SF3, and (R, G, B) = (r4, g4, b4) to be supplied in the subframe period SF4. Assume that (R, G, B) = (r0, g0, b0) = (35, 40, 30), for example. White (W) light can be reproduced by additive color mixing of red (R), green (G), and blue (B). Among the color components of (R, G, B) = (r0, g0, b0) = (35, 40, 30) described above, the color components that can be extracted as white are (R, G, B) = (30, 30, 30). Thus, for example, by setting (R, G, B) = (r2, g2, b2) = (30, 30, 30), it is possible to supply a pixel signal corresponding to the color components that can be extracted as white in the subframe period SF2.
By emitting white (W) light during the lighting period Br in the subframe period SF2 to the pixel supplied with such a pixel signal, white (W) can be displayed and output. More specifically, by turning ON the first light source 11R, the second light source 11G, and the third light source 11B, the light source device L can emit white (W) light. The color components obtained by subtracting the color components that can be extracted as white from the color components of (R, G, B) = (r0, g0, b0) = (35, 40, 30) described above are (R, G, B) = (5, 10, 0). Thus, for example, by setting (R, G, B) = (r1, g1, b1) = (5, 0, 0), it is possible to supply a pixel signal corresponding to the color component of red (R) in the subframe period SF1. By emitting red (R) light toward the display panel P in the lighting period Br in the subframe period SF1 for the pixel supplied with such a pixel signal, red (R) can be displayed and output. More specifically, by turning ON the first light source 11R, the light source device L can emit red (R) light. By setting (R, G, B) = (r3, g3, b3) = (0, 10, 0), it is possible to supply a pixel signal corresponding to the color component of green (G) in the subframe period SF3. By emitting green (G) light toward the display panel P in the lighting period Br in the subframe period SF3 for the pixel supplied with such a pixel signal, green (G) can be displayed and output. More specifically, by turning ON the second light source 11G, the light source device L can emit green (G) light. In this example, the output corresponding to the color components of (R, G, B) = (r0, g0, b0) = (35, 40, 30) is completed in the subframe periods SF1, SF2, and SF3, so (R, G, B) = (r4, g4, b4) = (0, 0, 0). As far as this pixel Pix is concerned, there is no need to emit blue (B) light toward the display panel P in the lighting period Br in the subframe period SF4. On the other hand, this example merely describes the signal supplied to one pixel Pix, and color reproduction corresponding to blue (B) may need to be performed for other pixels Pix. Thus, in this example, blue (B) light is emitted in the lighting period Br in the subframe period SF4. In this manner, each of the signals supplied to the respective pixels Pix in a frame period is divided into m pieces, each of which is supplied in its subframe period SF. Light corresponding to the supplied pixel signals is emitted to the display panel P from the light source device L. Thus, the display panel P can perform display output corresponding to the input image. During the writing period Wr in each subframe period SF, the TFT provided in each pixel Pix is turned ON by a drive signal from the scanning circuit 9 to the scanning line 5, and signal control is performed to write a gradation signal to the pixel Pix through a gradation signal from the signal output circuit 8 to the signal line 4. Thus, the gradation signals for the pixels Pix in a pixel row that are coupled to a common scanning line 5, and that are simultaneously turned ON in accordance with the drive signal on that scanning line 5, are written at the same timing. When the image written to the pixel row coupled to a common scanning line 5 in this manner is referred to as a line image, the frame image includes a plurality of line images aligned along the alignment direction of the scanning lines 5.
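As an illustration (not part of the patent disclosure), the worked example above for m = 4 can be sketched as follows. The fixed subframe-to-color mapping used here (SF1 = red, SF2 = white, SF3 = green, SF4 = blue) follows this example only; in the embodiment it is chosen per frame by the determiner described later, and the helper name is illustrative.

```python
# Minimal sketch of the m = 4 subframe decomposition worked through above:
# extract the white component first, then put each remaining primary in
# its own subframe.

def decompose(r0, g0, b0):
    w = min(r0, g0, b0)              # white component, e.g. (30, 30, 30)
    return {
        "SF1": (r0 - w, 0, 0),       # red remainder, lit in red
        "SF2": (w, w, w),            # white part, lit in white
        "SF3": (0, g0 - w, 0),       # green remainder, lit in green
        "SF4": (0, 0, b0 - w),       # blue remainder, lit in blue
    }

print(decompose(35, 40, 30))
# {'SF1': (5, 0, 0), 'SF2': (30, 30, 30), 'SF3': (0, 10, 0), 'SF4': (0, 0, 0)}
```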
The line image is an image displayed and output by the pixels Pix aligned along the extending direction of the scanning lines 5 (the alignment direction of the signal lines 4). Hereinafter, unless otherwise specified, the term "line" refers to a pixel row that outputs a line image. The time charts of FIG. 3, FIG. 9 (described later), and other figures illustrate, as an example, the gradation signal control related to the line images output to a display area of seven lines. For example, in FIG. 3, a drive signal is output from the scanning circuit 9 to the scanning lines 5 such that the scanning lines 5 are sequentially scanned from the scanning line 5 located on one end side in the Y direction (for example, the scanning line 5a illustrated in FIG. 1) toward the scanning line 5 located on the other end side (for example, the scanning line 5b illustrated in FIG. 1) during the writing period Wr in each subframe period SF. Consequently, for the display area of seven lines illustrated in FIG. 3, line images SL11, SL21, SL31, SL41, SL51, SL61, and SL71 are sequentially written during the writing period Wr in the subframe period SF1. Line images SL12, SL22, SL32, SL42, SL52, SL62, and SL72 are likewise sequentially written during the writing period Wr in the subframe period SF2, and line images SL1m, SL2m, SL3m, SL4m, SL5m, SL6m, and SL7m are sequentially written during the writing period Wr in the subframe period SFm. Although not explicitly illustrated, line images SL1(m−k), SL2(m−k), SL3(m−k), SL4(m−k), SL5(m−k), SL6(m−k), and SL7(m−k) are sequentially written during the writing period Wr in the subframe period SF(m−k) prior to the subframe period SFm. An example of the relation, for m = 4, between the above line images and (R, G, B) = (r0, g0, b0) according to the series of processing described above will now be described. The pixel signal of (R, G, B) = (r1, g1, b1) is included in one of the line images SL11, SL21, SL31, SL41, SL51, SL61, and SL71 written during the writing period Wr in the subframe period SF1. The pixel signal of (R, G, B) = (r2, g2, b2) is included in one of the line images SL12, SL22, SL32, SL42, SL52, SL62, and SL72 written during the writing period Wr in the subframe period SF2. The pixel signal of (R, G, B) = (r3, g3, b3) is included in one of the line images SL13, SL23, SL33, SL43, SL53, SL63, and SL73 written during the writing period Wr in the subframe period SF3. The pixel signal of (R, G, B) = (r4, g4, b4) is included in one of the line images SL14, SL24, SL34, SL44, SL54, SL64, and SL74 written during the writing period Wr in the subframe period SF4. The configuration and control of the seven lines in FIG. 3, FIG. 9 (described later), and other figures are merely examples for ease of understanding and are not intended to limit the number of lines in the display area 7 to seven. The number of lines in the display area 7 may be any plural number, and may be six or less or eight or more. FIG. 4 is a block diagram illustrating an example of the main configuration of the image signal controller 70. The image signal controller 70 is an integrated circuit including a subframe lighting color configuration determiner 71, a subframe display order determiner 72, a subframe lighting color transition controller 73, a latest subframe lighting color configuration storage 74, a liquid crystal control signal generator 75, and a light source control signal generator 76.
The subframe lighting color configuration determiner 71 determines the color to be output in the subframe period SF other than the subframe periods SF in which the primary color components are output. The subframe lighting color configuration determiner 71 in the first embodiment divides the color components indicated by the pixel signal supplied to each of the pixels corresponding to the input frame image into a white color component, a mixed color component, and a primary color component, and then determines the color component having the larger proportion other than the primary color components. The white color component is a color component that can be output as white (W). The mixed color component is a color component that can be output as a mixed color of the primary colors, that is, a color component obtained by mixing two or more of red (R), green (G), and blue (B), which are the primary colors in the embodiment. More specifically, the mixed color component in the embodiment is a cyan (C) component, a magenta (M) component, or a yellow (Y) component. Cyan (C) is the complementary color of red (R), magenta (M) is the complementary color of green (G), and yellow (Y) is the complementary color of blue (B). The primary color component is a color component required for outputting a primary color. The primary color component in the first embodiment is one of the color components of red (R), green (G), and blue (B), and is a color component that can be converted into neither white (W) nor a mixed color. FIG. 5 is a graph illustrating an example of the color components indicated by a pixel signal supplied to a certain pixel. The vertical axis of the graph in FIG. 5 indicates the magnitude of the gradation values. As the gradation value increases, the pixel Pix is controlled such that the luminance of light corresponding to the color component increases. The bars labeled R, G, and B along the horizontal axis of the graph in FIG. 5 correspond to the color components of red (R), green (G), and blue (B); that is, the height of each bar indicates the gradation value of red (R), green (G), or blue (B). In the example illustrated in FIG. 5, the gradation value of red (R) corresponds to the height P1, the gradation value of green (G) corresponds to the sum of the heights P1, P2, and P3, and the gradation value of blue (B) corresponds to the sum of the heights P1 and P2. When the relation among the heights P1, P2, and P3 is expressed with inequality signs, P2 > P1 > P3 holds. FIG. 6 is a graph illustrating an example in which the color components in FIG. 5 are divided into a white color component, a mixed color component, and a primary color component. The white color component is the color component of white (W), produced by adding red (R), green (G), and blue (B) components whose gradation values are equal. In the example illustrated in FIG. 5, the gradation value of red (R) corresponds only to the height P1 and is lower than the gradation values of green (G) and blue (B).
Thus, the portion that can be converted to the white color component among the color components illustrated in FIG. 5 is the portion corresponding to the height P1 in each of the color components of red (R), green (G), and blue (B). The remaining color components obtained by subtracting the white color component from the color components illustrated in FIG. 5 are the color component of green (G) and the color component of blue (B). In this example, in each of the color components of green (G) and blue (B), the portion corresponding to the height P1 is converted to the white color component. Thus, the color component of green (G) that is not converted to the white color component corresponds to the gradation value given by the sum of the heights P2 and P3, and the color component of blue (B) that is not converted to the white color component corresponds to the gradation value given by the height P2. Of the color component of green (G) corresponding to the sum of the heights P2 and P3 and the color component of blue (B) corresponding to the height P2, the portion that can be converted to a mixed color component is the color component of cyan (C) corresponding to the gradation value given by the height P2, as illustrated in FIG. 6. The color component obtained by subtracting the white color component and the mixed color component from the color components illustrated in FIG. 5 is the color component of green (G) corresponding to the gradation value given by the height P3. Thus, in the examples illustrated in FIG. 5 and FIG. 6, the gradation value of white (W) serving as the white color component corresponds to the height P1, the gradation value of cyan (C) serving as the mixed color component corresponds to the height P2, and the gradation value of green (G) serving as the primary color component corresponds to the height P3. The subframe lighting color configuration determiner 71 in the first embodiment determines the color component having the larger proportion other than the primary color components, among the color components reproduced by the display output of a frame image corresponding to the input signal I, based on the concept described with reference to FIG. 5 and FIG. 6. That is, the subframe lighting color configuration determiner 71 divides the gradation values of red (R), green (G), and blue (B) supplied to each of the pixels Pix indicated by the input signal I into a white color component, a mixed color component, and a primary color component, and determines the color component having the larger proportion other than the primary color components. The subframe lighting color configuration determiner 71 in the first embodiment determines the color component having the largest proportion other than the primary color components in each frame image. For example, in a frame image in which the pixel signal described with reference to FIG. 5 and FIG. 6 is supplied to all pixels Pix, P2 > P1 > P3 holds, so the largest such color component is cyan (C). Thus, in this case, the subframe lighting color configuration determiner 71 in the first embodiment determines cyan (C) as the color component having the largest proportion other than the primary color components in the frame image.
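As an illustration (not part of the patent disclosure), the white / mixed / primary decomposition described above can be sketched for one pixel as follows. The helper name and the example heights (P1 = 10, P2 = 20, P3 = 5) are assumptions made for illustration.

```python
# Minimal sketch of splitting one pixel's (R, G, B) gradation values into
# a white component, mixed color components, and primary color components.

def split_components(r, g, b):
    w = min(r, g, b)                 # white component (height P1)
    r, g, b = r - w, g - w, b - w    # at most two channels remain nonzero
    c = min(g, b)                    # cyan:    overlap of G and B
    m = min(r, b)                    # magenta: overlap of R and B
    y = min(r, g)                    # yellow:  overlap of R and G
    mixed = {"C": c, "M": m, "Y": y}
    primary = {"R": r - m - y, "G": g - c - y, "B": b - c - m}
    return w, mixed, primary

# The FIG. 5 example, assuming P1 = 10, P2 = 20, P3 = 5:
# R = 10, G = 10 + 20 + 5 = 35, B = 10 + 20 = 30.
print(split_components(10, 35, 30))
# -> (10, {'C': 20, 'M': 0, 'Y': 0}, {'R': 0, 'G': 5, 'B': 0})
```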
The subframe lighting color configuration determiner 71 in the first embodiment treats the determined color as the color to be output in the subframe period SF other than the subframe periods SF in which the primary color components are output. The subframe lighting color configuration determiner 71 described above determines the color corresponding to the color component having the largest proportion other than the primary color components in the frame image as the color to be output in that subframe period SF. However, this is merely one example of the operation of the subframe lighting color configuration determiner 71 in the embodiment, and its function is not limited thereto. The subframe lighting color configuration determiner 71 may determine the color to be output in the subframe period SF using another method. More specifically, the subframe lighting color configuration determiner 71 may set thresholds for the gradation values of the mixed color components and the gradation value of the white color component, and determine the "color component having the largest proportion other than the primary color components in each frame image" in accordance with the result of comparing the gradation values with the thresholds. In this case, the color that is output in the other subframe period corresponds to the color component determined, based on the results of comparison with predetermined thresholds, from among the mixed color components and the white color component included in the frame image to be displayed in the frame period including that other subframe period. More specifically, a threshold is individually set for each of cyan (C), magenta (M), yellow (Y), and white (W). In this example, the threshold of yellow (Y) is smaller than the thresholds of cyan (C) and magenta (M), and the threshold of white (W) is also smaller than the thresholds of cyan (C) and magenta (M). The subframe lighting color configuration determiner 71 compares the gradation values of the mixed color components (cyan (C), magenta (M), and yellow (Y)) and the white color component with the thresholds individually set for the respective color components. The subframe lighting color configuration determiner 71 counts, for each of the mixed color components and the white color component, the number of pixels in the frame image whose component has a gradation value equal to or higher than the threshold (or higher than the threshold). The subframe lighting color configuration determiner 71 then determines the component with the largest count in the frame image as the "color component having the largest proportion other than the primary color components in each frame image". The subframe display order determiner 72 sets the color component other than the primary color components, determined by the subframe lighting color configuration determiner 71, as one of the colors to be output in the subframe periods SF. In the first embodiment, the subframe display order determiner 72 sets the colors to be output in three of the m subframe periods SF in one frame period F to the first primary color, the second primary color, and the third primary color.
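As an illustration (not part of the patent disclosure), the threshold-and-count variant described above can be sketched as follows, building on split_components from the previous sketch. The threshold values are assumptions chosen only so that yellow (Y) and white (W) have lower thresholds than cyan (C) and magenta (M), as the text states.

```python
# Minimal sketch of the threshold-based determination of the dominant
# non-primary component of a frame image.

THRESHOLDS = {"C": 40, "M": 40, "Y": 25, "W": 25}   # illustrative values

def dominant_non_primary(frame_pixels):
    """frame_pixels: iterable of (r, g, b) tuples. Returns the non-primary
    component ('C', 'M', 'Y', or 'W') with the largest pixel count."""
    counts = {"C": 0, "M": 0, "Y": 0, "W": 0}
    for r, g, b in frame_pixels:
        w, mixed, _ = split_components(r, g, b)
        for name, value in dict(mixed, W=w).items():
            if value >= THRESHOLDS[name]:
                counts[name] += 1
    return max(counts, key=counts.get)

# A frame dominated by cyan: each pixel's cyan level (45) clears its
# threshold while its white level (5) does not.
print(dominant_non_primary([(5, 60, 50)] * 100))   # -> 'C'
```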
More specifically, the subframe display order determiner 72 sets the colors to be output in three of the m subframe periods SF in one frame period F to red (R), green (G), and blue (B). Hereinafter, the case where m = 4 will be described as an example. That is, the subframe display order determiner 72 in this example sets the colors to be output in three of the m (m = 4) subframe periods SF to red (R), green (G), and blue (B), and sets the color to be output in the remaining subframe period SF to the color component other than the primary color components determined by the subframe lighting color configuration determiner 71. FIG. 7 is a diagram illustrating the combinations of colors of the subframe periods SF that may be generated when m = 4. In the display panel module DPM of the first embodiment described with reference to FIG. 1 and FIG. 2, the color of an image output in a subframe period SF depends on the color of light from the light source device L. Thus, in FIG. 7 and other figures, the color of the subframe period SF, that is, the color of the light output from the light source device L in the subframe period SF, is referred to as the subframe-period lighting color. In the first embodiment, when the first primary color is red (R), the second primary color is green (G), and the third primary color is blue (B), the color components other than the primary color components are cyan (C), magenta (M), yellow (Y), and white (W). Thus, the combination of colors determined by the subframe display order determiner 72 is pattern 1, pattern 2, pattern 3, or pattern 4 illustrated in FIG. 7. In pattern 1, the first color is red (R), the second color is yellow (Y), the third color is green (G), and the fourth color is blue (B). In pattern 2, the first color is red (R), the second color is green (G), the third color is cyan (C), and the fourth color is blue (B). In pattern 3, the first color is red (R), the second color is green (G), the third color is blue (B), and the fourth color is magenta (M). In pattern 4, the first color is red (R), the second color is white (W), the third color is green (G), and the fourth color is blue (B). The subframe lighting color transition controller 73 controls the color transition order of the subframe periods SF of each frame period F based on the combinations of colors of the subframe periods SF determined by the subframe display order determiner 72. First, the case where there is no constraint in determining the color transition order of the subframe periods SF will be described. For example, the first frame image displayed in the display area 7 after the start of the operation of the display device 100 is not subject to constraints based on its relation with previously displayed frame images. Thus, when a frame image is displayed first in the display area 7 after the start of the operation of the display device 100, there is no constraint. In addition, when a display output does not fall under the constraints related to the contents stored in the latest subframe lighting color configuration storage 74, described later, there is no constraint on the display output.
When there is no constraint, the subframe lighting color transition controller 73 uses the combination of colors determined by the subframe display order determiner 72, as it is, as the combination of colors of the subframe periods SF included in the frame period F. More specifically, when there is no constraint, the subframe lighting color transition controller 73 sets, for example, the first color of the pattern employed by the subframe display order determiner 72 from among pattern 1 to pattern 4 illustrated in FIG. 7 as the color of the subframe period SF1, the second color of the pattern as the color of the subframe period SF2, the third color as the color of the subframe period SF3, and the fourth color as the color of the subframe period SF4. For example, when the subframe display order determiner 72 determines, based on the example described with reference to FIG. 5 and FIG. 6, to use pattern 2 illustrated in FIG. 7 as the combination of colors of the subframe periods SF, the subframe lighting color transition controller 73 sets the color of the subframe period SF1 to red (R), the color of the subframe period SF2 to green (G), the color of the subframe period SF3 to cyan (C), and the color of the subframe period SF4 to blue (B). In this example, the output order of the colors of the subframe periods SF employed by the subframe lighting color transition controller 73 is the order of colors in the clockwise direction OD1 or the counterclockwise direction OD2 of a hue circle 200 (see FIG. 16). In the hue circle 200, starting from red (R), the colors are arranged in the order red (R), yellow (Y), green (G), cyan (C), blue (B), and magenta (M) in the counterclockwise direction OD2. When the color of the subframe period SF1 is red (R), the color of the subframe period SF2 is green (G), the color of the subframe period SF3 is cyan (C), and the color of the subframe period SF4 is blue (B) as described above, the output order of the colors of the subframe periods SF follows the order of colors in the counterclockwise direction OD2 of the hue circle 200 (FIG. 16). In this manner, the subframe lighting color transition controller 73 determines the output order of the colors of the subframe periods SF in the frame period F such that the colors are output in the order of the clockwise direction OD1 or the counterclockwise direction OD2 of the hue circle 200 (see FIG. 16). In this example, the color of the subframe period SF determined by the subframe lighting color configuration determiner 71 is cyan (C); however, even when it is another color, the subframe lighting color transition controller 73 determines the output order of the colors of the subframe periods SF using the same concept. When there is no constraint in determining the color transition order of the subframe periods SF, and the subframe display order determiner 72 determines to employ pattern 1, the subframe lighting color transition controller 73 sets the color of the subframe period SF1 to red (R), the color of the subframe period SF2 to yellow (Y), the color of the subframe period SF3 to green (G), and the color of the subframe period SF4 to blue (B).
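As an illustration (not part of the patent disclosure), ordering the four subframe colors along the hue circle can be sketched as follows. Sorting red, green, and blue plus the one extra color by hue-circle position reproduces patterns 1 to 3; white, which has no hue, is placed after red here to match pattern 4, which is an assumption for illustration.

```python
# Minimal sketch of arranging the subframe-period lighting colors along
# the hue circle 200 (counterclockwise from red).

HUE_ORDER = ["R", "Y", "G", "C", "B", "M"]

def subframe_colors(extra):
    """extra: the non-primary color ('C', 'M', 'Y', or 'W') chosen by the
    determiner. Returns the four subframe-period lighting colors."""
    if extra == "W":
        return ["R", "W", "G", "B"]           # pattern 4 (white has no hue)
    return sorted(["R", "G", "B", extra], key=HUE_ORDER.index)

print(subframe_colors("Y"))   # ['R', 'Y', 'G', 'B']  (pattern 1)
print(subframe_colors("C"))   # ['R', 'G', 'C', 'B']  (pattern 2)
print(subframe_colors("M"))   # ['R', 'G', 'B', 'M']  (pattern 3)
```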
When the subframe display order determiner 72 determines to employ pattern 3, the subframe lighting color transition controller 73 sets the color of the subframe period SF1 to red (R), the color of the subframe period SF2 to green (G), the color of the subframe period SF3 to blue (B), and the color of the subframe period SF4 to magenta (M). When the subframe display order determiner 72 determines to employ pattern 4, the subframe lighting color transition controller 73 sets the color of the subframe period SF1 to red (R), the color of the subframe period SF2 to white (W), the color of the subframe period SF3 to green (G), and the color of the subframe period SF4 to blue (B). FIG. 8 is a diagram illustrating an example of the subframe-period lighting colors of five consecutive frame periods F. In the description with reference to FIG. 8, for example, the input signal I corresponding to pattern 2 is input for the (n+1)-th frame period F(n+1) after pattern 1 has been employed until the start of the n-th frame period Fn. In this case, in accordance with the input of the input signal I corresponding to the n-th frame period Fn, the subframe lighting color configuration determiner 71 determines yellow (Y) as the color component having the larger proportion other than the primary color components, and the subframe display order determiner 72 sets yellow (Y) as one of the colors to be output in the subframe periods SF. In accordance with the input of the input signal I corresponding to the (n+1)-th frame period F(n+1), the subframe lighting color configuration determiner 71 determines cyan (C) as the color component having the larger proportion other than the primary color components, and the subframe display order determiner 72 sets cyan (C) as one of the colors to be output in the subframe periods SF. Consequently, pattern 2 is employed for the (n+1)-th frame period F(n+1) after pattern 1 has been employed until the start of the n-th frame period Fn. In this example, assume that pattern 1 is employed until the start of the n-th frame period Fn and that the display output of the colors of the subframe periods SF corresponding to pattern 1 is not subject to the above constraints. If the colors of the subframe periods SF corresponding to pattern 2 were output in the (n+1)-th frame period F(n+1) immediately afterwards, the subframe period SF in which green (G) is output would change from the subframe period SF3 in the n-th frame period Fn to the subframe period SF2 in the (n+1)-th frame period F(n+1). Such a transition of colors in the subframe periods SF may be perceived as a flicker in the image by a user viewing the output of the frame periods F. Hence, when the pattern employed by the subframe display order determiner 72 changes between the n-th frame period Fn and the (n+1)-th frame period F(n+1), the subframe lighting color transition controller 73 in the first embodiment gradually changes the color of any subframe period SF whose color changes before and after the change in pattern.
For example, when the subframe display order determiner 72 employs pattern 1 for the n-th frame period Fn and pattern 2 for the (n+1)-th frame period F(n+1), there is no change in the output order of red (R), the color of the subframe period SF1, or of blue (B), the color of the subframe period SF4, before and after the change in pattern. Thus, in this case, the subframe period SF1 and the subframe period SF4 are not subframe periods SF whose colors change before and after the change in pattern. On the other hand, the color of the subframe period SF2 is yellow (Y) in pattern 1 but green (G) in pattern 2, and the color of the subframe period SF3 is green (G) in pattern 1 but cyan (C) in pattern 2. Thus, in this case, the subframe period SF2 and the subframe period SF3 are subframe periods SF whose colors change before and after the change in pattern. FIG. 8 illustrates an example of the subframe-period lighting colors in the frame periods F when the color of a subframe period SF that changes before and after the change in pattern is changed gradually. As in the example described above, when the subframe display order determiner 72 employs pattern 1 for the n-th frame period Fn and pattern 2 for the (n+1)-th frame period F(n+1), the subframe lighting color transition controller 73 generates frame periods F assigned intermediate patterns for gradually changing the color of each subframe period SF whose color changes before and after the change in pattern, before the frame period F during which the colors of the subframe periods SF corresponding to pattern 2 are output, that is, between the frame period F corresponding to pattern 1 and the frame period F corresponding to pattern 2. More specifically, as illustrated in FIG. 8, for the n-th frame period Fn, the subframe lighting color transition controller 73 follows pattern 1 and sets the color of the subframe period SF1 to red (R), the color of the subframe period SF2 to yellow (Y), the color of the subframe period SF3 to green (G), and the color of the subframe period SF4 to blue (B). Next, for the (n+1)-th frame period F(n+1), the subframe lighting color transition controller 73 employs an intermediate pattern 1 in which the color of the subframe period SF1 is set to red (R), the color of the subframe period SF2 is set to the "color obtained by adding green to yellow (Y+G)", the color of the subframe period SF3 is set to green (G), and the color of the subframe period SF4 is set to blue (B). In this example, the "color obtained by adding β to α" is, for example, the color in which the ratio between the color component of α and the color component of β is 1:1. However, it is not limited thereto; the "color obtained by adding β to α" may be any mixed color of α and β, and the ratio and the like may be changed as appropriate, where α and β are different colors. Next, for the (n+2)-th frame period F(n+2), the subframe lighting color transition controller 73 employs an intermediate pattern 2 in which the color of the subframe period SF1 is set to red (R), the color of the subframe period SF2 is set to green (G), the color of the subframe period SF3 is set to green (G), and the color of the subframe period SF4 is set to blue (B).
Next, for the (n+3)-th frame period F(n+3), the subframe lighting color transition controller 73 employs an intermediate pattern 3 in which the color of the subframe period SF1 is set to red (R), the color of the subframe period SF2 is set to green (G), the color of the subframe period SF3 is set to the “color obtained by adding cyan to green (G+C)”, and the color of the subframe period SF4 is set to blue (B). Then, for the (n+4)-th frame period F(n+4), the subframe lighting color transition controller 73 sets the color of the subframe period SF1 to red (R), the color of the subframe period SF2 to green (G), the color of the subframe period SF3 to cyan (C), and the color of the subframe period SF4 to blue (B), thereby causing the pattern 2 to be in an employed state. That is, in this example, when the subframe display order determiner 72 employs the pattern 2 for the (n+1)-th frame period F(n+1), the frame period in which the colors of the subframe periods SF corresponding to the pattern 2 are actually reflected on the output of the display panel P is the (n+4)-th frame period F(n+4). The pattern is changed from the pattern 1 of the frame period Fn through the intermediate pattern 1 of the frame period F(n+1) to the intermediate pattern 2 of the frame period F(n+2). Thus, the color of the subframe period SF2 changes smoothly from yellow (Y) to green (G) via the “color obtained by adding green to yellow (Y+G)” over the three frame periods F. The pattern changes from the intermediate pattern 2 of the frame period F(n+2) through the intermediate pattern 3 of the frame period F(n+3) to the pattern 2 of the frame period F(n+4). Thus, the color of the subframe period SF3 changes smoothly from green (G) to cyan (C) via the “color obtained by adding cyan to green (G+C)” over the three frame periods F. In this manner, the color of the subframe period SF is changed smoothly between the frame periods F, whereby the occurrence of what is perceived as flicker on the image can be reduced. In this manner, in the first embodiment, when the mixed color component contained most in the color of each of the frame images to be successively displayed on the display panel P is transitioned from the first mixed color component to the second mixed color component, and the frame period F prior to the transition includes the subframe period SF in which the first mixed color component is output, the frame period F including the subframe period SF in which another color component, located between the first mixed color component and the second mixed color component in the hue circle, is output is generated before the frame period F that is subsequent to the transition and that includes the subframe period SF in which the second mixed color component is output. In the description with reference to FIG. 8 to FIG. 13, the first mixed color component is yellow (Y), and the second mixed color component is cyan (C). That is, in this example, the frame period prior to the transition is the frame period Fn, and the frame period subsequent to the transition is the frame period F(n+4). The frame periods F including the subframe period SF in which another color component is output are the frame period F(n+1), the frame period F(n+2), and the frame period F(n+3). The latest subframe lighting color configuration storage 74 stores data indicating the order of colors of the subframe periods SF in the frame period F employed in the past.
For example, the latest subframe lighting color configuration storage 74 includes a storage circuit, such as a static random access memory (SRAM), for storing the data. For example, when the subframe display order determiner 72 determines to employ the pattern 2 for the frame period F(n+1) in the example described above, the process related to the output in the frame period Fn, for which the pattern 1 is employed, has already been performed. Thus, in this example, the latest subframe lighting color configuration storage 74 stores the data corresponding to the pattern 1. A case where the constraint described above is imposed is a case where the colors of the subframe periods SF indicated by the data stored in the latest subframe lighting color configuration storage 74 differ from the colors of the subframe periods SF indicated in the pattern newly employed by the subframe display order determiner 72. In this case, as in the example described above, the color of a subframe period SF will differ between the successive frame periods F. Thus, in this case, the subframe lighting color transition controller 73 determines that the constraints are imposed, and generates the frame periods F each including the colors of the subframe periods SF to which an intermediate pattern is applied, before the frame period F including the colors of the subframe periods SF directly corresponding to the pattern employed by the subframe display order determiner 72, thereby smoothly changing the colors of the subframe periods SF. Under the control of the subframe lighting color transition controller 73, the liquid crystal control signal generator 75 generates a pixel signal for each pixel Pix and outputs the generated pixel signal to the display panel P. The pixel signal is output to the signal output circuit 8 (see FIG. 1), and transmitted to each pixel Pix under the control of the signal output circuit 8. Under the control of the subframe lighting color transition controller 73, the light source control signal generator 76 generates a control signal for controlling the operation of the light source device L such that the color of light from the light source device L in each subframe period SF becomes the subframe-period lighting color. The light source control signal generator 76 outputs the control signal to the light source device L. The first light source 11R, the second light source 11G, and the third light source 11B in the light source device L are turned ON in accordance with the control signal. An example of how the display panel module DPM is controlled by the liquid crystal control signal generator 75 and the light source control signal generator 76 under the control of the subframe lighting color transition controller 73 will be described with reference to FIG. 9 to FIG. 13.
FIG. 9 is a diagram illustrating an example of lighting control of the first light source 11R, the second light source 11G, and the third light source 11B during the frame period Fn. FIG. 10 is a diagram illustrating an example of lighting control of the first light source 11R, the second light source 11G, and the third light source 11B during the frame period F(n+1). FIG. 11 is a diagram illustrating an example of lighting control of the first light source 11R, the second light source 11G, and the third light source 11B during the frame period F(n+2). FIG. 12 is a diagram illustrating an example of lighting control of the first light source 11R, the second light source 11G, and the third light source 11B during the frame period F(n+3). FIG. 13 is a diagram illustrating an example of lighting control of the first light source 11R, the second light source 11G, and the third light source 11B during the frame period F(n+4). The examples of FIG. 9 to FIG. 13 correspond to the example in which the intermediate pattern 1, the intermediate pattern 2, and the intermediate pattern 3 are generated for the frame period F(n+1), the frame period F(n+2), and the frame period F(n+3), respectively, while the pattern is changed from the pattern 1 of the frame period Fn to the pattern 2 of the frame period F(n+4) as described with reference to FIG. 8. In the frame period Fn, the subframe lighting color transition controller 73 causes the liquid crystal control signal generator 75 and the light source control signal generator 76 to generate signals such that the display panel module DPM operates in accordance with the pattern 1. The light source control signal generator 76 generates a control signal such that the colors of light during the subframe periods SF in the frame period Fn become the colors of light corresponding to the subframe-period lighting colors in the pattern 1 illustrated in FIG. 8. More specifically, as illustrated in FIG. 9, the light source control signal generator 76 outputs a high (H) signal that turns ON the first light source 11R in the lighting period Br in the subframe period SF1 of the frame period Fn. Consequently, the color of light emitted from the light source device L in the subframe period SF1 becomes red (R). As illustrated in FIG. 9, the light source control signal generator 76 also outputs high (H) signals that turn ON the first light source 11R and the second light source 11G in the lighting period Br in the subframe period SF2 of the frame period Fn. Consequently, the color of light emitted from the light source device L in the subframe period SF2 becomes yellow (Y). As illustrated in FIG. 9, the light source control signal generator 76 also outputs a high (H) signal that turns ON the second light source 11G in the lighting period Br in the subframe period SF3 of the frame period Fn. Consequently, the color of light emitted from the light source device L in the subframe period SF3 becomes green (G). As illustrated in FIG. 9, the light source control signal generator 76 also outputs a high (H) signal that turns ON the third light source 11B in the lighting period Br in the subframe period SF4 of the frame period Fn. Consequently, the color of light emitted from the light source device L in the subframe period SF4 becomes blue (B).
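The mapping from each subframe-period lighting color to the light sources that receive a high (H) signal during the lighting period Br can be sketched as follows (illustrative only; the table name ENABLE is hypothetical, and the mapping simply follows additive mixing of the three primaries as described above):

```python
# Minimal sketch of the lighting control of FIG. 9: for each subframe color,
# which of the three light sources receive a high (H) signal during Br.
ENABLE = {
    "R": (1, 0, 0), "G": (0, 1, 0), "B": (0, 0, 1),
    "Y": (1, 1, 0), "C": (0, 1, 1), "M": (1, 0, 1), "W": (1, 1, 1),
}

for sf, color in enumerate(["R", "Y", "G", "B"], start=1):  # pattern 1
    r, g, b = ENABLE[color]
    print(f"SF{sf}: 11R={'H' if r else 'L'} 11G={'H' if g else 'L'} "
          f"11B={'H' if b else 'L'}  -> {color}")
```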
In this manner, a high (H) signal is output to at least one of the first light source 11R, the second light source 11G, and the third light source 11B in a period in which the lighting control is performed by operating the light source device L described with reference to FIG. 3, whereby light corresponding to the subframe-period lighting color is emitted. In the writing period Wr, a low (L) signal is output to the first light source 11R, the second light source 11G, and the third light source 11B to turn OFF the light source device L. In the first embodiment, the first light source 11R, the second light source 11G, and the third light source 11B are turned ON when a high (H) signal is supplied, and turned OFF when a low (L) signal is supplied. However, this is merely an example for describing high (H) and low (L) of the signal control, and the embodiment is not limited thereto. The relation between high (H) and low (L) and turning ON and turning OFF may be reversed. In this case, the transitions of high (H) and low (L) indicated in FIG. 9 to FIG. 13 are reversed. In the first embodiment, the light emission amounts of the first light source 11R, the second light source 11G, and the third light source 11B in the subframe periods SF are controlled by the length of the light emission period corresponding to the period during which the high (H) signal is supplied (for example, a period T1 or a period T2). However, the embodiment is not limited thereto. In order to control the light emission amounts of the first light source 11R, the second light source 11G, and the third light source 11B in the subframe periods SF, the light emission intensity may be controlled by controlling the amount of current supplied thereto. Alternatively, a combination of the control of the light emission period and the control of the light emission intensity may be performed to control the light emission amount. In the frame period Fn described with reference to FIG. 9, the period during which the high (H) signal is supplied in each lighting period Br is the period T1. Based on the subframe-period lighting colors applied in the frame period F by the subframe lighting color transition controller 73, the liquid crystal control signal generator 75 determines the gradation values indicated by the pixel signals included in the line image supplied to each line in the subframe periods SF. For example, assume that the gradation value of the pixel signal supplied to a certain pixel Pix in the frame period Fn is (R, G, B)=(40, 30, 10). In this case, (R, G, B)=(rb, gb, bb)=(30, 30, 0) can be output as yellow (Y). The color component of red (R) obtained by subtracting (R, G, B)=(rb, gb, bb)=(30, 30, 0) from (R, G, B)=(40, 30, 10) is (R, G, B)=(ra, ga, ba)=(10, 0, 0). The color component of green (G) obtained by subtracting (R, G, B)=(rb, gb, bb)=(30, 30, 0) from (R, G, B)=(40, 30, 10) is (R, G, B)=(rc, gc, bc)=(0, 0, 0). The color component of blue (B) obtained by subtracting (R, G, B)=(rb, gb, bb)=(30, 30, 0) from (R, G, B)=(40, 30, 10) is (R, G, B)=(rd, gd, bd)=(0, 0, 10). As illustrated in FIG. 9, the liquid crystal control signal generator 75 generates a pixel signal such that (ra, ga, ba) is written to the pixel Pix in the writing period Wr in the subframe period SF1. Thus, in the case of this example, the pixel signal corresponding to (ra, ga, ba) is included in one of the line images SL11, SL21, . . . , SL71 in the subframe period SF1 in the frame period Fn illustrated in FIG. 9.
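The gradation decomposition worked through above can be sketched as follows; this is a hedged illustration, assuming that the yellow subframe takes min(R, G) from the input gradation (a rule that reproduces the worked values but is not stated explicitly in the embodiment), and decompose_pattern1 is a hypothetical name:

```python
# Sketch of the gradation decomposition for pattern 1: the yellow subframe
# takes min(R, G) and the primary-color subframes take the residuals.
def decompose_pattern1(r, g, b):
    yellow = min(r, g)                      # (rb, gb, bb) = (yellow, yellow, 0)
    return {
        "SF1 (R)": (r - yellow, 0, 0),      # (ra, ga, ba)
        "SF2 (Y)": (yellow, yellow, 0),     # (rb, gb, bb)
        "SF3 (G)": (0, g - yellow, 0),      # (rc, gc, bc)
        "SF4 (B)": (0, 0, b),               # (rd, gd, bd)
    }

print(decompose_pattern1(40, 30, 10))
# SF1 (10, 0, 0), SF2 (30, 30, 0), SF3 (0, 0, 0), SF4 (0, 0, 10)
```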
As illustrated in FIG. 9, the liquid crystal control signal generator 75 also generates a pixel signal such that (rb, gb, bb) is written to the pixel Pix in the writing period Wr in the subframe period SF2. Thus, in this example, the pixel signal corresponding to (rb, gb, bb) is included in one of the line images SL12, SL22, . . . , SL72 in the subframe period SF2 in the frame period Fn illustrated in FIG. 9. With the same concept, as illustrated in FIG. 9, the liquid crystal control signal generator 75 generates a pixel signal such that (rc, gc, bc) is written to the pixel Pix in the writing period Wr in the subframe period SF3. As illustrated in FIG. 9, the liquid crystal control signal generator 75 also generates a pixel signal such that (rd, gd, bd) is written to the pixel Pix in the writing period Wr in the subframe period SF4. By combining the generation of the pixel signals by the liquid crystal control signal generator 75 and the lighting control of the light source device L by the light source control signal generator 76 described above, red (R) is output in the subframe period SF1 of the frame period Fn, yellow (Y) is output in the subframe period SF2 of the frame period Fn, green (G) is output in the subframe period SF3 of the frame period Fn, and blue (B) is output in the subframe period SF4 of the frame period Fn, as illustrated in FIG. 9. Hereinafter, in the descriptions of the frame period F(n+1) to the frame period F(n+4) with reference to FIG. 10 to FIG. 13, the differences from the control performed in the preceding frame period F will be specifically described. The frame period F(n+1), in which the intermediate pattern 1 illustrated in FIG. 8 is applied, differs from the frame period Fn in that the color of the subframe period SF2 is the “color obtained by adding green to yellow (Y+G)”. Thus, as illustrated in FIG. 10, within the period T1 in which the high (H) signals for turning ON the first light source 11R and the second light source 11G are supplied in the lighting period Br included in the subframe period SF2 of the frame period F(n+1), the light source control signal generator 76 sets the lighting period of the first light source 11R to the period T2. The period T2 is half of the period T1. Consequently, the color of light emitted from the light source device L in the subframe period SF2 becomes yellow (Y) in the period T2 and becomes green (G) in the period obtained by excluding the period T2 from the period T1. Thus, the color of light emitted from the light source device L in the subframe period SF2 becomes the “color obtained by adding green to yellow (Y+G)”. In a similar manner to the above, the liquid crystal control signal generator 75 determines the gradation values indicated by the pixel signals included in the line image supplied to each line in each subframe period SF, based on the subframe-period lighting color applied in each frame period F by the subframe lighting color transition controller 73. However, in the frame period F(n+1) in this example and thereafter, a different pattern is employed from that in the frame period Fn. Thus, the gradation value of at least one pixel signal among the pixel signals indicated by the input signal I is also changed. For example, assume that the gradation value of the pixel signal supplied to a certain pixel Pix in the frame period F(n+1) is (R, G, B)=(10, 70, 30). In this case, (R, G, B)=(0, 30, 30) could be output as cyan (C), but the mixed color among the colors of light emitted in the frame period F(n+1), in which the intermediate pattern 1 is applied, is yellow (Y).
Therefore, (R, G, B)=(rf, gf, bf)=(10, 10, 0) can be output as yellow (Y). The color component of red (R) obtained by subtracting (R, G, B)=(rf, gf, bf)=(10, 10, 0) from (R, G, B)=(10, 70, 30) is (R, G, B)=(re, ge, be)=(0, 0, 0). The color component of green (G) obtained by subtracting (R, G, B)=(rf, gf, bf)=(10, 10, 0) from (R, G, B)=(10, 70, 30) is (R, G, B)=(rg, gg, bg)=(0, 60, 0). The color component of blue (B) obtained by subtracting (R, G, B)=(rf, gf, bf)=(10, 10, 0) from (R, G, B)=(10, 70, 30) is (R, G, B)=(rh, gh, bh)=(0, 0, 30). As illustrated in FIG. 9, the liquid crystal control signal generator 75 generates a pixel signal such that (ra, ga, ba) is written to the pixel Pix in the writing period Wr in the subframe period SF1. However, in a frame period F in which one of the intermediate patterns is applied, the gradation value corresponding to the color component included in the color of light to be emitted in the subframe period SF in which the color of illumination light is changed from that in the immediately preceding frame is corrected in accordance with the light emission amount. The liquid crystal control signal generator 75 outputs the pixel signal after performing this correction. In the case of the examples illustrated in FIG. 9 and FIG. 10, the color of light in the subframe period SF2 is yellow (Y) in the frame period Fn, but is changed to the “color obtained by adding green to yellow (Y+G)” in the frame period F(n+1). In this example, the light emission amount of yellow (Y) in the subframe period SF2 in the frame period F(n+1) is half of that in the frame period Fn. The light emission amount of green (G) in the frame period F(n+1) is 1.5 times that in the frame period Fn. This is because the light emission amount in the period obtained by excluding the period T2 from the period T1 in the subframe period SF2 is added to the light emission amount in the subframe period SF3. Therefore, to perform an output corresponding to (R, G, B)=(rf, gf, bf)=(10, 10, 0) with yellow (Y) whose light emission amount is reduced by half, the liquid crystal control signal generator 75 multiplies (rf, gf, bf) by 2 to obtain (R, G, B)=(ri, gi, bi)=(20, 20, 0). The liquid crystal control signal generator 75 also distributes the pixel signal such that (R, G, B)=(rj, gj, bj)=(0, 20, 0), which is one third of (R, G, B)=(rg, gg, bg)=(0, 60, 0), is allocated to the subframe period SF2, and (R, G, B)=(rk, gk, bk)=(0, 40, 0), which is the remaining two thirds, is allocated to the subframe period SF3. Consequently, the gradation value of the pixel signal supplied to the pixel Pix in the subframe period SF2 becomes (R, G, B)=(20, 40, 0), which is obtained by adding (R, G, B)=(ri, gi, bi)=(20, 20, 0) and (R, G, B)=(rj, gj, bj)=(0, 20, 0). The gradation value of the pixel signal supplied to the pixel Pix in the subframe period SF3 becomes (R, G, B)=(rk, gk, bk)=(0, 40, 0). In the subframe period SF1 and the subframe period SF4, in which the lighting period is the period T1, the pixel signal in the frame period F(n+1) is output in the same way as in the frame period Fn described with reference to FIG. 9. Thus, the pixel signal corresponding to (re, ge, be) is included in one of the line images SL11, SL21, . . . , SL71 in the subframe period SF1 in the frame period F(n+1) illustrated in FIG. 10. The pixel signal corresponding to (rh, gh, bh) is also included in one of the line images SL14, SL24, . . . , SL74 in the subframe period SF4 in the frame period F(n+1) illustrated in FIG. 10.
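The correction worked through above can be sketched as follows; this is a hedged illustration under the same assumptions as before (the yellow portion is min(R, G), doubled to compensate the halved emission, and the green residual is split 1:2 between SF2 and SF3), with correct_intermediate1 a hypothetical name:

```python
# Sketch of the intermediate-pattern-1 correction: yellow emission is halved,
# so its gradation is doubled; the residual green is split 1/3 : 2/3 between
# SF2 (green lit for half of T1) and SF3 (green lit for all of T1).
def correct_intermediate1(r, g, b):
    yellow = min(r, g)
    red_rest, green_rest, blue_rest = r - yellow, g - yellow, b
    sf2_yellow = (2 * yellow, 2 * yellow, 0)          # compensate halved emission
    sf2_green = (0, green_rest // 3, 0)               # one third of residual green
    sf3_green = (0, green_rest - green_rest // 3, 0)  # remaining two thirds
    sf2 = tuple(a + c for a, c in zip(sf2_yellow, sf2_green))
    return {"SF1 (R)": (red_rest, 0, 0), "SF2 (Y+G)": sf2,
            "SF3 (G)": sf3_green, "SF4 (B)": (0, 0, blue_rest)}

print(correct_intermediate1(10, 70, 30))
# SF1 (0, 0, 0), SF2 (20, 40, 0), SF3 (0, 40, 0), SF4 (0, 0, 30)
```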
However, when the gradation value corresponding to the color of light whose lighting amount is reduced by half from the preceding frame is multiplied by 2, and the resultant value exceeds the upper limit for the gradation value to be supplied to the pixel Pix, the amount of the gradation value exceeding the upper limit is distributed to the subframe periods SF in which light in the colors corresponding to the gradation value is emitted. For example, assume that there is a pixel Pix supplied with the color component corresponding to the gradation value of yellow (Y) of (R, G, B)=(130, 130, 0) in the frame period F(n+1). Also assume that the gradation value is 8 bits, and the maximum value is 255. In this case, when the color component is multiplied by 2, (R, G, B)=(260, 260, 0) is obtained, and the gradation value of red (R) and the gradation value of green (G) exceed the maximum value. Thus, the liquid crystal control signal generator 75 supplies the maximum value (R, G, B)=(255, 255, 0) of the gradation value (R, G, B)=(260, 260, 0), obtained by multiplying the original value by 2, to the pixel Pix in the subframe period SF2. The liquid crystal control signal generator 75 also adds a red (R) component (R, G, B)=(5, 0, 0) to the gradation value of the pixel signal to be supplied to the pixel Pix in the subframe period SF1. The red (R) component (R, G, B)=(5, 0, 0) corresponds to the color component of red (R) in (R, G, B)=(5, 5, 0), which is obtained by subtracting the maximum value (R, G, B)=(255, 255, 0) from the gradation value (R, G, B)=(260, 260, 0) obtained by multiplying the original value by 2. The liquid crystal control signal generator 75 also adds a green (G) component (R, G, B)=(0, 5, 0) to the gradation value of the pixel signal to be supplied to the pixel Pix in the subframe period SF3. The green (G) component (R, G, B)=(0, 5, 0) corresponds to the color component of green (G) in (R, G, B)=(5, 5, 0), which is obtained by subtracting the maximum value (R, G, B)=(255, 255, 0) from the gradation value (R, G, B)=(260, 260, 0) obtained by multiplying the original value by 2. A case of yellow (Y) has been described above as an example. However, the same concept is also applicable to a case where a color exceeding the upper limit of the gradation value to be supplied to the pixel Pix is generated when the gradation value of another color is multiplied by 2. The frame period F(n+2), in which the intermediate pattern 2 illustrated in FIG. 8 is applied, differs from the frame period F(n+1) in that the color of the subframe period SF2 is green (G). Thus, as illustrated in FIG. 11, the light source control signal generator 76 sets the second light source 11G as the light source to which a high (H) signal is supplied during the period T1 in the lighting period Br included in the subframe period SF2 of the frame period F(n+2), and causes a low (L) signal to be supplied to the first light source 11R, to which the high (H) signal was supplied during the period T2 in the frame period F(n+1). Consequently, the color of light emitted from the light source device L in the subframe period SF2 becomes green (G).
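The overflow handling described above can be sketched as follows (illustrative only; redistribute and MAX are hypothetical names): doubling the yellow gradation can exceed the 8-bit maximum of 255, in which case the excess red goes to the red subframe SF1 and the excess green to the green subframe SF3:

```python
# Sketch of the clamping and redistribution of an out-of-range gradation.
MAX = 255

def redistribute(y_r, y_g):
    doubled = (2 * y_r, 2 * y_g)                 # e.g. (130, 130) -> (260, 260)
    sf2 = (min(doubled[0], MAX), min(doubled[1], MAX), 0)
    excess_r = max(doubled[0] - MAX, 0)          # added to SF1 (red subframe)
    excess_g = max(doubled[1] - MAX, 0)          # added to SF3 (green subframe)
    return sf2, (excess_r, 0, 0), (0, excess_g, 0)

sf2, add_sf1, add_sf3 = redistribute(130, 130)
print(sf2, add_sf1, add_sf3)   # (255, 255, 0) (5, 0, 0) (0, 5, 0)
```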
As described above, in the frame period F(n+2) in which the intermediate pattern 2 is applied, the liquid crystal control signal generator 75 also corrects, in accordance with the light emission amount, the gradation value corresponding to the color component included in the color of light to be emitted in the subframe period SF in which the color of illumination light is changed from the immediately preceding frame. The frame period F(n+2) described with reference to FIG. 11 includes two subframe periods SF, the subframe period SF2 and the subframe period SF3, in which light in green (G) is ON during the period T1. Thus, the liquid crystal control signal generator 75 divides the gradation value of the color component of green (G) included in the pixel signal to be supplied to the pixels Pix in the frame period F(n+2) into two pieces and supplies the divided gradation values to the subframe period SF2 and the subframe period SF3, respectively. For example, when a pixel Pix to be supplied with a pixel signal of (R, G, B)=(0, 20, 0) is in the frame period F(n+2), a pixel signal of (R, G, B)=(0, 10, 0) is supplied to the pixel Pix in the subframe period SF2, and a pixel signal of (R, G, B)=(0, 10, 0) is supplied in the subframe period SF3. The frame period F(n+3), in which the intermediate pattern 3 illustrated in FIG. 8 is applied, differs from the frame period F(n+2) in that the color of the subframe period SF3 becomes the “color obtained by adding cyan to green (G+C)”. Thus, as illustrated in FIG. 12, the light source control signal generator 76 supplies high (H) signals that turn ON the second light source 11G and the third light source 11B in the lighting period Br in the subframe period SF3 of the frame period F(n+3). In this example, the lighting period of the second light source 11G is the period T1, and the lighting period of the third light source 11B is the period T2. Consequently, the color of light emitted from the light source device L in the subframe period SF3 becomes cyan (C) in the period T2, and becomes green (G) in the period obtained by excluding the period T2 from the period T1. Thus, the color of light emitted from the light source device L in the subframe period SF3 becomes the “color obtained by adding cyan to green (G+C)”. As described above, in the frame period F(n+3) in which the intermediate pattern 3 is applied, the liquid crystal control signal generator 75 also corrects, in accordance with the light emission amount, the gradation value corresponding to the color component included in the color of light to be emitted in the subframe period SF in which the color of illumination light is changed from the immediately preceding frame. The specific concept is the same as that in the frame period F(n+1) described above, except that the colors of the gradation values to be corrected are cyan (C) and green (G), and the detailed description thereof will be omitted. The frame period F(n+4), in which the pattern 2 illustrated in FIG. 8 is applied, differs from the frame period F(n+3) in that the color of the subframe period SF3 becomes cyan (C). Thus, as illustrated in FIG. 13, the light source control signal generator 76 sets each of the periods during which the high (H) signals for turning ON the second light source 11G and the third light source 11B are supplied in the lighting period Br included in the subframe period SF3 of the frame period F(n+4) to the period T1. Consequently, the color of light emitted from the light source device L in the subframe period SF3 becomes cyan (C).
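The even split used for intermediate pattern 2 can be sketched in a few lines (illustrative only; split_green is a hypothetical name): with green lit for the full period T1 in both SF2 and SF3, the green gradation is divided equally between the two subframe periods:

```python
# Tiny sketch of the even split of the green gradation across SF2 and SF3.
def split_green(g):
    half = g // 2
    return (0, half, 0), (0, g - half, 0)

print(split_green(20))  # (0, 10, 0) (0, 10, 0)
```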
The frame period F(n+4), in which the pattern 2 is applied, does not correspond to a frame period F in which any one of the intermediate patterns is applied. Hence, the correction is not performed for the frame period F(n+4), and, with the same concept as that of the frame period Fn, the light source control signal generator 76 performs the allocation of the gradation value corresponding to the subframe-period lighting color of each subframe period SF. However, while the color component other than the primary color components is yellow (Y) in the frame period Fn, the color component other than the primary color components is cyan (C) in the frame period F(n+4). Hence, the liquid crystal control signal generator 75 allocates the color component that can be converted to cyan (C) in the gradation value indicated by the pixel signal to the subframe period SF3. In the first embodiment, as illustrated in FIG. 1, a synchronization control signal is also output from the image signal controller 70 to the timing controller 13. The liquid crystal control signal generator 75 may output the synchronization control signal with a pixel signal, or a dedicated circuit may output the synchronization control signal. The synchronization control signal is a signal for matching the output timing of the pixel signal from the signal output circuit 8 and the output timing of the drive signal from the scanning circuit 9. In general, when the pattern employed by the subframe display order determiner 72 is changed between the n-th frame period Fn and the (n+1)-th frame period F(n+1), the color of the frame image seldom changes in such a way that the pattern employed by the subframe display order determiner 72 is changed again in the (n+2)-th frame period F(n+2). Thus, in general, the pattern employed by the subframe display order determiner 72 for the frame period F(n+1) continues to be employed for the (n+4)-th frame period F(n+4) and thereafter. In the first embodiment, in view of such a tendency, priority is given to reducing the occurrence of flicker on the image by smoothly changing the color of the subframe period SF corresponding to the pattern employed for the (n+1)-th frame period F(n+1). If, at a timing prior to the frame period F (for example, the (n+4)-th frame period F(n+4)) in which the color change control corresponding to the change of the pattern to be employed by the subframe display order determiner 72 is to be completed, the pattern employed by the subframe display order determiner 72 is changed again, the control for adjusting the colors to the re-changed pattern may be started from that timing. Alternatively, the control for adjusting the colors to the re-changed pattern may be started after the application of the pattern employed by the subframe display order determiner 72 for the frame period F(n+1) is completed. The latter can further reduce the occurrence of flicker on the image. An example of a transition from the pattern 1 to the pattern 2 has been described above with reference to FIG. 8 to FIG. 13. However, the basic concept of transitions relating to the other patterns is the same. FIG. 14 is a diagram illustrating another example of subframe-period lighting colors of each frame period F, in a case of gradually changing the color of the subframe period SF whose color is changed before and after the change in pattern.
When the subframe display order determiner 72 employs the pattern 3 for the n-th frame period Fn and employs the pattern 4 for the (n+1)-th frame period F(n+1), the subframe lighting color transition controller 73 generates frame periods F assigned intermediate patterns for gradually changing the color of the subframe period SF whose color is changed before and after the change in pattern, before the frame period F in which the colors of the subframe periods SF corresponding to the pattern 4 are output, that is, between the frame period F corresponding to the pattern 3 and the frame period F corresponding to the pattern 4. More specifically, as illustrated in FIG. 14, for the n-th frame period Fn, the subframe lighting color transition controller 73 follows the pattern 3, and sets the color of the subframe period SF1 to red (R), the color of the subframe period SF2 to green (G), the color of the subframe period SF3 to blue (B), and the color of the subframe period SF4 to magenta (M). Next, for the (n+1)-th frame period F(n+1), the subframe lighting color transition controller 73 employs an intermediate pattern 4 in which the color of the subframe period SF1 is set to red (R), the color of the subframe period SF2 is set to green (G), the color of the subframe period SF3 is set to blue (B), and the color of the subframe period SF4 is set to the “color obtained by adding red to magenta (M+R)”. Next, for the (n+2)-th frame period F(n+2), the subframe lighting color transition controller 73 employs an intermediate pattern 5 in which the color of the subframe period SF1 is set to red (R), the color of the subframe period SF2 is set to green (G), the color of the subframe period SF3 is set to blue (B), and the color of the subframe period SF4 is set to red (R). Next, for the (n+3)-th frame period F(n+3), the subframe lighting color transition controller 73 employs an intermediate pattern 6 in which the color of the subframe period SF1 is set to the “color obtained by adding white to red (R+W)”, the color of the subframe period SF2 is set to green (G), the color of the subframe period SF3 is set to blue (B), and the color of the subframe period SF4 is set to red (R). When white (W) is added, the light sources of the light source device L corresponding to the primary colors (for example, green (G) and blue (B)) other than the primary color required for outputting the color to which white (W) is added (for example, red (R)) are ON during the period T2. When the color corresponding to a mixed color and the color corresponding to a primary color are added, the light source of the light source device L corresponding to the primary color is ON during the period T1, and the light source of the light source device L for emitting the color to be combined with the primary color to reproduce the mixed color is ON during the period T2. For the (n+4)-th frame period F(n+4), the subframe lighting color transition controller 73 sets the color of the subframe period SF1 to white (W), the color of the subframe period SF2 to green (G), the color of the subframe period SF3 to blue (B), and the color of the subframe period SF4 to red (R), thereby causing the pattern 4 to be in an employed state. In this example, in the pattern 4 employed for the frame period F(n+4) illustrated in FIG. 14, the order of colors of the subframe periods SF is white (W), green (G), blue (B), and red (R).
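The lighting rule for a color with white (W) added can be sketched as follows, assuming T2 is half of T1 as stated earlier for the Y+G case; lighting_periods is a hypothetical name and the numeric periods are illustrative:

```python
# Sketch of the "R+W" lighting rule: the light source of the required
# primary (red) is ON for the full period T1, and the remaining primaries
# (green, blue) are ON only for the shorter period T2, adding white.
T1 = 1.0
T2 = T1 / 2

def lighting_periods(base: str, add_white: bool):
    periods = {"11R": 0.0, "11G": 0.0, "11B": 0.0}
    periods["11" + base] = T1            # primary needed for the base color
    if add_white:
        for src in periods:              # other primaries lit during T2 only
            if periods[src] == 0.0:
                periods[src] = T2
    return periods

print(lighting_periods("R", add_white=True))
# {'11R': 1.0, '11G': 0.5, '11B': 0.5} -> "color obtained by adding white to red"
```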
However, white (W) is not a color arranged along the clockwise direction OD1 or the counterclockwise direction OD2 in the hue circle 200, and thus this order of colors of the subframe periods SF does not contradict the order in the counterclockwise direction OD2 in the hue circle 200. That is, white (W) does not conform to the definition of the order of colors in the hue circle 200. However, in the first embodiment, the order is controlled such that white (W) is in the subframe period SF between red (R) and green (G), in the subframe period SF immediately before green (G), or in the subframe period SF immediately after red (R). In this example, when the frame period F including the subframe period SF in which another color component is output is referred to as a “first frame period”, the first frame period in the example illustrated in FIG. 14 is the (n+3)-th frame period F(n+3). When the frame period F after the first frame period is referred to as a second frame period, the second frame period in the example illustrated in FIG. 14 is the (n+4)-th frame period F(n+4). When, among the predetermined number (m) of the subframe periods SF in the first frame period, the “subframe period at a certain position in the sequence” is a subframe period SF in which the “other color component” is output, the “subframe period at a certain position in the sequence” in the example illustrated in FIG. 14 is the first subframe period SF1, and the “other color component” in the example illustrated in FIG. 14 is the “color obtained by adding white to red (R+W)”. According to the above, in the example illustrated in FIG. 14, the “subframe period at a certain position in the sequence” in the second frame period, that is, the subframe period SF1 in the (n+4)-th frame period F(n+4), is the subframe period SF in which a “color component different from the other color component” is output. In this example, the “color component different from the other color component” in the example illustrated in FIG. 14 is the color component of white (W). FIG. 15 is a diagram illustrating another example of subframe-period lighting colors of each frame period F, in a case of gradually changing the color of the subframe period SF whose color is changed before and after the change in pattern. When the subframe display order determiner 72 employs the pattern 1 for the n-th frame period Fn and employs the pattern 4 for the (n+1)-th frame period F(n+1), the subframe lighting color transition controller 73 generates a frame period F assigned an intermediate pattern for gradually changing the color of the subframe period SF whose color is changed before and after the change in pattern, before the frame period F in which the colors of the subframe periods SF corresponding to the pattern 4 are output, that is, between the frame period F corresponding to the pattern 1 and the frame period F corresponding to the pattern 4. More specifically, as illustrated in FIG. 15, for the n-th frame period Fn, the subframe lighting color transition controller 73 follows the pattern 1, and sets the color of the subframe period SF1 to red (R), the color of the subframe period SF2 to yellow (Y), the color of the subframe period SF3 to green (G), and the color of the subframe period SF4 to blue (B).
Next, for the (n+1)-th frame period F(n+1), the subframe lighting color transition controller 73 employs an intermediate pattern 7 in which the color of the subframe period SF1 is set to red (R), the color of the subframe period SF2 is set to the “color obtained by adding white to yellow (Y+W)”, the color of the subframe period SF3 is set to green (G), and the color of the subframe period SF4 is set to blue (B). Then, for the (n+2)-th frame period F(n+2), the subframe lighting color transition controller 73 sets the color of the subframe period SF1 to white (W), the color of the subframe period SF2 to green (G), the color of the subframe period SF3 to blue (B), and the color of the subframe period SF4 to red (R), thereby causing the pattern 4 to be in an employed state. In this manner, in the first embodiment, when the combination of different colors in the patterns employed for the n-th frame period Fn and for the (n+1)-th frame period F(n+1) by the subframe display order determiner 72 is a combination of yellow (Y) and white (W), only one intermediate pattern is required. When the pattern before the change and the pattern after the change described with reference to FIG. 8, FIG. 14, FIG. 15, and other figures are reversed, the control is performed such that the order in the frame periods F illustrated in each figure is reversed. Even when the combination of the pattern before the change and the pattern after the change is a combination of patterns that are not illustrated, the concept of the control is the same as that described with reference to FIG. 8 to FIG. 15. As described above, in the first embodiment, the display panel (for example, the display panel P) that displays an image using the light from outside the display panel, and the light source 11 that emits light to the display panel, are provided. The light source 11 includes the first light source 11R that emits light in the first primary color, the second light source 11G that emits light in the second primary color, and the third light source 11B that emits light in the third primary color. The frame period F that is a display period of one frame image includes a predetermined number (m) of subframe periods SF, and m is four or greater. Color reproduction of one frame image is performed by the combination of colors output in the predetermined number of subframe periods SF. The output order of colors of the subframe periods SF is in the order of colors in the clockwise direction OD1 or in the counterclockwise direction OD2 in the hue circle 200. Consequently, it is possible to reduce the occurrence of color breakup due to a change in the colors between the subframe periods SF. Thus, it is possible to further reduce the occurrence of flicker on the image. When the mixed color component of each of the frame images to be displayed successively on the display panel (for example, the display panel P) is transitioned from the first mixed color component to the second mixed color component, and the frame period F prior to the transition includes the subframe period SF in which the first mixed color component is output, the frame period F including the subframe period SF in which another color component, located between the first mixed color component and the second mixed color component in the hue circle 200, is output is generated before the frame period F that is subsequent to the transition and that includes the subframe period SF in which the second mixed color component is output.
Consequently, even if the combination of colors of the subframe periods SF needs to be changed between successive frame periods F, it is possible, by smoothly changing the color of the subframe period SF between the frame periods F, to reduce the occurrence of flicker on the image. The first primary color is red (R), the second primary color is green (G), and the third primary color is blue (B). Consequently, it is possible to output colors using light sources that output common colors. The frame period F includes at least the subframe period SF in which the first primary color is output, the subframe period SF in which the second primary color is output, and the subframe period SF in which the third primary color is output. The color that is output in one of the other subframe periods SF included in the frame period F is yellow (Y), cyan (C), magenta (M), or white (W). Consequently, it is possible to output a variety of colors using light in a mixed color or in white (W). The color that is output in each of the other subframe periods SF corresponds to the color component having a larger proportion among the mixed color components and the white color component included in the frame image to be displayed in the frame period F including the other subframe period SF. By including such a subframe period SF, it is possible to further reduce the occurrence of color breakup. When the mixed color component contained most in the color of each of the frame images to be displayed successively on the display panel (for example, the display panel P) is transitioned from a mixed color component other than white (W) to white (W), and the frame period F prior to the transition includes the subframe period SF in which the mixed color component other than white (W) is output, the frame period F including the subframe period SF in which the color component obtained by making the mixed color component other than white (W) closer to white (W) is output is generated before the frame period F that is subsequent to the transition and that includes the subframe period SF in which white (W) is output. Consequently, even if the subframe period SF in which white (W) is output is included, it is possible to reduce the occurrence of flicker on the image by smoothly changing the color of the subframe period SF between the frame periods F. Each of the subframe periods SF includes the writing period Wr in which pixel signals are written to the pixels Pix provided in the display panel (for example, the display panel P), and the lighting period Br that is a period after the writing period Wr and in which the light source 11 is turned ON. Consequently, the FSC method can be implemented by emitting light in the color corresponding to the pixel signal written in the writing period Wr to the display panel in the lighting period Br. The display panel P is a display panel in which polymer-dispersed liquid crystals (for example, the liquid crystals 3) are sealed between two substrates facing each other (for example, the second substrate 20 and the first substrate 30). Consequently, it is possible to reduce the occurrence of flicker on the image in the FSC display device using the polymer-dispersed liquid crystals. In the first embodiment described above, the subframe display order determiner 72 sets the colors output in three subframe periods SF among the m subframe periods SF included in one frame period F to the first primary color, the second primary color, and the third primary color.
However, the embodiment is not limited thereto. Hereinafter, a modification in which the subframe display order determiner 72 does not limit the colors output in three subframe periods SF among the m subframe periods SF included in one frame period F to the first primary color, the second primary color, and the third primary color will be described with reference to FIG. 16. In the description of the modification, the same reference numerals denote the same items as those in the first embodiment, and the description thereof may be omitted. FIG. 16 is a diagram illustrating a relation between the color components in the subframe periods SF and the order of colors in the hue circle 200 in the modification. In FIG. 16, an example of a flow of time corresponding to the order of colors of the subframe periods SF is indicated by a solid line arrow. In FIG. 16, another example of a flow of time corresponding to the order of colors of the subframe periods SF is indicated by a broken line arrow. In the example illustrated in FIG. 16, the subframe-period lighting color of the subframe period SF1 corresponds to a color pattern CP1, for example. The color pattern CP1 is light in a mixed color including the color components of red (R), green (G), and blue (B), in which blue (B) is the strongest, red (R) is the weakest, and green (G) is in the middle. In the example illustrated in FIG. 16, the subframe-period lighting color of the subframe period SF2 corresponds to a color pattern CP2. The color pattern CP2 is light in a mixed color including the color components of red (R), green (G), and blue (B), in which green (G) is the strongest, red (R) is the weakest, and blue (B) is in the middle. In the example illustrated in FIG. 16, the subframe-period lighting color of the subframe period SF3 corresponds to a color pattern CP3. The color pattern CP3 is light in a mixed color including the color components of red (R), green (G), and blue (B), in which green (G) is the strongest, blue (B) is the weakest, and red (R) is in the middle. In the example illustrated in FIG. 16, the subframe-period lighting color of the subframe period SF4 corresponds to a color pattern CP4. The color pattern CP4 is light in a mixed color including the color components of red (R), green (G), and blue (B), in which red (R) is the strongest, green (G) is the weakest, and blue (B) is in the middle. As illustrated by an arrow A1 in the hue circle 200, the color pattern CP1 corresponds to the position close to blue (B) in cyan (C). As illustrated by an arrow A2 in the hue circle 200, the color pattern CP2 corresponds to the position close to green (G) in cyan (C). As illustrated by an arrow A3 in the hue circle 200, the color pattern CP3 corresponds to the position close to green (G) in yellow (Y). As illustrated by an arrow A4 in the hue circle 200, the color pattern CP4 corresponds to the position close to red (R) in magenta (M). Thus, the order of the color pattern CP1, the color pattern CP2, the color pattern CP3, and the color pattern CP4 is in the clockwise direction OD1 in the hue circle 200. As illustrated as another example, the order of the color pattern CP1, the color pattern CP2, the color pattern CP3, and the color pattern CP4 may be reversed from this example.
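The claim that CP1 through CP4 follow one direction around the hue circle can be illustrated with the standard-library HSV conversion; the RGB triples below are assumed example intensities chosen only to be consistent with the strongest/middle/weakest description above, not values from the embodiment:

```python
# Illustrative check that the four mixed color patterns are ordered along
# one direction of the hue circle; colorsys is Python's standard HSV module.
import colorsys

CP = {
    "CP1": (0.2, 0.5, 1.0),  # blue strongest, green middle, red weakest
    "CP2": (0.2, 1.0, 0.5),  # green strongest, blue middle, red weakest
    "CP3": (0.5, 1.0, 0.2),  # green strongest, red middle, blue weakest
    "CP4": (1.0, 0.2, 0.5),  # red strongest, blue middle, green weakest
}

for name, (r, g, b) in CP.items():
    h, _, _ = colorsys.rgb_to_hsv(r, g, b)
    print(f"{name}: hue = {h * 360:.0f} deg")
# CP1 ~ 218 deg (cyan, near blue), CP2 ~ 142 deg (cyan, near green),
# CP3 ~ 98 deg (yellow-green), CP4 ~ 338 deg (magenta, near red):
# monotonically decreasing hue modulo 360, one consistent direction.
```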
That is, the subframe-period lighting color during the subframe period SF1 may correspond to the color pattern CP4, the subframe-period lighting color during the subframe period SF2 may correspond to the color pattern CP3, the subframe-period lighting color during the subframe period SF3 may correspond to the color pattern CP2, and the subframe-period lighting color during the subframe period SF4 may correspond to the color pattern CP1. In this case, the order of the color pattern CP4, the color pattern CP3, the color pattern CP2, and the color pattern CP1 is in the counterclockwise direction OD2 in the hue circle 200. Like the color pattern CP1, the color pattern CP2, the color pattern CP3, and the color pattern CP4 described above, the subframe display order determiner 72 in the modification employs light in colors not limited to the primary colors as the color of light of each of the subframe periods SF. In a similar manner to the first embodiment, the subframe lighting color transition controller 73 controls the subframe-period lighting colors of the frame periods F in accordance with the presence or absence of constraints. In a similar manner to the first embodiment, the light source control signal generator 76 generates a control signal such that the light source device L emits light in the color that has been set as the subframe-period lighting color by the subframe lighting color transition controller 73. In a similar manner to the first embodiment, the liquid crystal control signal generator 75 determines the gradation values such that the output corresponds to the color of light emitted in each subframe period SF. Specifically, in the modification, the light emitted in the subframe periods SF may be a mixed color. Hence, the color components indicated by the gradation values of red (R), green (G), and blue (B) included in the pixel signal are also distributed to the subframe periods SF in which the mixed color is emitted. The degree of distribution corresponds to the strength of each primary color included in the light emitted in the subframe period SF. In the modification, the light emission amount of the primary color component included in each subframe period SF is a variable light emission amount that is not controlled by a fixed light emission period such as the period T1 or the period T2. Thus, the liquid crystal control signal generator 75 corrects, as needed, the gradation value to be supplied to each pixel Pix by the pixel signal such that the output by the display panel P corresponding to the gradation value indicated by the input signal I is performed under the condition in which light in the primary color component with such a variable light emission amount is emitted. More particularly, when the light emission amount of the primary color component in the frame period F is not the “predetermined light emission amount corresponding to the period T1”, the liquid crystal control signal generator 75 calculates, as a correction coefficient, the value obtained by inverting the “ratio between the light emission amount of the primary color component and the predetermined light emission amount”. The liquid crystal control signal generator 75 corrects the gradation value of the primary color component by multiplying the gradation value by the correction coefficient. Each of the color patterns CP1, CP2, CP3, and CP4 described with reference to FIG. 16 is merely an example of a mixed color that can be employed as the color of the subframe period SF in the modification.
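The correction coefficient described above can be sketched as follows (illustrative only; corrected_gradation and its parameters are hypothetical names, and the clamp to 255 assumes the 8-bit gradation range used earlier):

```python
# Sketch of the gradation correction: when a primary color is emitted with
# other than the predetermined amount (the amount corresponding to T1),
# the gradation is scaled by the inverse of the emission ratio.
def corrected_gradation(gradation, emission, predetermined):
    coeff = predetermined / emission      # inverse of emission/predetermined
    return min(round(gradation * coeff), 255)

# A subframe whose green light is emitted at half the predetermined amount
# needs its green gradation doubled:
print(corrected_gradation(10, emission=0.5, predetermined=1.0))  # -> 20
```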
The mixed color that can be employed as the color of the subframe period SF in the modification is not limited to the color patterns CP1, CP2, CP3, and CP4, and any mixed color may be employed. Some of the colors of the subframe periods SF in the modification may also be a primary color, a mixed color of the primary colors, or white (W). In the first embodiment and the modification, it is only required that the output order of colors of the subframe periods SF be in the order of colors in the clockwise direction OD1 or in the counterclockwise direction OD2 in the hue circle 200. According to the modification as described above, it is possible, by distributing the primary color components to the subframe periods SF, to reduce the probability of an extreme change in a color component due to the change in colors between successive frame periods F. Thus, it is possible to further reduce the occurrence of color breakup, and to further reduce the occurrence of flicker on the image due to the color breakup. The number of subframe periods is not limited to the case where m=4, and may be a case where m=5 or a case where m=6. Hereinafter, a second embodiment in a case where m=5 will be described with reference to FIG. 17 to FIG. 21. A third embodiment in a case where m=6 will also be described with reference to FIG. 22 to FIG. 24. In the descriptions of the second embodiment and the third embodiment, the same reference numerals denote the same items as those in the first embodiment, and the description thereof may be omitted.

Second Embodiment

FIG. 17 is a diagram illustrating combinations of colors of the subframe periods SF that may occur when m=5. In the second embodiment, the combination of colors determined by the subframe display order determiner 72 is a pattern 11, a pattern 12, a pattern 13, a pattern 14, a pattern 15, or a pattern 16 illustrated in FIG. 17. The pattern 11 is a pattern in which the first color is red (R), the second color is yellow (Y), the third color is green (G), the fourth color is cyan (C), and the fifth color is blue (B). The pattern 12 is a pattern in which the first color is red (R), the second color is yellow (Y), the third color is green (G), the fourth color is blue (B), and the fifth color is magenta (M). The pattern 13 is a pattern in which the first color is red (R), the second color is yellow (Y), the third color is white (W), the fourth color is green (G), and the fifth color is blue (B). The pattern 14 is a pattern in which the first color is red (R), the second color is white (W), the third color is green (G), the fourth color is cyan (C), and the fifth color is blue (B). The pattern 15 is a pattern in which the first color is red (R), the second color is green (G), the third color is cyan (C), the fourth color is blue (B), and the fifth color is magenta (M). The pattern 16 is a pattern in which the first color is red (R), the second color is white (W), the third color is green (G), the fourth color is blue (B), and the fifth color is magenta (M). In a similar manner to the first embodiment, the subframe lighting color transition controller 73 in the second embodiment controls the color transition orders of the subframe periods SF in each frame period F, based on the combinations of colors of the subframe periods SF determined by the subframe display order determiner 72. Hereinafter, an example of the control of the subframe-period lighting colors performed in the second embodiment when the constraint described above is imposed will be described with reference to FIG. 18 to FIG. 21.
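The six m=5 combinations listed above can be encoded in the same way as the m=4 patterns; the following sketch (hypothetical names, illustrative only) also checks which subframe periods differ between pattern 12 and the end state of FIG. 18 described below:

```python
# The six m=5 color combinations listed above, as a hypothetical lookup table.
PATTERNS_M5 = {
    11: ["R", "Y", "G", "C", "B"],
    12: ["R", "Y", "G", "B", "M"],
    13: ["R", "Y", "W", "G", "B"],
    14: ["R", "W", "G", "C", "B"],
    15: ["R", "G", "C", "B", "M"],
    16: ["R", "W", "G", "B", "M"],
}

def diff(a, b):
    """1-based indices of subframe periods whose colors differ."""
    return [i + 1 for i, (x, y) in enumerate(zip(a, b)) if x != y]

# Only SF2 and SF3 change toward the FIG. 18 end state (R, G, C, B, M),
# which is why two gradual transitions suffice in the example below.
print(diff(PATTERNS_M5[12], ["R", "G", "C", "B", "M"]))  # -> [2, 3]
```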
The concept of the control performed for the combinations of patterns described with reference to FIG. 18 to FIG. 21 is applicable to any other combination. FIG. 18 is a diagram illustrating an example of subframe-period lighting colors in each frame period F, in a case of gradually changing the color of the subframe period SF in the second embodiment. When the subframe display order determiner 72 employs the pattern 12 for the n-th frame period Fn and employs the pattern 14 for the (n+1)-th frame period F(n+1), the subframe lighting color transition controller 73 generates frame periods F assigned intermediate patterns for gradually changing the color of the subframe period SF whose color is changed before and after the change in pattern, before the frame period F in which the colors of the subframe periods SF corresponding to the pattern 14 are output, that is, between the frame period F corresponding to the pattern 12 and the frame period F corresponding to the pattern 14. More specifically, as illustrated in FIG. 18, for the n-th frame period Fn, the subframe lighting color transition controller 73 follows the pattern 12, and sets the color of the subframe period SF1 to red (R), the color of the subframe period SF2 to yellow (Y), the color of the subframe period SF3 to green (G), the color of the subframe period SF4 to blue (B), and the color of the subframe period SF5 to magenta (M). Next, for the (n+1)-th frame period F(n+1), the subframe lighting color transition controller 73 employs an intermediate pattern 11 in which the color of the subframe period SF1 is set to red (R), the color of the subframe period SF2 is set to the “color obtained by adding green to yellow (Y+G)”, the color of the subframe period SF3 is set to green (G), the color of the subframe period SF4 is set to blue (B), and the color of the subframe period SF5 is set to magenta (M). Next, for the (n+2)-th frame period F(n+2), the subframe lighting color transition controller 73 employs an intermediate pattern 12 in which the color of the subframe period SF1 is set to red (R), the color of the subframe period SF2 is set to green (G), the color of the subframe period SF3 is set to green (G), the color of the subframe period SF4 is set to blue (B), and the color of the subframe period SF5 is set to magenta (M). Next, for the (n+3)-th frame period F(n+3), the subframe lighting color transition controller 73 employs an intermediate pattern 13 in which the color of the subframe period SF1 is set to red (R), the color of the subframe period SF2 is set to green (G), the color of the subframe period SF3 is set to the “color obtained by adding cyan to green (G+C)”, the color of the subframe period SF4 is set to blue (B), and the color of the subframe period SF5 is set to magenta (M). Then, for the (n+4)-th frame period F(n+4), the subframe lighting color transition controller 73 sets the color of the subframe period SF1 to red (R), the color of the subframe period SF2 to green (G), the color of the subframe period SF3 to cyan (C), the color of the subframe period SF4 to blue (B), and the color of the subframe period SF5 to magenta (M), thereby causing the pattern 14 to be in an employed state. FIG. 19 is a diagram illustrating another example of subframe-period lighting colors in each frame period F, in a case of gradually changing the color of the subframe period SF in the second embodiment.
For example, when the subframe display order determiner72employs the pattern11for the n-th frame period Fn and employs the pattern15for the (n+1)-th frame period F(n+1), the subframe lighting color transition controller73generates the frame periods F assigned intermediate patterns for gradually changing the color of the subframe period SF, the color of which is changed before and after the change in pattern, before the frame period F in which the colors of the subframe periods SF corresponding to the pattern15are output, that is, between the frame period F corresponding to the pattern11and the frame period F corresponding to the pattern15. More specifically, as illustrated inFIG.19, for the n-th frame period Fn, the subframe lighting color transition controller73follows the pattern11, and sets the color of the subframe period SF1to red (R), sets the color of the subframe period SF2to yellow (Y), sets the color of the subframe period SF3to green (G), sets the color of the subframe period SF4to cyan (C), and sets the color of the subframe period SF5to blue (B). Next, for the (n+1)-th frame period F(n+1), the subframe lighting color transition controller73employs an intermediate pattern14in which the color of the subframe period SF1is set to red (R), the color of the subframe period SF2is set to “color obtained by adding red to yellow (Y+R)”, the color of the subframe period SF3is set to green (G), the color of the subframe period SF4is set to cyan (C), and the color of the subframe period SF5is set to blue (B). Next, for the (n+2)-th frame period F(n+2), the subframe lighting color transition controller73employs an intermediate pattern15in which the color of the subframe period SF1is set to red (R), the color of the subframe period SF2is set to red (R), the color of the subframe period SF3is set to green (G), the color of the subframe period SF4is set to cyan (C), and the color of the subframe period SF5is set to blue (B). Next, for the (n+3)-th frame period F(n+3), the subframe lighting color transition controller73employs an intermediate pattern16in which the color of the subframe period SF1is set to “color obtained by adding magenta to red (R+M)”, the color of the subframe period SF2is set to red (R), the color of the subframe period SF3is set to green (G), the color of the subframe period SF4is set to cyan (C), and the color of the subframe period SF5is set to blue (B). Then, for the (n+4)-th frame period F(n+4), the subframe lighting color transition controller73sets the color of the subframe period SF1to magenta (M), sets the color of the subframe period SF2to red (R), sets the color of the subframe period SF3to green (G), sets the color of the subframe period SF4to cyan (C), and sets the color of the subframe period SF5to blue (B), thereby causing the subframe-period lighting colors corresponding to the pattern15to be in an employed state. In the pattern15illustrated inFIG.19andFIG.21, which will be described later, the order of colors of the subframe periods SF is magenta (M), red (R), green (G), cyan (C), and blue (B), and is in the counterclockwise direction OD2in the hue circle200. In this example, when the frame period F including the subframe period SF in which another color component is output is referred to as the “first frame period”, the (n+1)-th frame period F(n+1) in the example illustrated inFIG.19corresponds to the first frame period. 
When the frame period F subsequent to the first frame period is referred to as the second frame period, the (n+2)-th frame period F(n+2) in the example illustrated inFIG.19corresponds to the second frame period. Among the predetermined number (m) of the subframe periods SF in the first frame period, when the “subframe period at a certain position in the sequence” is the subframe period SF in which the “other color component” is output, the “subframe period at a certain position in the sequence” in the (n+1)-th frame period F(n+1) in the example illustrated inFIG.19is the second subframe period SF2, and the “other color component” in the example illustrated inFIG.19is the “color obtained by adding red to yellow (Y+R)”. According to the above, in the example illustrated inFIG.19, the “subframe period at a certain position in the sequence” in the second frame period, that is, the subframe period SF2in the (n+2)-th frame period F(n+2) is the subframe period SF in which the “color component different from the other color component” is output. In this example, the “color component different from the other color component” in the example illustrated inFIG.19is red (R), that is, the “color component corresponding to a part of the colors contained in the other color component”. When the frame period F including the subframe period SF in which another color component is output is referred to as the “first frame period”, the (n+3)-th frame period F(n+3) in the example illustrated inFIG.19corresponds to the first frame period. When the frame period F subsequent to the first frame period is referred to as the second frame period, the (n+4)-th frame period F(n+4) in the example illustrated inFIG.19corresponds to the second frame period. Among the predetermined number (m) of the subframe periods SF in the first frame period, when the “subframe period at a certain position in the sequence” is the subframe period SF in which the “other color component” is output, the “subframe period at a certain position in the sequence” in the (n+3)-th frame period F(n+3) in the example illustrated inFIG.19is the first subframe period SF1, and the “other color component” in the example illustrated inFIG.19is the “color obtained by adding magenta to red (R+M)”. According to the above, in the example illustrated inFIG.19, the “subframe period at a certain position in the sequence” in the second frame period, that is, the subframe period SF1in the (n+4)-th frame period F(n+4) is the subframe period SF in which the “color component different from the other color component” is output. In the case of the example illustrated inFIG.19, the “color component different from the other color component” is magenta (M), that is, the “color component corresponding to a part of the colors contained in the other color component”. FIG.20is a diagram illustrating another example of subframe-period lighting colors in each frame period F, in a case of gradually changing the color of the subframe period SF in the second embodiment. 
For example, when the subframe display order determiner72employs the pattern13for the n-th frame period Fn and employs the pattern12for the (n+1)-th frame period F(n+1), the subframe lighting color transition controller73generates the frame periods F assigned intermediate patterns for gradually changing the color of the subframe period SF, the color of which is changed before and after the change in pattern, before the frame period F in which the colors of the subframe periods SF corresponding to the pattern12are output, that is, between the frame period F corresponding to the pattern13and the frame period F corresponding to the pattern12. More specifically, as illustrated inFIG.20, for the n-th frame period Fn, the subframe lighting color transition controller73follows the pattern13, and sets the color of the subframe period SF1to red (R), sets the color of the subframe period SF2to yellow (Y), sets the color of the subframe period SF3to white (W), sets the color of the subframe period SF4to green (G), and sets the color of the subframe period SF5to blue (B). Next, for the (n+1)-th frame period F(n+1), the subframe lighting color transition controller73employs an intermediate pattern17in which the color of the subframe period SF1is set to red (R), the color of the subframe period SF2is set to “color obtained by adding red to yellow (Y+R)”, the color of the subframe period SF3is set to “color obtained by adding yellow to white (W+Y)”, the color of the subframe period SF4is set to green (G), and the color of the subframe period SF5is set to blue (B). Next, for the (n+2)-th frame period F(n+2), the subframe lighting color transition controller73employs an intermediate pattern18in which the color of the subframe period SF1is set to red (R), the color of the subframe period SF2is set to red (R), the color of the subframe period SF3is set to “color obtained by adding yellow to white (W+Y)”, the color of the subframe period SF4is set to green (G), and the color of the subframe period SF5is set to blue (B). Next, for the (n+3)-th frame period F(n+3), the subframe lighting color transition controller73employs an intermediate pattern19in which the color of the subframe period SF1is set to “color obtained by adding magenta to red (R+M)”, the color of the subframe period SF2is set to red (R), the color of the subframe period SF3is set to “color obtained by adding yellow to white (W+Y)”, the color of the subframe period SF4is set to green (G), and the color of the subframe period SF5is set to blue (B). Then, for the (n+4)-th frame period F(n+4), the subframe lighting color transition controller73sets the color of the subframe period SF1to magenta (M), sets the color of the subframe period SF2to red (R), sets the color of the subframe period SF3to yellow (Y), sets the color of the subframe period SF4to green (G), and sets the color of the subframe period SF5to blue (B), thereby causing the subframe-period lighting colors corresponding to the pattern12to be in an employed state. As described with reference toFIG.20, when there are three intermediate patterns before and after the change, and when a combination of yellow (Y) and white (W) is included in the combinations of colors before and after the change in the subframe periods SF, three intermediate patterns in each of which the “color obtained by adding yellow to white (W+Y)” serves as the subframe-period lighting color in the subframe period SF are provided consecutively. 
In the pattern12illustrated inFIG.20, the order of colors of the subframe periods SF is magenta (M), red (R), yellow (Y), green (G), and blue (B), and is in the counterclockwise direction OD2in the hue circle200. FIG.21is a diagram illustrating another example of subframe-period lighting colors in each frame period F, in a case of gradually changing the color of the subframe period SF in the second embodiment. For example, when the subframe display order determiner72employs the pattern13for the n-th frame period Fn and employs the pattern15for the (n+1)-th frame period F(n+1), the subframe lighting color transition controller73generates the frame periods F assigned intermediate patterns for gradually changing the color of the subframe period SF, the color of which is changed before and after the change in pattern, before the frame period F in which the colors of the subframe periods SF corresponding to the pattern15are output, that is, between the frame period F corresponding to the pattern13and the frame period F corresponding to the pattern15. More specifically, as illustrated inFIG.21, for the n-th frame period Fn, the subframe lighting color transition controller73follows the pattern13, and sets the color of the subframe period SF1to red (R), sets the color of the subframe period SF2to yellow (Y), sets the color of the subframe period SF3to white (W), sets the color of the subframe period SF4to green (G), and sets the color of the subframe period SF5to blue (B). Next, for the (n+1)-th frame period F(n+1), the subframe lighting color transition controller73employs an intermediate pattern20in which the color of the subframe period SF1is set to red (R), the color of the subframe period SF2is set to “color obtained by adding red to yellow (Y+R)”, the color of the subframe period SF3is set to “color obtained by adding green to white (W+G)”, the color of the subframe period SF4is set to green (G), and the color of the subframe period SF5is set to blue (B). Next, for the (n+2)-th frame period F(n+2), the subframe lighting color transition controller73employs an intermediate pattern21in which the color of the subframe period SF1is set to red (R), the color of the subframe period SF2is set to red (R), the color of the subframe period SF3is set to green (G), the color of the subframe period SF4is set to green (G), and the color of the subframe period SF5is set to blue (B). Next, for the (n+3)-th frame period F(n+3), the subframe lighting color transition controller73employs an intermediate pattern22in which the color of the subframe period SF1is set to “color obtained by adding magenta to red (R+M)”, the color of the subframe period SF2is set to red (R), the color of the subframe period SF3is set to green (G), the color of the subframe period SF4is set to “color obtained by adding cyan to green (G+C)”, and the color of the subframe period SF5is set to blue (B). Then, for the (n+4)-th frame period F(n+4), the subframe lighting color transition controller73sets the color of the subframe period SF1to magenta (M), sets the color of the subframe period SF2to red (R), sets the color of the subframe period SF3to green (G), sets the color of the subframe period SF4to cyan (C), and sets the color of the subframe period SF5to blue (B), thereby causing the subframe-period lighting colors corresponding to the pattern15to be in an employed state. Except as specifically described above, the second embodiment is the same as the first embodiment. 
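To make the gradual-transition behavior concrete, the FIG. 19 sequence can be written out as data and checked programmatically. This is an illustrative sketch only: the frame list and the property tested (exactly one subframe period changes per frame, and the new color shares a component label with the old one) are drawn from the walk-through above, while the helper names and the "+"-separated component notation are assumptions of the sketch.

def comps(label):
    # Color label -> set of component labels ("Y+R" -> {"Y", "R"}).
    return set(label.split("+"))

# FIG. 19 sequence:       SF1    SF2   SF3  SF4  SF5
FIG19 = [
    ["R",   "Y",   "G", "C", "B"],  # Fn     : pattern 11
    ["R",   "Y+R", "G", "C", "B"],  # F(n+1) : intermediate pattern 14
    ["R",   "R",   "G", "C", "B"],  # F(n+2) : intermediate pattern 15
    ["R+M", "R",   "G", "C", "B"],  # F(n+3) : intermediate pattern 16
    ["M",   "R",   "G", "C", "B"],  # F(n+4) : pattern 15 (rotated order)
]

for prev, cur in zip(FIG19, FIG19[1:]):
    changed = [i for i, (a, b) in enumerate(zip(prev, cur)) if a != b]
    assert len(changed) == 1               # exactly one subframe changes per frame
    i = changed[0]
    assert comps(prev[i]) & comps(cur[i])  # the change keeps a shared component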
In the second embodiment also, the specific concept of the operations of the liquid crystal control signal generator75and the light source control signal generator76is the same as that in the first embodiment. In the frame period F in the second embodiment, the number of subframe periods SF in which the color other than the primary colors is output is two or more. The frame period F includes the subframe period SF in which the color component having the largest proportion among the color components of yellow (Y), cyan (C), magenta (M), and white (W) included in the frame image to be displayed in the frame period F is output, and the subframe period SF in which the color component having the second largest proportion is output. Consequently, more than one subframe period SF in which a mixed color is output can be included in the frame period F, thereby further reducing the occurrence of color breakup. Consequently, it is possible to further reduce the occurrence of a flicker on the image that would be caused by the color breakup. Third Embodiment FIG.22is a diagram illustrating combinations of colors of the subframe periods SF that may be generated when m=6. In a third embodiment, the combination of colors determined by the subframe display order determiner72is a pattern21, a pattern22, a pattern23, or a pattern24illustrated inFIG.22. The pattern21is a pattern in which the first color is red (R), the second color is yellow (Y), the third color is green (G), the fourth color is cyan (C), the fifth color is blue (B), and the sixth color is magenta (M). The pattern22is a pattern in which the first color is red (R), the second color is yellow (Y), the third color is white (W), the fourth color is green (G), the fifth color is cyan (C), and the sixth color is blue (B). The pattern23is a pattern in which the first color is red (R), the second color is yellow (Y), the third color is white (W), the fourth color is green (G), the fifth color is blue (B), and the sixth color is magenta (M). The pattern24is a pattern in which the first color is red (R), the second color is white (W), the third color is green (G), the fourth color is cyan (C), the fifth color is blue (B), and the sixth color is magenta (M). In a similar manner to the first embodiment, the subframe lighting color transition controller73in the third embodiment controls the color transition orders of the subframe periods SF in each frame period F, based on the combinations of colors of the subframe periods SF determined by the subframe display order determiner72. Hereinafter, an example of the control of the subframe-period lighting color performed in the third embodiment when the constraint described above is imposed, will be described with reference toFIG.23andFIG.24. The concept of the control performed for the combination of the patterns described with reference toFIG.23andFIG.24is applicable to any combination other than the combination. FIG.23is a diagram illustrating an example of subframe-period lighting colors in each frame period F, in a case of gradually changing the color of the subframe period SF in the third embodiment. 
When the subframe display order determiner72employs the pattern21for the n-th frame period Fn and employs the pattern23for the (n+1)-th frame period F(n+1), the subframe lighting color transition controller73generates the frame periods F assigned intermediate patterns for gradually changing the color of the subframe period SF, the color of which is changed before and after the change in pattern, before the frame period F in which the colors of the subframe periods SF corresponding to the pattern23are output, that is, between the frame period F corresponding to the pattern21and the frame period F corresponding to the pattern23. More specifically, as illustrated inFIG.23, for the n-th frame period Fn, the subframe lighting color transition controller73follows the pattern21, and sets the color of the subframe period SF1to red (R), sets the color of the subframe period SF2to yellow (Y), sets the color of the subframe period SF3to green (G), sets the color of the subframe period SF4to cyan (C), sets the color of the subframe period SF5to blue (B), and sets the color of the subframe period SF6to magenta (M). Next, for the (n+1)-th frame period F(n+1), the subframe lighting color transition controller73employs an intermediate pattern31in which the color of the subframe period SF1is set to red (R), the color of the subframe period SF2is set to yellow (Y), the color of the subframe period SF3is set to green (G), the color of the subframe period SF4is set to “color obtained by adding green to cyan (C+G)”, the color of the subframe period SF5is set to blue (B), and the color of the subframe period SF6is set to magenta (M). Next, for the (n+2)-th frame period F(n+2), the subframe lighting color transition controller73employs an intermediate pattern32in which the color of the subframe period SF1is set to red (R), the color of the subframe period SF2is set to yellow (Y), the color of the subframe period SF3is set to green (G), the color of the subframe period SF4is set to green (G), the color of the subframe period SF5is set to blue (B), and the color of the subframe period SF6is set to magenta (M). Next, for the (n+3)-th frame period F(n+3), the subframe lighting color transition controller73employs an intermediate pattern33in which the color of the subframe period SF1is set to red (R), the color of the subframe period SF2is set to yellow (Y), the color of the subframe period SF3is set to “color obtained by adding white to green (G+W)”, the color of the subframe period SF4is set to green (G), the color of the subframe period SF5is set to blue (B), and the color of the subframe period SF6is set to magenta (M). Then, for the (n+4)-th frame period F(n+4), the subframe lighting color transition controller73sets the color of the subframe period SF1to red (R), sets the color of the subframe period SF2to yellow (Y), sets the color of the subframe period SF3to white (W), sets the color of the subframe period SF4to green (G), sets the color of the subframe period SF5to blue (B), and sets the color of the subframe period SF6to magenta (M), thereby causing the pattern23to be in an employed state. FIG.24is a diagram illustrating another example of subframe-period lighting colors in each frame period F, in a case of gradually changing the color of the subframe period SF in the third embodiment. 
When the subframe display order determiner72employs the pattern22for the n-th frame period Fn and employs the pattern23for the (n+1)-th frame period F(n+1), the subframe lighting color transition controller73generates the frame periods F assigned intermediate patterns for gradually changing the color of the subframe period SF, the color of which is changed before and after the change in pattern, before the frame period F in which the colors of the subframe periods SF corresponding to the pattern23are output, that is, between the frame period F corresponding to the pattern22and the frame period F corresponding to the pattern23. More specifically, as illustrated inFIG.24, for the n-th frame period Fn, the subframe lighting color transition controller73follows the pattern22, and sets the color of the subframe period SF1to red (R), sets the color of the subframe period SF2to yellow (Y), sets the color of the subframe period SF3to white (W), sets the color of the subframe period SF4to green (G), sets the color of the subframe period SF5to cyan (C), and sets the color of the subframe period SF6to blue (B). Next, for the (n+1)-th frame period F(n+1), the subframe lighting color transition controller73employs an intermediate pattern34in which the color of the subframe period SF1is set to red (R), the color of the subframe period SF2is set to yellow (Y), the color of the subframe period SF3is set to white (W), the color of the subframe period SF4is set to green (G), the color of the subframe period SF5is set to “color obtained by adding blue to cyan (C+B)”, and the color of the subframe period SF6is set to blue (B). Next, for the (n+2)-th frame period F(n+2), the subframe lighting color transition controller73employs an intermediate pattern35in which the color of the subframe period SF1is set to red (R), the color of the subframe period SF2is set to yellow (Y), the color of the subframe period SF3is set to white (W), the color of the subframe period SF4is set to green (G), the color of the subframe period SF5is set to blue (B), and the color of the subframe period SF6is set to blue (B). Next, for the (n+3)-th frame period F(n+3), the subframe lighting color transition controller73employs an intermediate pattern36in which the color of the subframe period SF1is set to red (R), the color of the subframe period SF2is set to yellow (Y), the color of the subframe period SF3is set to white (W), the color of the subframe period SF4is set to green (G), the color of the subframe period SF5is set to blue (B), and the color of the subframe period SF6is set to “color obtained by adding magenta to blue (B+M)”. Then, for the (n+4)-th frame period F(n+4), the subframe lighting color transition controller73sets the color of the subframe period SF1to red (R), sets the color of the subframe period SF2to yellow (Y), sets the color of the subframe period SF3to white (W), sets the color of the subframe period SF4to green (G), sets the color of the subframe period SF5to blue (B), and sets the color of the subframe period SF6to magenta (M), thereby causing the pattern23to be in an employed state. Except as specifically described above, the third embodiment is the same as the first embodiment. In the third embodiment also, the specific concept of the operations of the liquid crystal control signal generator75and the light source control signal generator76is the same as that in the first embodiment. With the third embodiment, it is possible to reduce the occurrence of a flicker on the image even when m=6. 
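As a further illustrative aid, the second-embodiment summary above states that the frame period F includes the subframe period SF outputting the color component with the largest proportion among yellow (Y), cyan (C), magenta (M), and white (W) in the frame image, and the subframe period SF outputting the component with the second-largest proportion. A minimal sketch of such a selection follows; it is an assumption for illustration, not the patent's algorithm, and the proportion values are made up.

def pick_mixed_subframe_colors(proportions, count=2):
    # Return the `count` mixed colors with the largest proportions in the
    # frame image; these are the ones assigned to mixed-color subframe periods.
    return sorted(proportions, key=proportions.get, reverse=True)[:count]

# Made-up proportions of Y, C, M, and W contained in one frame image.
frame_image_proportions = {"Y": 0.35, "C": 0.10, "M": 0.05, "W": 0.25}
print(pick_mixed_subframe_colors(frame_image_proportions))  # ['Y', 'W']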
The modification described above is also applicable to the second embodiment and the third embodiment. That is, in the second embodiment and the third embodiment also, the subframe display order determiner 72 need not limit the colors output in three of the m subframe periods SF included in one frame period F to the first primary color, the second primary color, and the third primary color. In the embodiments described above, the number of intermediate patterns is three (three frame periods) when the color of the subframe period assigned a color not corresponding to any of the first primary color, the second primary color, and the third primary color is changed from a mixed color other than white (W) to another mixed color other than white (W). However, the embodiment is not limited thereto. The number of frame periods generated as intermediate patterns may be more than three or fewer than three. FIG. 25 is a diagram illustrating another example of subframe-period lighting colors in each frame period F, in a case of gradually changing the color of the subframe period SF whose color is changed before and after the change in pattern. In this example, for the n-th frame period Fn, the subframe lighting color transition controller 73 follows the pattern 1 and sets the colors of the subframe periods SF1 to SF4 to red (R), yellow (Y), green (G), and blue (B), respectively. Next, for the (n+1)-th frame period F(n+1), it employs an intermediate pattern 41 in which the color of the subframe period SF2 is set to "one variation of the mixed color of yellow and green (YG1)", with the remaining subframe periods unchanged. Next, for the (n+2)-th frame period F(n+2), it employs an intermediate pattern 42 in which the color of the subframe period SF2 is set to "another variation of the mixed color of yellow and green (YG2)", with the remaining subframe periods unchanged. In this example, the ratio of the color component Φ to the color component Ψ in "one variation of the mixed color of Φ and Ψ (ΦΨ1)" is different from that in "another variation of the mixed color of Φ and Ψ (ΦΨ2)": in ΦΨ1 the ratio of the color components satisfies Φ>Ψ, and in ΦΨ2 it satisfies Φ<Ψ (a small numerical sketch of such ratio-stepped blends follows the FIG. 26 discussion below). Next, for the (n+3)-th frame period F(n+3), it employs an intermediate pattern 43 in which the color of the subframe period SF2 is set to green (G), with the subframe periods SF1, SF3, and SF4 remaining red (R), green (G), and blue (B), respectively.
Next, for the (n+4)-th frame period F(n+4), the subframe lighting color transition controller 73 employs an intermediate pattern 44 in which the color of the subframe period SF3 is set to "one variation of the mixed color of green and cyan (GC1)", with the subframe periods SF1, SF2, and SF4 set to red (R), green (G), and blue (B), respectively. Next, for the (n+5)-th frame period F(n+5), it employs an intermediate pattern 45 in which the color of the subframe period SF3 is set to "another variation of the mixed color of green and cyan (GC2)", with the remaining subframe periods unchanged. Then, for the (n+6)-th frame period F(n+6), it sets the colors of the subframe periods SF1 to SF4 to red (R), green (G), cyan (C), and blue (B), respectively, thereby bringing the pattern 2 into an employed state. FIG. 26 is a diagram illustrating another example of subframe-period lighting colors in each frame period F, in a case of gradually changing the color of the subframe period SF whose color is changed before and after the change in pattern. In this example, for the n-th frame period Fn, the subframe lighting color transition controller 73 follows the pattern 1 and sets the colors of the subframe periods SF1 to SF4 to red (R), yellow (Y), green (G), and blue (B), respectively. Next, for the (n+1)-th frame period F(n+1), it employs an intermediate pattern 46 in which the color of the subframe period SF2 is set to green (G), with the remaining subframe periods unchanged. Then, for the (n+2)-th frame period F(n+2), it sets the colors of the subframe periods SF1 to SF4 to red (R), green (G), cyan (C), and blue (B), respectively, thereby bringing the pattern 2 into an employed state. As described with reference to FIG. 25 and FIG. 26, the number of intermediate patterns is not limited to three (three frame periods) in changing a color from a mixed color other than white (W) to another mixed color other than white (W). FIG. 25 and FIG. 26 each illustrate an example in which m=4, corresponding to the first embodiment; however, the number of intermediate patterns can also be changed in a case where m=5 or m=6, in the same manner as in the case where m=4. The light source in the light source device L is not limited to the first light source 11R, the second light source 11G, and the third light source 11B. The light source device L may also include a light source of a mixed color or another color. In this case, the frame period F includes at least one subframe period SF in which light of a mixed color obtained by combining light of at least two colors from among the light sources included in the light source device L is emitted.
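The ratio-stepped mixed colors of FIG. 25 referenced above can be represented as weighted blends, as in the following sketch. This is illustrative only: YG1 and YG2 are both mixes of yellow and green with Φ>Ψ and Φ<Ψ respectively, but the specific weights below (0.7/0.3 and 0.3/0.7) and the helper names are assumptions, not values given in the patent.

def blend(phi, psi, w_phi):
    # Mixed color of components phi and psi, with weight w_phi on phi.
    return {phi: round(w_phi, 2), psi: round(1.0 - w_phi, 2)}

Y, G = "Y", "G"
ramp = [
    {Y: 1.0},          # pure yellow (pattern 1, subframe period SF2)
    blend(Y, G, 0.7),  # YG1: ratio satisfies Phi (yellow) > Psi (green)
    blend(Y, G, 0.3),  # YG2: ratio satisfies Phi (yellow) < Psi (green)
    {G: 1.0},          # pure green (intermediate pattern 43, SF2)
]

# The yellow weight decreases monotonically across the ramp, which is the
# "gradual change" that the intermediate patterns provide.
weights = [c.get(Y, 0.0) for c in ramp]
assert weights == sorted(weights, reverse=True)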
The display panel P is not limited to the liquid crystal display panel using a polymer-dispersed liquid crystal. The display panel may be any display panel that uses a drive control method to which the FSC method is applicable. For example, the liquid crystal display panel may be a transmissive, transflective, or reflective panel. Other functions and effects brought about by the aspects described in the embodiments and modification described above, which are apparent from the description of the present specification or can be appropriately conceived by those skilled in the art, are naturally understood to be brought about by the present disclosure. | 132,767 |
11862120 | DESCRIPTION OF REFERENCE NUMBERS 100: display device; 101: display panel; 102: backlight source; 1011: array substrate; 1012: color filter substrate; 1013: liquid crystal layer. DETAILED DESCRIPTION OF PREFERRED EMBODIMENTS The preferred embodiments of the present disclosure are described in detail below with reference to the accompanying drawings, so as to fully convey the technical content of the present disclosure to those skilled in the art, to demonstrate that the present disclosure may be implemented, to make the disclosed technical content clearer, and to help those skilled in the art understand how to implement the present disclosure more readily. However, the present disclosure can be embodied in many different forms of embodiments; the protection scope of the present disclosure is not limited to the embodiments mentioned herein, and the description of the following embodiments is not intended to limit that scope. The directional terms mentioned in the present disclosure, such as "up", "down", "front", "rear", "left", "right", "inside", "outside", and "side", refer only to directions in the drawings. They are used to explain and describe the present disclosure, not to limit its protection scope. In the drawings, structurally identical components are denoted by the same numeral, and structurally or functionally similar components are denoted by similar numerals throughout. In addition, for ease of understanding and description, the size and thickness of each component shown in the accompanying drawings are drawn arbitrarily, and the present disclosure does not limit the size and thickness of each component. Embodiment 1 As shown in FIG. 1, the present embodiment provides a display device 100. The display device 100 includes a display panel 101 and a backlight source 102. As shown in FIG. 1, the display panel 101 includes an array substrate 1011, a color filter substrate 1012, and a liquid crystal layer 1013. The array substrate 1011 includes film layers such as a thin film transistor (not shown in the figure). The color filter substrate 1012 is disposed opposite to the array substrate 1011 and includes a black matrix (not shown in the figure), a color filter (not shown in the figure), and other film layers. The liquid crystal layer 1013 is disposed between the array substrate 1011 and the color filter substrate 1012. The backlight source 102 is disposed on a side of the array substrate 1011 away from the color filter substrate 1012. The backlight source 102 can be an edge-type backlight source or a direct-type backlight source.
As shown inFIG.2, the present embodiment also provides a method for adjusting chromaticity of the display device of the present embodiment, which includes following steps: S10: acquiring initial spectrum values of a first sub-pixel, a second sub-pixel, a third sub-pixel, and a backlight source of an initial display device, and calculating an initial transmittance spectrum value of a white screen of a display panel of the initial display device; S20: acquiring initial aperture ratios of the first sub-pixel, the second sub-pixel, and the third sub-pixel of the initial display device, presetting test aperture ratios of the first sub-pixel, the second sub-pixel, and the third sub-pixel of a test display device, and calculating relative variations of the test aperture ratios of the first sub-pixel, the second sub-pixel, and the third sub-pixel relative to the initial aperture ratios of the first sub-pixel, the second sub-pixel, and the third sub-pixel, respectively; S30: calculating a test transmittance spectrum value of a white screen of a display panel of the test display device according to the initial transmittance spectrum value of the white screen and the relative variations of the test aperture ratios relative to the initial aperture ratios; S40: calculating tristimulus values of the test display device according to the test transmittance spectrum value of the white screen of the display panel; and S50: calculating a chromaticity value of a white screen of the test display device according to the tristimulus values. In the present embodiment, in the step S10, the initial spectrum values of the first sub-pixel, the second sub-pixel, and the third sub-pixel of the initial display device at a gray scale of 255 are obtained. In other embodiments, the initial spectrum values of the first sub-pixel, the second sub-pixel, and the third sub-pixel of the initial display device at other gray scales may also be acquired. The first sub-pixel, the second sub-pixel, and the third sub-pixel are respectively one of a red sub-pixel, a green sub-pixel, and a blue sub-pixel. The colors of the first sub-pixel, the second sub-pixel, and the third sub-pixel are different from each other. In the present embodiment, the first sub-pixel, the second sub-pixel, and the third sub-pixel are respectively the red sub-pixel, the green sub-pixel, and the blue sub-pixel. In the step S10, the initial transmittance spectrum value of the white screen of the display panel of the initial display device is equal to dividing a sum of the initial spectrum value of the first sub-pixel, the initial spectrum value of the second sub-pixel, and the initial spectrum value of the third sub-pixel by the initial spectrum value of the backlight source. For example, the initial spectrum value of the first sub-pixel of the initial display device is 0.000383, the initial spectrum value of the second sub-pixel of the initial display device is 0.000028, and the initial spectrum value of the third sub-pixel of the initial display device is 0.000374. The initial spectrum value of the backlight source of the initial display device is 0.000328. At this time, the initial transmittance spectrum value of the white screen of the display panel of the initial display device=(0.000383+0.000028+0.000374)/0.000328=2.393. 
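The step S10 arithmetic quoted above can be checked with a few lines of code. The variable names below are mine; only the numeric values come from the text.

# Worked check of step S10.
spec_r = 0.000383    # initial spectrum value of the first sub-pixel (red)
spec_g = 0.000028    # initial spectrum value of the second sub-pixel (green)
spec_b = 0.000374    # initial spectrum value of the third sub-pixel (blue)
spec_bl = 0.000328   # initial spectrum value of the backlight source

t_initial = (spec_r + spec_g + spec_b) / spec_bl
print(round(t_initial, 3))  # 2.393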
In the step S20, the relative variation of the test aperture ratio of the first sub-pixel relative to the initial aperture ratio of the first sub-pixel is equal to dividing the test aperture ratio of the first sub-pixel by the initial aperture ratio of the first sub-pixel. The relative variation of the test aperture ratio of the second sub-pixel relative to the initial aperture ratio of the second sub-pixel is equal to dividing the test aperture ratio of the second sub-pixel by the initial aperture ratio of the second sub-pixel. The relative variation of the test aperture ratio of the third sub-pixel relative to the initial aperture ratio of the third sub-pixel is equal to dividing the test aperture ratio of the third sub-pixel by the initial aperture ratio of the third sub-pixel. For example, the initial aperture ratio of the first sub-pixel of the initial display device is 60%, the initial aperture ratio of the second sub-pixel of the initial display device is 60%, and the initial aperture ratio of the third sub-pixel of the initial display device is 60%. The test aperture ratio of the first sub-pixel of the test display device is 62%, the test aperture ratio of the second sub-pixel of the test display device is 64%, and the test aperture ratio of the third sub-pixel of the test display device is 66%. The relative variation of the test aperture ratio of the first sub-pixel relative to the initial aperture ratio of the first sub-pixel=62%/60%=103.3%; the relative variation of the test aperture ratio of the second sub-pixel relative to the initial aperture ratio of the second sub-pixel=64%/60%=106.7%; the relative variation of the test aperture ratio of the third sub-pixel relative to the initial aperture ratio of the third sub-pixel=66%/60%=110%. In the step S30, the test transmittance spectrum value of the white screen of the display panel of the test display device is equal to dividing a sum of the initial spectrum value of the first sub-pixel multiplied by the relative variation of the test aperture ratio of the first sub-pixel relative to the initial aperture ratio of the first sub-pixel, the initial spectrum value of the second sub-pixel multiplied by the relative variation of the test aperture ratio of the second sub-pixel relative to the initial aperture ratio of the second sub-pixel, and the initial spectrum value of the third sub-pixel multiplied by the relative variation of the test aperture ratio of the third sub-pixel relative to the initial aperture ratio of the third sub-pixel, by the initial spectrum value of the backlight source. The test transmittance spectrum value of the white screen of the display panel of the test display device=(0.000383*103.3%+0.000028*106.7%+0.000374*110%)/0.000328=2.552. 
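Steps S20 and S30 can be checked the same way with the worked numbers above; again, the variable names are mine and only the values come from the text.

# Worked check of steps S20 and S30.
initial_spectrum = {"R": 0.000383, "G": 0.000028, "B": 0.000374}
backlight_spectrum = 0.000328
initial_aperture = {"R": 0.60, "G": 0.60, "B": 0.60}
test_aperture = {"R": 0.62, "G": 0.64, "B": 0.66}

# S20: relative variation of each test aperture ratio.
variation = {k: test_aperture[k] / initial_aperture[k] for k in initial_aperture}
# S30: test transmittance spectrum value of the white screen.
t_test = sum(initial_spectrum[k] * variation[k] for k in variation) / backlight_spectrum

print({k: round(v, 3) for k, v in variation.items()})  # {'R': 1.033, 'G': 1.067, 'B': 1.1}
print(round(t_test, 3))                                # 2.552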
In the step S40, the tristimulus values comprise a red primary color tristimulus value X, a green primary color tristimulus value Y, and a blue primary color tristimulus value Z, which are calculated according to the following formulas: X = k∫S(λ)·P(λ)·x(λ)dλ, Y = k∫S(λ)·P(λ)·y(λ)dλ, and Z = k∫S(λ)·P(λ)·z(λ)dλ, where the integrals are taken over the wavelength λ, k is a tuning coefficient, S(λ) is the initial spectrum value of the backlight source of the initial display device, P(λ) is the test transmittance spectrum value of the white screen of the display panel of the test display device, x(λ), y(λ), and z(λ) are respectively the three spectral tristimulus values (color-matching functions) of the Standard Observer, and dλ represents the differential of the wavelength. In the step S50, the chromaticity value of the white screen of the test display device comprises a horizontal coordinate Wx and a vertical coordinate Wy. The horizontal coordinate Wx is calculated according to the formula Wx = X/(X+Y+Z), and the vertical coordinate Wy is calculated according to the formula Wy = Y/(X+Y+Z). The method for calculating the chromaticity value of the white screen of the display device of the present embodiment can calculate the corresponding chromaticity value of the white screen when the aperture ratios of the first sub-pixel, the second sub-pixel, and the third sub-pixel of the display device are adjusted. The aperture ratios of the first sub-pixel, the second sub-pixel, and the third sub-pixel can then be adjusted so as to meet user demand for the chromaticity of the white screen, ameliorate the color cast problem of current display devices, and improve the display effect of the display device. Further, the method for calculating the chromaticity value of the white screen of the display device provided by the present application has been described in detail above. Specific examples are used herein to describe the principle and implementation of the present application; the description is only intended to help in understanding the method of the present application and its core idea. Meanwhile, for those skilled in the art, there will be changes in specific embodiments and application scopes according to the spirit of the present application. In summary, the content of the present application should not be construed to limit the present application. | 11,392 |
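A minimal numerical sketch of steps S40 and S50 follows, assuming placeholder spectra: the Gaussian shapes below stand in for S(λ), P(λ), and the Standard Observer functions and are not real measured data or the actual CIE tables; only the integral and ratio formulas mirror the text.

import numpy as np

wl = np.arange(380.0, 781.0, 5.0)   # wavelength grid in nm

def gauss(mu, sigma):
    return np.exp(-0.5 * ((wl - mu) / sigma) ** 2)

# Placeholder shapes only -- not measured data and not the real CIE tables.
S = gauss(450, 20) + 0.8 * gauss(550, 60)             # backlight spectrum S(lambda)
P = 0.5 + 0.4 * gauss(550, 80)                        # white-screen transmittance P(lambda)
xbar = 1.06 * gauss(599, 38) + 0.36 * gauss(442, 16)  # rough stand-ins for the
ybar = 1.01 * gauss(556, 47)                          # Standard Observer functions
zbar = 1.84 * gauss(446, 23)

k = 100.0 / np.trapz(S * ybar, wl)  # tuning coefficient (here: normalizes Y of the source)
X = k * np.trapz(S * P * xbar, wl)  # step S40
Y = k * np.trapz(S * P * ybar, wl)
Z = k * np.trapz(S * P * zbar, wl)

Wx = X / (X + Y + Z)                # step S50
Wy = Y / (X + Y + Z)
print(round(Wx, 4), round(Wy, 4))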
11862121 | DESCRIPTION OF EXEMPLARY EMBODIMENTS An embodiment of the present disclosure is described below with reference to the accompanying drawings. In the following drawings, the members are not drawn to scale to illustrate the members in recognizable sizes. In addition, in the following drawings, XYZ axes are illustrated as mutually orthogonal coordinate axes as necessary. In the drawings, the directions pointed by the arrows along the axes are + directions, and the directions opposite to the + directions are − directions. Note that the +Z direction and the −Z direction may be referred to as upper side and lower side, respectively, and the view in the +Z direction is referred to as plan view or planar. Further, in the following description, a phrase “on a substrate” for the substrate means placement on the substrate in contact with the substrate, placement on the substrate with another structure therebetween, or placement on the substrate partially with another structure therebetween, for example. 1. Embodiment 1 FIG.1is a plan view of a liquid crystal apparatus as viewed from an opposed substrate side. In this embodiment, as a liquid crystal apparatus100, the liquid crystal apparatus100of an active drive type including a thin film transistor (TFT) serving as a pixel switching element for each pixel is described as an example. The liquid crystal apparatus100can be favorably used as a light modulation apparatus in a projection-type display apparatus1000serving as an electronic apparatus described later, for example. 1.1. Overview of Liquid Crystal Apparatus As illustrated inFIG.1, the liquid crystal apparatus100includes an element substrate10and an opposed substrate20indicated by the broken line L1. Note that the configurations indicated by the solid line inside the outer edge of the opposed substrate20indicated by the broken line L1are configurations provided between the opposed substrate20and the element substrate10. A sealing material60is provided in a frame shape along the outer edge of the opposed substrate20. The sealing material60is an adhesive agent composed of a light curing resin, a thermosetting resin or the like, and contains a gap material, such as glass fibers and glass beads, for setting the distance between the substrates to a predetermined value. The region surrounded by the broken line L2is a display region E, in which pixels P are disposed in a matrix. The region between the display region E and the sealing material60is a light-shielded peripheral region F. A frame-shaped first electrode108is disposed at the innermost periphery of the peripheral region F. Specifically, the frame-shaped first electrode108is provided in the light blocking region at the outer periphery of the display region E. The first electrode108is electrically connected to an external terminal104, and an AC signal is supplied from the outside. The AC signal generates at the first electrode108an electric field for attracting and holding ionic impurities from the display region E, and pushing the ionic impurities attracted to the first electrode108away to the display region E. In the light-shielded peripheral region F, a scan line driving circuit not illustrated in the drawing is disposed. At a portion of the element substrate10protruded from the opposed substrate20on the lower side in drawing outside the sealing material60, a data line driving circuit101and a plurality of the external terminals104are disposed. 
An inter-substrate conduction part106for electrical conduction between the element substrate10and the opposed substrate20is disposed at each corner of the opposed substrate20. A liquid crystal layer50described later is disposed between the element substrate10and the opposed substrate20in the display region E, a first alignment layer18is disposed at the surface of the element substrate10on the liquid crystal layer50side, and a second alignment layer22is disposed at the surface of the opposed substrate20on the liquid crystal layer50side. The first alignment layer18and the second alignment layer22include an oblique vapor deposition layer formed by an oblique vapor deposition method using inorganic materials such as silicon oxide, aluminum oxide and magnesium oxide. The oblique vapor deposition direction of the first alignment layer18is the direction indicated by arrow Y1that intersects the Y direction at a predetermined orientation angle from the upper right to the lower left in the drawing on the element substrate10side, and is the direction indicated by the arrow Y2that intersects the Y direction at a predetermined orientation angle from the lower left to the upper right in the drawing on the opposed substrate20side. The predetermined angle is 45 degrees, for example. Note that the oblique vapor deposition direction illustrated in the drawing is a direction when the liquid crystal apparatus100is viewed from the opposed substrate20side. In addition, the predetermined orientation angle is not limited to the orientation angle illustrated in the drawing. When an electric field is applied to the liquid crystal layer50, a liquid crystal molecules50adescribed later behave or vibrate, and the flow in the oblique directions indicated by the arrows Y1and Y2is generated in the liquid crystal layer50. When the liquid crystal layer50contains ionic impurities of positive polarity or negative polarity, the ionic impurities move toward corner portions C1and C2of the display region E along the flow in the liquid crystal layer50and aggregate around the corner portions C1and C2. A possible reason that the ionic impurities aggregate at the corner portions C1and C2of the display region E is that the mobility of the ionic impurities in the light-shielded peripheral region F is lower than the mobility in the display region E irradiated with light and as a result the ionic impurities tend to stay at the corner portions C1and C2in front of the peripheral region F. When the insulation resistance of the liquid crystal layer50is reduced at the pixels P located at the corner portions C1and C2due to the ionic impurities aggregated at the corner portion C1and the corner portion C2, the driving potential is reduced at the pixels P, which is observed as unevenness in the display. 1.2. Overview of Cross-Sectional Configuration of Liquid Crystal Apparatus FIG.2is a sectional view illustrating a schematic configuration of the liquid crystal apparatus taken along line H-H′ ofFIG.1. Note thatFIG.2further illustrates an outline of a configuration of the projection-type display apparatus1000using the liquid crystal apparatus100. As illustrated inFIG.2, the liquid crystal apparatus100includes the liquid crystal layer50between the element substrate and the opposed substrate20, with the element substrate10and the opposed substrate20bonded to each other with the sealing material at the outer edge. 
The element substrate 10 includes, between its substrate 10w and the liquid crystal layer 50, an optically transparent pixel electrode 9a provided for each pixel P, the first electrode 108, and the first alignment layer 18 disposed to cover the pixel electrode 9a and the first electrode 108. The opposed substrate 20 includes, between its substrate 20w and the liquid crystal layer 50, a parting line 24, an insulation layer 25, an opposing electrode 21, and the second alignment layer 22 disposed to cover the opposing electrode 21. The parting line 24 is a light shield layer composed of a light-shielding material that surrounds the display region E in the peripheral region F, and is provided at a position overlapping, in plan view, circuits such as an inspection circuit and a scan line driving circuit disposed in the peripheral region F. The parting line 24 shields these circuits from the light L entering from the opposed substrate 20 side, and prevents erroneous operations of the circuits. In addition, the parting line 24 blocks unnecessary stray light from entering the display region E, and ensures high contrast in the display of the display region E. The pixel electrode 9a and the opposing electrode 21 are formed of a transparent conductive material such as indium tin oxide (ITO), for example. Each of the substrate 10w and the substrate 20w is an optically transparent substrate; a glass substrate or a quartz substrate is used, for example. The liquid crystal layer 50 is composed of liquid crystals with negative dielectric anisotropy, for example. The liquid crystal molecules 50a with negative dielectric anisotropy are substantially vertically aligned (VA: Vertical Alignment) at a predetermined pre-tilt angle with respect to the normal direction of the alignment layer surfaces of the first alignment layer 18 and the second alignment layer 22. The projection-type display apparatus 1000 includes a laser light source 1001, an incident side polarization plate 1002, the liquid crystal apparatus 100, an emission side polarization plate 1003, and a projection lens 1004. The laser light source 1001 is a high-light-flux, high-luminance light source whose output light flux is 5000 lumen to 20000 lumen. By using the laser light source 1001 having such a large light emission region, the radiation angle of the light L emitted from the laser light source 1001 can be reduced. Thus, the display region E of the liquid crystal apparatus 100 can be irradiated with the light L without using a condenser lens, a rod integrator, or the like between the laser light source 1001 and the liquid crystal apparatus 100. In addition, a projection lens with a small diameter and a large F-number may be used as the projection lens 1004. Thus, downsizing of the projection-type display apparatus 1000 can be achieved. Note that in the case where the light L emitted from the laser light source 1001 is linearly polarized light, the incident side polarization plate 1002 need not be provided. For example, in the case where a photonic crystal laser including a photonic crystal structure is used as the laser light source 1001, linearly polarized light can be output. The liquid crystal apparatus 100 may be a monochromatic panel or a color panel. 1.3. Overview of Pixel Circuit FIG. 3 is an equivalent circuit diagram illustrating an electrical configuration of the display region E. In the display region E of the liquid crystal apparatus 100, a scan line 3, a data line 6, and a capacitance line 8 are disposed.
The pixels P are located at the intersections of the scan line3and the data line6. The pixel P includes the pixel electrode9a, a thin film transistor (TFT)30and a capacitive element16. One electrode of the capacitive element16is electrically connected to the pixel electrode9a, and the other electrode is electrically connected to the capacitance line8. The gate electrode of the TFT30is electrically connected to the scan line3, the source electrode is electrically connected to the data line6, and the drain electrode is connected to the pixel electrode9a. A scanning signal from a scan line driving circuit is supplied to a plurality of the scan lines3in a predetermined order. A plurality of the pixels P electrically connected to the same scan line3are controlled to be turned on or off at the same time with the same scanning signal. An image signal is supplied to a plurality of the data lines6from the data line driving circuit101in a predetermined order, and the image signal is supplied to the pixel electrode9aof the pixel P selected by the scanning signal. 1.4. Overview of Peripheral Region FIG.4is a sectional view taken along line J-J′ ofFIG.1, and illustrates a configuration of the peripheral region F. Note that in the drawing, configurations of some parts such as the scan line driving circuit are omitted. As illustrated in the drawing, the first electrode108is disposed on the same layer as the pixel electrode9aand a dummy pixel electrode9b, and is formed with the same material as the pixel electrode9a. The first alignment layer18includes a first vapor deposition film18a, and a second vapor deposition film18bdisposed between the first vapor deposition film18aand the liquid crystal layer50. The first vapor deposition film18acoves the pixel electrode9aof the display region E and the dummy pixel electrode9band the first electrode108of the peripheral region F, and is disposed above them. The second alignment layer22includes a third vapor deposition film22a, and a fourth vapor deposition film22bdisposed between the third vapor deposition film22aand the liquid crystal layer50. The first vapor deposition film18ais formed by a vapor deposition method from the direction along the normal of the plane of the element substrate10, and includes a plurality of columns whose longitudinal axial direction is aligned along the Z axis. The column is a columnar crystalline form made of an inorganic material such as silicon oxide. The third vapor deposition film22ais formed by a vapor deposition method from the direction along the normal of the plane of the opposed substrate20, and includes a plurality of columns whose longitudinal axial direction is aligned with the Z axis as with the first vapor deposition film18a. The second vapor deposition film18bis provided to cover the +Z direction side of the first vapor deposition film18a. The thickness of the second vapor deposition film18b, i.e., the distance in the direction along the Z axis is smaller than the thickness of the first vapor deposition film18a. The second vapor deposition film18bincludes a plurality of columns whose longitudinal axial direction intersects the normal direction of the plane of the element substrate10, at an angle α. The column is formed by an oblique vapor deposition method. More specifically, the column of the second vapor deposition film18bis formed through oblique vapor deposition of an inorganic material such as silicon oxide from the direction along the direction of the angle α. 
The fourth vapor deposition film22bis disposed to cover the −Z direction side of the third vapor deposition film22a. The thickness of the fourth vapor deposition film22b, i.e., the distance in the direction along the Z axis is smaller than the thickness of the third vapor deposition film22a. The fourth vapor deposition film22bincludes a plurality of columns whose longitudinal axial direction intersects the normal direction of the plane of the opposed substrate20, at an angle β. The column is formed by an oblique vapor deposition method. More specifically, the column of the fourth vapor deposition film22bis formed through oblique vapor deposition of an inorganic material such as silicon oxide from the direction along the direction of the angle β. Note that the angle β and the angle α may be equal to each other. Note that while the pre-tilt angle of the liquid crystal molecules50ais not necessarily identical to the inclination angle α of the column of the second vapor deposition film18band the inclination angle β of the column of the fourth vapor deposition film22b, the pre-tilt angle of the liquid crystal molecules50acan be controlled at a desired angle by controlling the inclination angle α of the column of the second vapor deposition film18band the inclination angle β of the column of the fourth vapor deposition film22b. A surface treatment using a silane coupling agent is provided on the surfaces of the first alignment layer18and the second alignment layer22. More specifically, at the surfaces of the second vapor deposition film18bof the element substrate10and the fourth vapor deposition film22bof the opposed substrate20, an organo polysiloxane film is formed by using a silane coupling agent. The silane coupling agent dehydration-condenses with the silanol groups bonded to the silicon oxide of the second vapor deposition film18band the fourth vapor deposition film22b. In this manner, an organo polysiloxane film with oriented hydrophobic groups is formed at the interface with the liquid crystal layer50. This surface treatment increases the contact angle with respect to water of the surfaces of the second vapor deposition film18band the fourth vapor deposition film22b, and can improve the light resisting property of the liquid crystal apparatus100. Note that publicly known methods may be employed for the method of the surface treatment using silane coupling agent. A light shield layer19is disposed at the element substrate10. As with the parting line24, the light shield layer19is disposed to overlap the first electrode108in plan view. The light shield layer19prevents light reflected by the emission side polarization plate1003or the like from entering the peripheral region F, and suppresses entry of unnecessary stray light into the display region E. 1.5. Overview of Voltage Waveform Supplied to First Electrode FIG.5is a waveform diagram of an analog voltage signal supplied to a pixel electrode and a first electrode. A signal waveform A represents a voltage waveform of a gradation signal supplied to the pixel electrode9a. The signal waveform A includes a positive polarity period S1and a negative polarity period S2, and has a waveform in which the positive polarity period S1and the negative polarity period S2alternately appear. 
The positive polarity period S1is a period in which a positive gradation potential with a high potential with respect to a common potential Vcom is supplied, and the negative polarity period S2is a period in which a negative gradation potential with a low potential with respect to the common potential Vcom, which is a predetermined potential supplied to the opposing electrode21, is supplied. In the positive polarity period S1, a positive gradation potential corresponding to the gradation information of the image signal is supplied to the pixel electrode9a, and the pixel electrode9ais set to the positive gradation potential. A positive gradation potential Vs_H is a gradation potential corresponding to white gradation in a normally black system. In the negative polarity period S2, a negative gradation potential corresponding to the gradation information of the image signal is supplied to the pixel electrode9a, and the pixel electrode9ais set to the negative gradation potential. A negative gradation potential Vs_L is a gradation potential corresponding to a white gradation in a normally black system. A positive gradation potential or a negative gradation potential is supplied to the pixel electrode9aat a first frequency that is the refresh rate of the pixel electrode9a, and the pixel electrode9ais set to the positive gradation potential or the negative gradation potential. In this embodiment, the first frequency is 240 Hz, and a refresh period R1based on the first frequency is approximately 4.2 ms. Thus, the pixel electrode9ais rewritten to a positive gradation potential or a negative gradation potential for each refresh period R1. The refresh period R1of the pixel electrode9ahas the same length for the positive polarity period S1and the negative polarity period S2. Note that in the case where an interval period is provided between the positive polarity period S1and the negative polarity period S2, the sum of the negative polarity period S2or the positive polarity period S1and the interval period has the same length as the refresh period R1. A signal waveform B represents a voltage waveform supplied to the first electrode108. The signal waveform B is a waveform including a positive polarity period T1and a negative polarity period T2, in which the positive polarity period T1and the negative polarity period T2alternately appear. The positive polarity period T1is a period in which a positive polarity potential Va with a high potential with respect to the common potential Vcom is supplied, and the negative polarity period T2is a period in which a negative polarity potential Vb with a low potential with respect to the common potential Vcom is supplied. In the positive polarity period T1, a positive polarity potential Va is supplied to the first electrode108, and the potential of the first electrode108is set to the positive polarity potential Va. In addition, in the negative polarity period T2, the negative polarity potential Vb is supplied to the first electrode108, and the potential of the first electrode108is set to the negative polarity potential Vb. Note that the positive polarity potential Va is preferably set to a potential higher than the common potential Vcom by approximately 1.5 V. In addition, the negative polarity potential Vb is preferably set to a potential lower than the common potential Vcom by approximately 1.5 V. 
A reason for this is that bubbles may be generated at the first electrode108when the potential difference between the common potential Vcom and either the positive polarity potential Va or the negative polarity potential Vb exceeds 3 V. The positive polarity potential Va or the negative polarity potential Vb is supplied to the first electrode108at a second frequency that is the refresh rate of the first electrode108, and the first electrode108is set to the positive polarity potential Va or the negative polarity potential Vb. In this embodiment, the second frequency is 0.1 Hz, and a refresh period R2based on the second frequency is 10 s. Thus, the first electrode108is rewritten to the positive polarity potential Va or the negative polarity potential Vb for each refresh period R2. The refresh period R2of the first electrode108is longer than the refresh period R1of the pixel electrode9a, and preferably, the refresh period R2of the first electrode108is 100 times to 100000 times the refresh period R1of the pixel electrode9a. More specifically, in the case where the refresh period R1is approximately 4.2 ms, the refresh period R2is preferably approximately 420 ms to 420 s. The refresh period R2of the first electrode108is the same for the positive polarity period T1and the negative polarity period T2. Note that in the case where an interval period is provided between the positive polarity period T1and the negative polarity period T2, the sum of the positive polarity period T1or the negative polarity period T2and the interval period has the same length as the refresh period R2. The present inventors have confirmed that with the refresh period R2of the first electrode108longer than the refresh period R1of the pixel electrode9aand the positive polarity period T1and the negative polarity period T2of the first electrode108having the same length, generation of blemish in the display region E and generation of display unevenness due to burn-in can be suppressed. Furthermore, it is also confirmed that with the refresh period R2of the first electrode108set to 100 times to 100000 times the refresh period R1of the pixel electrode9a, generation of the display unevenness can be more effectively suppressed. As described above, with the liquid crystal apparatus100of this embodiment or the projection-type display apparatus1000as an electronic apparatus including the liquid crystal apparatus100, the following effects can be achieved. The liquid crystal apparatus100includes the pair of substrates10and20opposite to each other with the liquid crystal layer50therebetween, the pixel electrode9aprovided in the display region E of the pair of substrates10and20and configured to be supplied with an image signal at the first frequency, and the first electrode108provided in the peripheral region F as a region outside the display region E and configured to be alternately supplied with the positive polarity potential Va with a potential higher than the predetermined potential and the negative polarity potential Vb with a potential lower than the predetermined potential at the second frequency lower than the first frequency such that the positive polarity period T1for setting a positive polarity potential and the negative polarity period T2for setting a negative polarity potential have the same length.
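Before turning to the effects of this configuration, the following Python sketch (not part of the patent) restates the numeric relationships above: the refresh periods implied by the 240 Hz and 0.1 Hz rates, the preferred 100-times to 100000-times ratio between R2 and R1, and the roughly 3 V limit on the potential difference from Vcom. The function name and return format are illustrative.

    # Numeric check of the drive conditions stated above. The ratio range
    # and the ~3 V limit come from the text; the rest is a sketch.

    def check_drive(r1_s, r2_s, va, vb, vcom=0.0, v_limit=3.0):
        ratio = r2_s / r1_s
        ratio_ok = 100 <= ratio <= 100_000          # preferred R2/R1 range
        volts_ok = abs(va - vcom) <= v_limit and abs(vb - vcom) <= v_limit
        return ratio, ratio_ok, volts_ok

    r1 = 1 / 240        # first frequency 240 Hz -> R1 of about 4.2 ms
    r2 = 1 / 0.1        # second frequency 0.1 Hz -> R2 of 10 s
    print(check_drive(r1, r2, va=1.5, vb=-1.5))    # (2400.0, True, True)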
With this configuration, the positive polarity potential and the negative polarity potential are alternately supplied for the same length of time to the first electrode108provided in the peripheral region F as a region outside the display region E at the second frequency that is a refresh rate lower than the first frequency that is the refresh rate of the pixel electrode9a, and thus ionic impurities can be attracted and held at the peripheral region F. In this manner, generation of burn-in can be suppressed while suppressing generation of blemish in the display region E. Thus, even in the case where a high-luminance light source is used, the liquid crystal apparatus100with excellent display quality can be provided. The second frequency that is the refresh rate of the first electrode108is set to one hundred-thousandth to one-hundredth of the first frequency. With this configuration, in comparison with the case with other frequencies, the effect of suppressing generation of blemish in the display region E can be increased. Light from the light source1001with an output light flux of 5000 lumen to 20000 lumen is incident on the liquid crystal layer50. With this configuration, in the case where the laser light source1001that is a high-luminance light source with an output light flux of 5000 lumen to 20000 lumen is employed as the light source combined with the liquid crystal apparatus100, generation of blemish in the display region E that tends to be generated due to irradiation with the high-light flux light L from the laser light source1001can be suppressed. Specifically, when the high-light flux light L from the laser light source1001is incident on the liquid crystal layer50of the liquid crystal apparatus100, the mobility of ionic impurities in the liquid crystal layer50increases and the ionic impurities easily move in the liquid crystal layer50. Then, if the driving frequency of the peripheral electrode is set to a value higher than the driving frequency of the pixel electrode as in a known configuration in the case where the mobility of ionic impurities is increased, the ionic impurities are more largely affected than in the known configuration by the electric field inverted in a short cycle, and repelled before being attracted to the peripheral region. Consequently, the ionic impurities stay in the display region, making it difficult to attract and hold the ionic impurities in the peripheral region. Note that it can be considered that in the case where the mobility of ionic impurities is low as in the known configuration, the ionic impurities can be attracted to the peripheral region by the flow in the liquid crystal layer generated by setting the driving frequency of the peripheral electrode to a value higher than the driving frequency of the pixel electrode. However, in the case where the positive polarity potential Va and the negative polarity potential Vb are supplied to the first electrode108at the second frequency lower than the first frequency that is the refresh rate of the pixel electrode9aas in this embodiment, the inversion cycle of the polarity of the electric field of the first electrode108is lengthened. Then, when the inversion cycle of the polarity of the electric field of the first electrode108is lengthened, the time to attract the ionic impurities to the peripheral region F is also lengthened, and thus the ionic impurities can be attracted to the peripheral region F, while lengthening the time to hold the ionic impurities at the first electrode108.
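The attract-and-hold argument above can be made concrete with a rough drift estimate. Assuming, purely for illustration (neither the model nor the numbers are from the patent), a constant lateral field E and an ionic mobility mu, an ion travels about mu*E*(T/2) during one polarity half-cycle of length T/2, so a longer inversion cycle gives the impurities far more time to reach the peripheral region F:

    # Back-of-the-envelope drift distances for the two refresh periods.
    # Mobility and field values are hypothetical placeholders.

    MU = 1e-9       # m^2/(V*s), assumed ionic mobility in the liquid crystal
    E_FIELD = 1e5   # V/m, assumed lateral field toward the first electrode

    def half_cycle_drift(period_s):
        return MU * E_FIELD * (period_s / 2.0)   # meters

    for label, period in [("pixel-rate drive, R1 ~ 4.2 ms", 1 / 240),
                          ("first-electrode drive, R2 = 10 s", 10.0)]:
        print(f"{label}: drift ~ {half_cycle_drift(period) * 1e6:.2f} um")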
Thus, generation of blemish in the display region E can be suppressed even when the mobility of the ionic impurities of the liquid crystal layer50is increased by using the laser light source1001. Furthermore, since the positive polarity period T1for setting to the positive polarity potential Va and the negative polarity period T2for setting to the negative polarity potential Vb at the first electrode108have the same length, generation of burn-in due to application of a DC signal to the first electrode108can also be suppressed. The projection-type display apparatus1000includes the liquid crystal apparatus100. With this configuration, an excellent projection-type display apparatus1000including the liquid crystal apparatus100that can suppress generation of blemish in the display region E can be provided. The projection-type display apparatus1000includes the laser light source1001with an output light flux of 5000 lumen to 20000 lumen, and the liquid crystal apparatus100that modulates the light L from the laser light source1001. With this configuration, it is possible to provide an excellent projection-type display apparatus1000including the liquid crystal apparatus100that can suppress generation of blemish in the display region E in the case where the laser light source1001that is a high-luminance light source with an output light flux of 5000 lumen to 20000 lumen is employed as the light source combined with the liquid crystal apparatus100. 2. Embodiment 2 FIG.6is a plan view of a liquid crystal apparatus as viewed from an opposed substrate side. A liquid crystal apparatus200of this embodiment is different from the liquid crystal apparatus100of Embodiment 1 in that a second electrode109is provided. Note that in the following description, the same configurations as those of Embodiment 1 are denoted with the same reference numerals, and overlapping descriptions will be omitted. As illustrated in the drawing, the liquid crystal apparatus200includes the second electrode109disposed between the first electrode108and the sealing material60in plan view. As with the first electrode108, the second electrode109is disposed on the same layer as the pixel electrode9aand the dummy pixel electrode9b, and formed with the same material as the pixel electrode9a. The second electrode109is electrically connected to the external terminal104, and is supplied with a DC signal for generating the electric field for holding ionic impurities from the outside. The DC signal is a signal of positive polarity with a potential higher than the common potential Vcom, and generates an electric field for holding, at the second electrode109, the negative ionic impurities attracted by the first electrode108in the second electrode109. The potential supplied to the second electrode109is the same potential as the positive polarity potential Va supplied to the first electrode108. Alternatively, the effect of the second electrode109for holding negative ionic impurities at the second electrode109may be increased by setting the potential supplied to the second electrode109to a potential higher than the positive polarity potential Va supplied to the first electrode108. As described above, according to the liquid crystal apparatus200of this embodiment, the following effects can be achieved in addition to the effects of Embodiment 1. The liquid crystal apparatus200includes the second electrode109provided outside the first electrode108in the peripheral region F as a region outside the display region E, and is supplied with a DC signal. 
With this configuration, ionic impurities can be more reliably held in the peripheral region F with the second electrode109, and the effect of suppressing generation of blemish in the display region E can be increased. Thus, even in the case where a high-luminance light source is used, the liquid crystal apparatus200with excellent display quality can be provided. The DC signal is a signal of positive polarity with a potential higher than a predetermined potential. With this configuration, negative ionic impurities, which are considered to be a main cause of blemish, can be more reliably held in the peripheral region F, and thus the effect of suppressing generation of blemish in the display region E can be increased. 3. Embodiment 3 3.1. Overview of Electronic Apparatus FIG.7is a schematic configuration diagram illustrating a configuration of a projection-type display apparatus serving as an electronic apparatus according to this embodiment. As illustrated in the drawing, a projection-type display apparatus2000includes a laser light source2001, dichroic mirrors2011and2012serving as light splitting members, three liquid crystal apparatuses,100B,100G and100R, each of which is the liquid crystal apparatus100, three reflection mirrors2111,2112and2113, three relay lenses2121,2122and2123, a dichroic prism2130serving as a color synthesis optical system, and a projection lens2140serving as a projection optical system. The laser light source2001is a high-light flux and high-luminance light source whose output light flux is 5000 lumen to 20000 lumen. As the laser light source2001, a surface-emission type semiconductor laser having a light emission region with an area equal to or greater than that of the display region E may be employed, for example. The light L emitted from the laser light source2001is separated by the two dichroic mirrors2011and2012into color light of three colors of respective different wavelength ranges. The color light of three colors is substantially red light, which is light in the wavelength range including the red wavelength band, substantially green light, which is light in the wavelength range including the green wavelength band, and substantially blue light, which is light in the wavelength range including the blue wavelength band. In the following description, the above-mentioned substantially red light, substantially green light and substantially blue light are also referred to as red light R, green light G, and blue light B, respectively. The dichroic mirror2011transmits the red light R, and reflects the green light G and blue light B with shorter wavelengths than the red light R. The red light R transmitted through the dichroic mirror2011is reflected by the reflection mirror2111, and enters the liquid crystal apparatus100R. The green light G reflected by the dichroic mirror2011enters the liquid crystal apparatus100G after being reflected by the dichroic mirror2012. The blue light B reflected by the dichroic mirror2011is transmitted through the dichroic mirror2012and is emitted to a relay lens system2120. The relay lens system2120includes relay lenses2121,2122and2123and reflection mirrors2112and2113. The blue light B has a light path longer than that of the green light G and the red light R, and its light flux tends to expand. In view of this, expansion of the light flux is suppressed by using the relay lens2122.
The blue light B incident on the relay lens system2120is reflected by the reflection mirror2112and converged by the relay lens2121in the vicinity of the relay lens2122. Then, the blue light B enters the liquid crystal apparatus100B through the reflection mirror2113and the relay lens2123. The liquid crystal apparatus100according to Embodiment 1 is applied to the liquid crystal apparatuses100R,100G and100B serving as light modulation apparatuses in the projection-type display apparatus2000. In addition, the liquid crystal apparatus200according to Embodiment 2 may be applied to the liquid crystal apparatuses100R,100G and100B serving as light modulation apparatuses. Alternatively, the liquid crystal apparatus200may be applied only to the liquid crystal apparatus100B, or only to the liquid crystal apparatuses100G and100B. Each of the liquid crystal apparatuses100R,100G and100B is electrically connected to a higher-level circuit of the projection-type display apparatus2000. In this manner, image signals for setting the gradation levels of the red light R, the green light G and the blue light B are supplied from an external circuit, and processed in the higher-level circuit. In this manner, the liquid crystal apparatuses100R,100G and100B are driven, and the light of respective colors is modulated. The red light R, the green light G and the blue light B modulated by the liquid crystal apparatuses100R,100G and100B are incident on the dichroic prism2130from the three directions. The dichroic prism2130synthesizes the incident red light R, green light G and blue light B. The dichroic prism2130reflects the red light R and the blue light B at 90 degrees, and transmits the green light G. Thus, the red light R, the green light G and the blue light B are synthesized as display light for displaying a color image, and emitted toward the projection lens2140. The projection lens2140is disposed to face the outside of the projection-type display apparatus2000. The display light is emitted in an enlarged manner through the projection lens2140, and is projected on a screen2200serving as a projection object. While the light L from the laser light source2001is split by the dichroic mirrors2011and2012into the color light of three colors of respective different wavelength ranges and applied to the liquid crystal apparatuses100R,100G and100B in the above-mentioned embodiment, the configuration of the light source is not limited to this, and the laser light source1001may be disposed in each of the liquid crystal apparatuses100R,100G and100B. 3.2. Overview of Voltage Waveform Supplied to First Electrode FIG.8is a waveform diagram of an analog voltage signal supplied to a pixel electrode and a first electrode. In the following description, the same configurations as those of the above-described embodiment are denoted with the same reference numerals, and overlapping descriptions will be omitted. The signal waveform A represents a voltage waveform of a gradation signal supplied to the pixel electrode9aas with the signal waveform A ofFIG.5. The signal waveform B represents the voltage waveform supplied to the first electrode108of the liquid crystal apparatus100G into which the green light G is entered, and the signal waveform C represents the voltage waveform supplied to the first electrode108of the liquid crystal apparatus100B into which the blue light B is entered. 
The signal waveform B is a voltage waveform in which the positive polarity period T1and the negative polarity period T2with the same length as the positive polarity period T1are alternately repeated for each refresh period R2as with the signal waveform B ofFIG.5. The signal waveform C is a waveform including a positive polarity period T3and a negative polarity period T4, in which the positive polarity period T3and the negative polarity period T4alternately appear. The length of the positive polarity period T3is longer than the negative polarity period T4, and more specifically, the ratio of the length of the positive polarity period T3to the length of the negative polarity period T4is preferably 3:1. The positive polarity period T3is a period in which the positive potential Va with a high potential with respect to the common potential Vcom is supplied as in the positive polarity period T1, and the negative polarity period T4is a period in which the negative polarity potential Vb with a low potential with respect to the common potential Vcom is supplied as in the negative polarity period T2. In the positive polarity period T3, the positive polarity potential Va is supplied to the first electrode108of the liquid crystal apparatus100B, and the potential of the first electrode108is set to the positive polarity potential Va. In the negative polarity period T4, the negative polarity potential Vb is supplied to the first electrode108of the liquid crystal apparatus100B, and the potential of the first electrode108is set to the negative polarity potential Vb. In this embodiment, the contact angle with respect to water at the surfaces of the first alignment layer18and the second alignment layer22of the liquid crystal apparatus100B is greater than the contact angle with respect to water at the surfaces of the first alignment layer18and the second alignment layer22of the liquid crystal apparatus100G. More specifically, the contact angle of the liquid crystal apparatus100G is smaller than 50°, and the contact angle of the liquid crystal apparatus100B is 50° or greater, or preferably, 60° to 90°. A refresh period R3and a refresh period R4of the first electrode108of the liquid crystal apparatus100B are longer than the refresh period R1of the pixel electrode9a. Note that as illustrated in the drawing, the refresh period R3and the refresh period R4of the first electrode108of the liquid crystal apparatus100B have different lengths. Thus, a third frequency that is the refresh rate of the first electrode108of the liquid crystal apparatus100B contains two types of frequencies, namely a third a-frequency that is the inverse of the refresh period R3and a third b-frequency that is the inverse of the refresh period R4, but each of the third a-frequency and the third b-frequency is a frequency lower than the first frequency that is the refresh rate of the pixel electrode9a. As described above, with the projection-type display apparatus2000including the liquid crystal apparatus100that is the electronic apparatus of this embodiment, the following effects can be achieved. 
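Before listing those effects, here is a short Python sketch (not from the patent) of the asymmetric signal waveform C just described. Only the alternation pattern and the preferred 3:1 ratio of the positive polarity period T3 to the negative polarity period T4 come from the text; the absolute durations and potentials are placeholders.

    # Toy generator for signal waveform C: periods T3 and T4 alternate
    # with a 3:1 length ratio.

    VA, VB = 1.5, -1.5   # positive/negative polarity potentials (assumed)
    T4 = 10.0            # seconds, negative polarity period (assumed)
    T3 = 3.0 * T4        # 3:1 ratio from the text

    def waveform_c(t):
        # potential on the first electrode of apparatus 100B at time t
        return VA if t % (T3 + T4) < T3 else VB

    print([waveform_c(t) for t in range(0, 80, 10)])
    # three positive samples, then one negative, repeating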
The projection-type display apparatus2000includes the laser light source2001that is a light source with an output light flux of 5000 lumen to 20000 lumen, the dichroic mirror2012that is a light splitting member that splits light from the laser light source2001, the liquid crystal apparatus100G that modulates first light G split by the dichroic mirror2012, and the second liquid crystal apparatus100B that modulates second light B split by the dichroic mirror2012. The second liquid crystal apparatus100B includes the pair of substrates10and20opposite to each other with the liquid crystal layer50therebetween, the pixel electrode9aprovided in the display region E of the pair of substrates10and20and configured to be supplied with an image signal at the first frequency, and the first electrode108of the liquid crystal apparatus100B provided in the peripheral region F as a region outside the display region E and alternately supplied with the positive polarity potential Va with a potential higher than the predetermined potential and the negative polarity potential Vb with a potential lower than the predetermined potential at the third frequency lower than the first frequency such that the positive polarity period T3for setting the positive polarity potential Va and the negative polarity period T4for setting the negative polarity potential Vb have different lengths. With this configuration, it is possible to provide an excellent projection-type display apparatus2000including the liquid crystal apparatuses100G and100B that can suppress generation of blemish in the display region E in the case where the laser light source2001that is a high-luminance light source with an output light flux of 5000 lumen to 20000 lumen is employed as the light source combined with the liquid crystal apparatus100. In the projection-type display apparatus2000, the first light G is green light that is light in the wavelength range including the green wavelength band, and the second light B is blue light that is light in the wavelength range including the blue wavelength band. In this manner, with the driving pattern of the first electrode108that differs between the liquid crystal apparatus100G and the liquid crystal apparatus100B, an excellent projection-type display apparatus2000that can suppress generation of blemish in the display region E in the same manner for the liquid crystal apparatus100G and the liquid crystal apparatus100B can be provided. In addition, the projection-type display apparatus2000includes the first light source with an output light flux of 5000 lumen to 20000 lumen, the second light source with an output light flux of 5000 lumen to 20000 lumen, the liquid crystal apparatus100G that modulates the first light G from the first light source, and the second liquid crystal apparatus100B that modulates the second light B from the second light source.
The second liquid crystal apparatus100B includes the pair of substrates10and20opposite to each other with the liquid crystal layer50therebetween, the pixel electrode9aprovided in the display region E of the pair of substrates10and20and configured to be supplied with an image signal at the first frequency, and the first electrode108of the liquid crystal apparatus100B provided in a region corresponding to the peripheral region F outside the display region E and alternately supplied with the positive polarity potential Va with a potential higher than the predetermined potential and the negative polarity potential Vb with a potential lower than the predetermined potential at the third frequency lower than the first frequency such that the positive polarity period T3for setting the positive polarity potential Va and the negative polarity period T4for setting the negative polarity potential Vb have different lengths. With this configuration, it is possible to provide an excellent projection-type display apparatus2000including the liquid crystal apparatuses100G and100B that can suppress generation of blemish in the display region E in the case where the first light source and the second light source, each being a high-luminance light source with an output light flux of 5000 lumen to 20000 lumen, are employed as the light sources combined with the liquid crystal apparatus100. In the projection-type display apparatus2000, the first light G is green light that is light in the wavelength range including the green wavelength band, and the second light B is blue light that is light in the wavelength range including the blue wavelength band. In this manner, with the driving pattern of the first electrode108that differs between the liquid crystal apparatus100G and the liquid crystal apparatus100B, an excellent projection-type display apparatus2000that can suppress generation of blemish in the display region E in the same manner for the liquid crystal apparatus100G and the liquid crystal apparatus100B can be provided. While the projection-type display apparatuses1000and2000are exemplified as the electronic apparatus in the above-mentioned embodiment, the electronic apparatus to which the liquid crystal apparatus100is applied is not limited to this. For example, it may be applied to electronic apparatuses such as a head-up display (HUD), a head mounted display (HMD), a personal computer, a digital camera, and a liquid crystal television. In addition, while a transmission type liquid crystal apparatus is exemplified as the liquid crystal apparatuses100and200in the above-mentioned embodiment, the liquid crystal apparatuses100and200may be a reflection type liquid crystal apparatus or a liquid crystal on silicon (LCOS) type liquid crystal apparatus. In addition, while the laser light sources1001and2001are exemplified as the high-luminance light source in the above-mentioned embodiment, a high-luminance light source such as an LED light source may also be employed as the high-luminance light source. In addition, while each of the first alignment layer18and the second alignment layer22has a two-layer structure in the above-mentioned embodiment, each of them may have a configuration composed only of an oblique layer. In addition, a micro lens that corresponds to the pixel electrode9ain a one-to-one relationship may be provided between the substrate20wand the opposing electrode21of the opposed substrate20in the above-mentioned embodiment.
In addition, a micro lens that corresponds to the pixel electrode9ain a one-to-one relationship may be provided between the substrate10wand the pixel electrode9aof the element substrate in the above-mentioned embodiment. In addition, while the case where the opposing electrode21is disposed on the opposed substrate20side is exemplified in the above-mentioned embodiment, the position where the opposing electrode21is disposed is not limited to this. For example, it may be disposed between the pixel electrode9aand the substrate10w. | 47,743 |
11862122 | DESCRIPTION OF EMBODIMENTS The disclosure can be understood with reference to the following detailed description in conjunction with the accompanying drawings. It should be noted that, for ease of understanding by readers and for the concision of the illustration, multiple drawings in the disclosure only depict a part of the display device, and certain elements in the drawings are not drawn according to actual scale. In addition, the number and size of each element in the drawings are for illustration only, and are not intended to limit the scope of the disclosure. In the following description and claims, the words “comprising” and “including” are open-ended words, and thus should be interpreted as meaning “including but not limited to.” It will be understood that when an element or layer is referred to as being “on” or “connected to” another element or layer, it may be directly on or directly connected to another element or layer, or there may be an intervening element or layer in between (being indirectly on or connected to another element or layer). In contrast, when an element is referred to as being “directly on” or “directly connected to” another element or layer, there is not any intervening element or layer in between. Although the terms “first,” “second,” “third” and the like may be used to describe various composing elements, the composing elements are not limited by the terms. Such a term is only used to distinguish a single composing element from other composing elements in the specification. The same terms may not be used in the claims, and may be replaced by first, second, third and the like in the order in which the elements are recited in the claims. Therefore, in the following description, a first composing element may be a second composing element in the claims. In the description, the term “substantially” usually means within 10%, or within 5%, or within 3%, or within 2%, or within 1%, or within 0.5% of a given value or range. In some embodiments of the disclosure, terms related to bonding and connection, such as “connection,” “interconnection,” and the like, unless otherwise specified, may mean that two structures are in direct contact, or may also mean that two structures are not in direct contact, in which case there are other structures provided between these two structures. The terms related to bonding and connection may also include the case where both structures are movable, or both structures are fixed. Furthermore, the term “coupled” includes any direct and indirect method of electrical connection. In the disclosure, the length, width, thickness, height or area, or the distance or spacing between elements may be measured by using an optical microscope (OM), a scanning electron microscope (SEM), a surface profiler (α-step), an ellipsometer, or other suitable measurement methods; in detail, according to some embodiments, a scanning electron microscope may be used to obtain a cross-sectional structure image including the element to be measured, and measure the width, thickness, height or area of each element, or the distance or spacing between elements, but the disclosure is not limited thereto. In addition, any two values or directions used for comparison may have certain errors. The electronic device of the disclosure may include a display device, an antenna device (such as a liquid crystal antenna), a sensing device, a light emitting device, a touch control device, or a splicing device, but the disclosure is not limited thereto.
The electronic device may include bendable and flexible electronic devices. The shape of the electronic device may be rectangular, circular, polygonal, a shape with curved edges, or other suitable shapes. The electronic device may include, for example, light emitting diodes (LEDs), liquid crystal, fluorescence, phosphor, quantum dots (QDs), other suitable display media, or a combination of the above, but the disclosure is not limited thereto. Light emitting diodes may include, for example, organic light emitting diodes (OLEDs), inorganic light emitting diodes, mini LEDs, micro LEDs, or quantum dot light emitting diodes (QLED or QDLED), other suitable materials, or any combination of the above, but the disclosure is not limited thereto. The display device may also include, for example, a splicing display device, but the disclosure is not limited thereto. The antenna device may be, for example, a liquid crystal antenna, but the disclosure is not limited thereto. The antenna device may include, for example, an antenna splicing device, but the disclosure is not limited thereto. It should be noted that the electronic device may be any arrangement or combination of the foregoing, but the disclosure is not limited thereto. The electronic device may have peripheral systems such as a driving system, a control system, and a light source system to support a display device, an antenna device or a splicing device. Hereinafter, the disclosure will be described with a display device, but the disclosure is not limited thereto. It should be noted that, in the following embodiments, features in several different embodiments may be replaced, recombined, and mixed to complete other embodiments without departing from the spirit of the disclosure. As long as the features of the various embodiments do not depart from the spirit of the invention or conflict with each other, they may be mixed and matched as desired. Reference will now be made in detail to the exemplary embodiments of the disclosure, examples of which are illustrated in the accompanying drawings. Wherever possible, the same reference numerals are used in the drawings and description to refer to the same or similar parts. FIG.1Ais a schematic top view of a display device according to a first embodiment of the disclosure. A display device10includes a display panel11.FIG.1Bis a schematic top view of a pixel of a functional display area in the display panel ofFIG.1A. With reference toFIG.1A, the display panel11of this embodiment has a functional display area100, a general display area200and a non-display area300. The general display area200is adjacent to the functional display area100and the non-display area300, and the general display area200is disposed between the functional display area100and the non-display area300, but the disclosure is not limited thereto. In some embodiments, the general display area200may, for example, surround at least part of the functional display area100, and the non-display area300may, for example, surround the general display area200, but the disclosure is not limited thereto. That is to say, in other embodiments, the functional display area100, the general display area200, and the non-display area300of the display device10may adopt other configurations as required.
The shape of the functional display area in this embodiment is only an example, and in other embodiments, it may be adjusted according to actual design requirements, and the disclosure is not limited thereto. In this embodiment, the general display area200has a first side201, a second side202, a third side203and a fourth side204. The first side201and the third side203are opposite to each other, and the second side202and the fourth side204are opposite to each other. The second side202connects the first side201and the third side203, and the fourth side204connects the first side201and the third side203. In addition, in this embodiment, a first direction X, a second direction Y and a third direction Z are respectively different directions. The first direction X is, for example, the extending direction of the second side202and the fourth side204. The second direction Y is, for example, the extending direction of the first side201and the third side203. The first direction X is, for example, perpendicular to the second direction Y, and the third direction Z is, for example, perpendicular to the first direction X and the second direction Y. However, the disclosure is not limited to the above. In this embodiment, the display panel11further includes multiple gate on panel (GOP) drivers420and outer pin bonding areas430, and the outer pin bonding area430may include driver chips and/or be used for bonding with external lines, but the disclosure is not limited thereto. The GOP driver420and the outer pin bonding area430may be disposed corresponding to the non-display area300. The GOP driver420may be disposed outside the first side201and the third side203of the general display area200, and the outer pin bonding area430may be disposed outside the fourth side204of the general display area200. In some embodiments, the display device10further includes an optical sensor410. The optical sensor410may be correspondingly disposed in the functional display area100and may be disposed under the display panel11to provide functions such as photography, video recording, or biometric identification (such as fingerprint identification). The optical sensor410may include an optical camera or an infrared sensor. In other embodiments, the optical sensor410further includes a flash light, an infrared (IR) light source, other sensors, electronic components, or a combination thereof, but the disclosure is not limited thereto. In some embodiments, the area of the functional display area100may be larger than the area of the optical sensor410when viewed in a top view direction (for example, along the third direction Z), but the disclosure is not limited thereto. Please refer toFIG.1AandFIG.1Bsimultaneously. In this embodiment, the functional display area100includes multiple pixels110. At least a part of the pixels110include a white pixel111and multiple display pixels (for example, including a display pixel112R, a display pixel112G, and a display pixel112B). The pixel110may have an edge1101, an edge1102, an edge1103, and an edge1104. The edge1101and the edge1103are opposite to each other, and the edge1102and the edge1104are opposite to each other. The edge1102connects the edge1101and the edge1103, and the edge1104connects the edge1101and the edge1103. In the disclosure, the functional display area100is an area defined by the pixels110including the white pixel111and multiple display pixels112R,112G and112B.
In this embodiment, the white pixel111may be regarded as a light transmitting area, so that the outside light may penetrate the white pixel111and reach the optical sensor410when the optical sensor410is in the sensing mode (for example, when the optical sensor410is sensing and/or acquiring images of the outside world). The white pixel111may include a pixel electrode (not shown) and a part of a common electrode (not shown), and the pixel transmittance may be adjusted by the voltage supplied to the pixel electrode. In this way, when the functional display area100is displaying an image, by adjusting the voltage of the pixel electrode of the white pixel111, the white pixel111may be made opaque or non-displaying, thereby improving the display quality. In this embodiment, the transmittance of the functional display area100when the optical sensor410is in the sensing mode may be, for example, greater than the transmittance of the functional display area100when the optical sensor410is in the non-sensing mode, so that the user cannot see the optical sensor410through the display device10when the optical sensor410is in the non-sensing mode. For example, the transmittance in the disclosure refers to the percentage obtained by dividing the light intensity of the transmitted light measured after the ambient light penetrates the display panel11(for example, the functional display area100of the display panel11) by the light intensity of the ambient light not penetrating the display panel11. The “light intensity” mentioned above refers to the spectral integral value of the light source (for example, display light or ambient light). In some embodiments, the light source may include visible light (for example, wavelengths between 380 nm and 780 nm) or ultraviolet light (for example, wavelengths less than 365 nm), but the disclosure is not limited thereto. That is, when the light source is visible light, the light intensity is the spectral integral value of wavelengths in the range of 380 nm to 780 nm. In this embodiment, the display pixel112R may display a red image, the display pixel112G may display a green image, and the display pixel112B may display a blue image, so that the functional display area100may display an image when the optical sensor410is in the non-sensing mode, but the disclosure is not limited thereto. In the disclosure, a “pixel” may be a stacked structure that includes all relevant layers, relevant components, or relevant parts configured to emit light with brightness and color. For a liquid crystal display, a pixel may include relevant parts of the liquid crystal layer, relevant parts of the polarizer, relevant parts of the backlight, and the relevant substrate, driving circuit and color filter. For self-luminous displays (such as inorganic light emitting diode (LED) displays and organic light emitting diode (OLED) displays), a pixel may include relevant self-luminous sources, relevant light conversion layers, relevant parts of the polarizer, the relevant substrate and the relevant driving circuit. In other embodiments, when the general display area200and the functional display area100are displaying images, the white pixel111in the functional display area100may be turned off, that is, the white pixel111does not display an image or displays a black image. In this embodiment, the multiple display pixels112R,112G and112B may surround at least a part of the white pixel111. Specifically, in this embodiment, the shape of the white pixel111is, for example, a square, but the disclosure is not limited thereto.
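As a numeric aside on the transmittance definition given above (transmitted light intensity divided by ambient light intensity, each taken as a spectral integral over the visible range of 380 nm to 780 nm), here is a minimal Python sketch. The flat spectra are made-up placeholders and the sampling step is arbitrary.

    # Transmittance as the ratio of two spectral integrals over 380-780 nm.
    # Spectra are hypothetical; a real measurement would supply sampled data.

    def spectral_integral(spectrum, wl_min=380, wl_max=780, step=5):
        # simple Riemann sum of spectrum(wavelength_nm) over [wl_min, wl_max)
        return sum(spectrum(wl) * step for wl in range(wl_min, wl_max, step))

    ambient = lambda wl: 1.0        # assumed flat ambient spectrum
    transmitted = lambda wl: 0.08   # assumed flat transmitted spectrum

    print(f"{spectral_integral(transmitted) / spectral_integral(ambient):.1%}")
    # prints 8.0%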
The white pixel111may have a first side1111, a second side1112, a third side1113and a fourth side1114. The first side1111and the third side1113are opposite to each other, and the second side1112and the fourth side1114are opposite to each other. The second side1112connects the first side1111and the third side1113, and the fourth side1114connects the first side1111and the third side1113. The first side1111, the second side1112, the third side1113and the fourth side1114of the white pixel111may all be straight lines, but the disclosure is not limited thereto. In addition, in this embodiment, the multiple display pixels112R,112G and112B may be arranged in sequence and surround any two adjacent sides of the white pixel111(for example, the first side1111and the fourth side1114), but the disclosure is not limited thereto. In some embodiments, the display pixel112R, the display pixel112G, and the display pixel112B may be arranged in other orders (or irregularly arranged) and surround any two adjacent sides of the white pixel111. In this embodiment, the white pixel111is, for example, disposed at one corner of the pixel110. For example, the first side1111, the second side1112, the third side1113and the fourth side1114of the white pixel111may be, for example, the edges of the pixel electrode of the white pixel111. In other embodiments, these edges may be, for example, the edges of an opening of a black matrix, and the opening may expose at least a part of the pixel electrode of the white pixel, but the disclosure is not limited thereto. In this embodiment, there is a distance D1between the first side1111and the third side1113, and there is a distance D2between the second side1112and the fourth side1114. The distance D1may be, for example, substantially equal to the distance D2, but the disclosure is not limited thereto. The distance D1is, for example, the maximum distance measured along the first direction X between the first side1111and the third side1113, and the distance D2is, for example, the maximum distance measured along the second direction Y between the second side1112and the fourth side1114. In this embodiment, since the distance D1between the first side1111and the third side1113of the white pixel111(that is, the maximum length of the white pixel111in the first direction X) may be substantially equal to the distance D2between the second side1112and the fourth side1114of the white pixel111(that is, the maximum length of the white pixel111in the second direction Y), the positions of the diffracted rays may be the same or the problem of serious diffraction in one direction may be reduced, which makes it easier for the software to correct the diffraction phenomenon caused by the light passing through the panel, so a better optical sensing effect may be obtained. In some embodiments, the difference in aperture among the display pixels112R,112G and112B may be, for example, less than 1% to reduce the problem of white point shift, but the disclosure is not limited thereto. In this embodiment, the edges1101,1102,1103and1104of the pixel110and the edges of the white pixel111(that is, the first side1111, the second side1112, the third side1113and the fourth side1114) are all straight lines; however, the disclosure does not limit the line forms of these edges. For example, in some embodiments, the edges1101,1102,1103and1104of the pixel110and the edges of the white pixel111(that is, the first side1111, the second side1112, the third side1113and the fourth side1114) may be arcs, as shown inFIG.2AandFIG.2B. 
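The design rule that the distance D1 and the distance D2 be substantially equal can be checked mechanically for any candidate white-pixel outline. The Python sketch below (not part of the patent; the vertex coordinates are illustrative) computes the maximum extents along the first direction X and the second direction Y for a square and for an octagon:

    # Maximum extents of a polygonal white-pixel outline along X and Y,
    # corresponding to the distances D1 and D2 discussed above.

    def extents(vertices):
        xs = [x for x, _ in vertices]
        ys = [y for _, y in vertices]
        return max(xs) - min(xs), max(ys) - min(ys)   # (D1, D2)

    square = [(0, 0), (10, 0), (10, 10), (0, 10)]
    octagon = [(3, 0), (7, 0), (10, 3), (10, 7),
               (7, 10), (3, 10), (0, 7), (0, 3)]

    for name, shape in [("square", square), ("octagon", octagon)]:
        d1, d2 = extents(shape)
        print(f"{name}: D1 = {d1}, D2 = {d2}, substantially equal: {d1 == d2}")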
In this embodiment, although the shape of the white pixel111is a square, the disclosure does not limit the shape of the white pixel111. For example, in some embodiments, the shape of the white pixel111may be an octagon (not shown) or other polygons (not shown), as long as the distance D1may be substantially equal to the distance D2of the white pixel111. In some embodiments, the shape of the white pixel111may be a circle (not shown). Other examples are described below for illustration. It should be noted here that the following embodiments use the reference numerals and part of the contents of the previous embodiments, and the same reference numerals are used to designate the same or similar elements, and the description of the same technical contents is omitted. For the description of the omitted part, reference may be made to the foregoing embodiments, which will not be repeated in the following embodiments. FIG.2Ais a schematic partial top view of a display device according to another embodiment of the disclosure.FIG.2Bis a schematic view of a circuit configuration of the display device ofFIG.2A. Please refer toFIGS.1B,2A and2Bat the same time. The display device10aof this embodiment is substantially similar to the display device10ofFIG.1B, so the same and similar components in the two embodiments will not be repeated here. The display device10aof this embodiment is different from the display device10mainly in that, in the display device10aof this embodiment, the line form of the pixel110is designed as an arc, so as to further reduce the diffraction of light in the first direction X and the second direction Y. Specifically, please refer toFIGS.2A and2Bat the same time. In this embodiment, the edges1101,1102,1103and1104of the pixel110, the first side1111, the second side1112, the third side1113and the fourth side1114of the white pixel111, and the boundaries between the multiple display pixels112R,112G and112B (that is, a boundary1121between the display pixel112R and the display pixel112G, and a boundary1122between the display pixel112G and the display pixel112B) are all arcs. With reference toFIG.2B, in this embodiment, the functional display area100further includes a signal line120, a signal line130, a transistor140and a light shielding layer (not shown). The signal line120and the signal line130may be electrically connected to the transistor140, respectively, and the light shielding layer may be used to shield the signal line120, the signal line130and the transistor140. For example, as shown inFIG.2B, four signal lines120, two signal lines130and three transistors140are schematically shown. The four signal lines120extend substantially along the first direction X, and are respectively disposed at the edge1102of the pixel110, the boundary1121and the fourth side1114, a place near the edge1104of the pixel110, and the edge1104of the pixel110. The two signal lines130extend substantially along the second direction Y, and are respectively disposed at the edge1101and the edge1103of the pixel110. The transistors140are disposed corresponding to the display pixels112R,112G and112B. The light shielding layer (not shown) is disposed corresponding to the signal lines120, the signal lines130and the transistors140. In this embodiment, the signal line120is, for example, a scan line, and the signal line130is, for example, a data line, but the disclosure is not limited thereto. In this embodiment, the signal line130may have a branch131and a trunk130a. 
A signal may be transmitted from the trunk130ato the branch131, and, for example, the branch131extends from the trunk130aat the edge1101along the edge1104to the boundary1122of two display pixels, so that the branch131may be electrically connected to one of the display pixels112R,112G, and112B (for example, to the display pixel112B, but the disclosure is not limited thereto). In addition, the “pixel” in the disclosure may be defined by, for example, the trunks130aof two adjacent signal lines130and two adjacent signal lines120electrically connected to the same color display pixel. For example, the two adjacent signal lines120are, for example, the signal line120electrically connected to the display pixel112B inFIG.2Band another signal line120at the top ofFIG.2Bthat may be electrically connected to a display pixel (not shown) that emits the same color as the display pixel112B in another pixel, but the disclosure is not limited thereto. FIG.3is a schematic partial top view of a display device according to a second embodiment of the disclosure. Please refer toFIGS.1B and3at the same time. The display device10bof this embodiment is substantially similar to the display device10ofFIG.1B, so the same and similar components in the two embodiments will not be repeated here.FIG.3is a schematic top view of another embodiment of a pixel in the functional display area ofFIG.1A. The display device10bof this embodiment is different from the display device10mainly in that, in the display device10bof this embodiment, the white pixel111bis disposed in the center of the pixel110. Specifically, with reference toFIG.3, in this embodiment, multiple display pixels112R,112G, and112B may be arranged in sequence and surround the first side1111, the second side1112, the third side1113and the fourth side1114of the white pixel111b, for example, but the disclosure is not limited thereto. In some embodiments, the display pixel112R, the display pixel112G and the display pixel112B may be arranged in other orders (or irregularly arranged) and surround the first side1111, the second side1112and the third side1113and the fourth side1114of the white pixel111b. In this embodiment, the multiple display pixels112R,112G and112B may completely surround the white pixel111b. In this embodiment, since the distance D1between the first side1111and the third side1113of the white pixel111bmay be substantially equal to the distance D2between the second side1112and the fourth side1114of the white pixel111b, the positions of the diffracted rays may be the same or the problem of serious diffraction in one direction may be reduced, which makes it easier for the software to correct the diffraction phenomenon caused by the light passing through the panel, so a better optical sensing effect may be obtained. In this embodiment, the edges1101,1102,1103and1104of the pixel110and the edges of the white pixel111b(that is, the first side1111, the second side1112, the third side1113and the fourth side1114) are all straight lines; however, the disclosure does not limit the line forms of these edges. For example, in some embodiments, the edges1101,1102,1103and1104of the pixel110and the edges of the white pixel111b(that is, the first side1111, the second side1112, the third side1113and the fourth side1114) may be arcs (not shown). In this embodiment, although the shape of the white pixel111bis a square, the disclosure does not limit the shape of the white pixel111b. 
For example, in some embodiments, the shape of the white pixel111cmay be an octagon (as shown inFIG.4) or other polygons (not shown), as long as the distance D1may be substantially equal to the distance D2of the white pixel111c. In some embodiments, the shape of the white pixel111dmay be a circle (as shown inFIGS.5A and5B). The measurement methods of the distance D1and the distance D2in this embodiment may be the same as those in the first embodiment, which will not be repeated here. FIG.4is a schematic partial top view of a display device according to a third embodiment of the disclosure. Please refer toFIGS.3and4at the same time. The display device10cof this embodiment is substantially similar to the display device10bofFIG.3, so the same and similar components in the two embodiments will not be repeated here. The display device10cof this embodiment is different from the display device10bmainly in that, in the display device10cof this embodiment, the shape of the white pixel111cis, for example, an octagon. Specifically, with reference toFIG.4, the white pixel111cfurther has a fifth side1115, a sixth side1116, a seventh side1117and an eighth side1118. The fifth side1115and the seventh side1117are opposite to each other, and the sixth side1116and the eighth side1118are opposite to each other. The fifth side1115connects the first side1111and the second side1112; the sixth side1116connects the second side1112and the third side1113; the seventh side1117connects the third side1113and the fourth side1114; and the eighth side1118connects the fourth side1114and the first side1111. In this embodiment, since the distance D1between the first side1111and the third side1113of the white pixel111cmay be substantially equal to the distance D2between the second side1112and the fourth side1114of the white pixel111c, the position of the diffracted rays may be the same or the problem of serious diffraction in one direction may be reduced, which makes it easier for the software to correct the diffraction phenomenon caused by the light passing through the panel, so a better optical sensing effect may be obtained. The measurement methods of the distance D1and the distance D2in this embodiment may be the same as those in the first embodiment, which will not be repeated here. In this embodiment, the edges1101,1102,1103and1104of the pixel110and the edges of the white pixel111c(that is, the first side1111, the second side1112, the third side1113, the fourth side1114, the fifth side1115, the sixth side1116, the seventh side1117and the eighth side1118) are all straight lines; however, the disclosure does not limit the line forms of these edges. For example, in some embodiments, the edges1101,1102,1103and1104of the pixel110and the edges of the white pixel111b(that is, the first side1111, the second side1112, the third side1113, the fourth side1114, the fifth side1115, the sixth side1116, the seventh side1117and the eighth side1118) may be arcs (not shown). FIG.5Ais a schematic partial top view of a display device according to a fourth embodiment of the disclosure.FIG.5Bis a schematic view of a circuit configuration of the display device ofFIG.5A. Please refer toFIGS.3,5A and5Bat the same time. The display device10dof this embodiment is substantially similar to the display device10bofFIG.3, so the same and similar components in the two embodiments will not be repeated here. 
The display device10dof this embodiment is different from the display device10bmainly in that, in the display device10dof this embodiment, the shape of the white pixel111dis a circle. Specifically, with reference toFIG.5A, in this embodiment, since the shape of the white pixel111dis a circle, the diameter of the white pixel111dis equal or similar in all directions, and the positions of the diffracted rays are the same, which makes it easier for the software to correct the diffraction phenomenon caused by the light passing through the panel, so a better optical sensing effect may be obtained. With reference toFIG.5B, in this embodiment, the functional display area100further includes a signal line120, a signal line130, a transistor140and a light shielding layer (not shown). The signal line120and the signal line130may be electrically connected to the transistor140, respectively, and the light shielding layer may be used to shield the signal line120, the signal line130and the transistor140. For example, as shown inFIG.5B, three signal lines120, two signal lines130and three transistors140are schematically shown. The three signal lines120extend substantially along the first direction X, and are respectively disposed at the edge1102of the pixel110, a place near the edge1104of the pixel110, and the edge1104of the pixel110. The two signal lines130extend substantially along the second direction Y, and are respectively disposed at the edge1101and the edge1103of the pixel110. The transistors140are disposed corresponding to the display pixels112R,112G and112B. The light shielding layer (not shown) is disposed corresponding to the signal lines120, the signal lines130and the transistors140. In this embodiment, the signal line120is, for example, a scan line, and the signal line130is, for example, a data line, but the disclosure is not limited thereto. In this embodiment, the signal line130may have a branch131dand a trunk130a. For example, the branch131dextends from the signal line130at the edge1101to the boundary1122along the edge1104, so that the branch131dmay be electrically connected to one of the display pixels112R,112G and112B (for example, to the display pixel112B), but the disclosure is not limited thereto. In this embodiment, the edges1101,1102,1103and1104of the pixel110are all straight lines, but the disclosure does not limit the line forms of these edges. For example, in some embodiments, the edges1101,1102,1103and1104of the pixel110may be arcs (not shown). FIG.6is a schematic partial top view of a display device according to a fifth embodiment of the disclosure. Please refer toFIGS.5A and6at the same time. The display device10eof this embodiment is substantially similar to the display device10dofFIG.5A, so the same and similar components in the two embodiments will not be repeated here. The display device10eof this embodiment is different from the display device10dmainly in that, in the display device10eof this embodiment, the white pixel111eis disposed between the pixel110e1and the pixel110e2. Specifically, with reference toFIG.6, in this embodiment, the pixels110e1and110e2include a white pixel111e1, a white pixel111e2and multiple display pixels112R,112G and112B. The white pixel111e1and the white pixel111e2are, for example, semi-circular, and are disposed adjacent to the edge1101and the edge1103of the pixels110e1and110e2, respectively. 
The multiple display pixels112R,112G and112B are arranged in sequence and disposed between the white pixel111e1and the white pixel111e2, but the disclosure is not limited thereto. In some embodiments, the display pixel112R, the display pixel112G, and the display pixel112B may be arranged in other orders (or irregularly arranged) and disposed between the white pixels111e1and111e2. In this embodiment, since the pixel110e1and the pixel110e2are disposed adjacent to each other, and there are no other pixels between the pixel110e1and the pixel110e2, the white pixel111e2of the pixel110e1may be combined with the white pixel111e1of the pixel110e2to form the circular white pixel111e. The circular white pixel111emay be disposed in the center of the combined area after the pixel110e1and the pixel110e2are combined. In this embodiment, the edges1101,1102,1103and1104of the pixels110e1and110e2are all straight lines, but the disclosure does not limit the line forms of these edges. For example, in some embodiments, the edges1101,1102,1103and1104of the pixels110e1and110e2may be arcs (not shown). FIG.7is a schematic partial top view of a display device according to a sixth embodiment of the disclosure. Please refer toFIGS.1B and7at the same time. The display device10fof this embodiment is substantially similar to the display device10ofFIG.1B, so the same and similar components in the two embodiments will not be repeated here. The display device10fof this embodiment is different from the display device10mainly in that, in the display device10fof this embodiment, the multiple display pixels112R,112G and112B of the pixels110f1,110f2and110f3are disposed in the white pixel111f. Specifically, with reference toFIG.7, in this embodiment, the pixel110f1is adjacent to the pixel110f2, and the pixel110f2is adjacent to the pixel110f3. That is, there are no other pixels between the pixel110f1and the pixel110f2, and there are no other pixels between the pixel110f2and the pixel110f3. In this embodiment, only three pixels are shown as an example. In other embodiments, more than three pixels adjacent to each other may be included, and the disclosure is not limited thereto. In this embodiment, the first side1111, the second side1112, the third side1113and the fourth side1114of the white pixel111fmay be regarded as the edge1101, the edge1102, the edge1103and the edge1104of the pixels110f1,110f2and110f3. In this embodiment, the distance D1between the first side1111and the third side1113of the white pixel111f(that is, the length of the white pixel111fin the first direction X) may be substantially equal to the distance between the edge1101and the edge1103of the pixels110f1,110f2, and110f3(that is, the length of the pixels110f1,110f2,110f3in the first direction X), and the distance D2between the second side1112and the fourth side1114of the white pixel111f(that is, the maximum length of the white pixel111fin the second direction Y) may be substantially equal to the distance between the edge1102and the edge1104of the pixels110f1,110f2and110f3(that is, the maximum length of the pixels110f1,110f2and110f3in the second direction Y). Therefore, the display device10fof this embodiment may reduce the problem of diffraction or have a better optical sensing effect.
The distance between the edge1101and the edge1103is, for example, the maximum distance measured along the first direction X between the edge1101and the edge1103, and the distance between the edge1102and the edge1104is, for example, the maximum distance measured along the second direction Y between the edge1102and the edge1104. In this embodiment, the multiple display pixels112R,112G and112B may, for example, be sequentially dispersed and arranged in the white pixel111f, but the disclosure is not limited thereto. In some embodiments, the display pixel112R, the display pixel112G, and the display pixel112B may be dispersed and arranged in other orders (or irregularly dispersed and arranged) in the white pixel111f. The multiple display pixels112R,112G and112B may be separated from each other. The display pixel112R may not overlap the display pixel112G and the display pixel112B in the first direction X and the second direction Y. The display pixel112G may not overlap the display pixel112R and the display pixel112B in the first direction X and the second direction Y. The display pixel112B may not overlap the display pixel112R and the display pixel112G in the first direction X and the second direction Y. The above configuration may enable the display device10fof this embodiment to reduce the problem of diffraction or have a better optical sensing effect. In this embodiment, the display pixel112R (or the display pixel112G or the display pixel112B) of the pixel110f1and the display pixel112R (or the display pixel112G or the display pixel112B) of the pixel110f2are arranged adjacent to each other in the first direction X between the adjacent pixels110f1and110f2, and there are no other display pixels between the display pixel112R (or the display pixel112G or the display pixel112B) of the pixel110f1and the display pixel112R (or the display pixel112G or the display pixel112B) of the pixel110f2. The display pixel112R (or the display pixel112G or the display pixel112B) of the pixel110f1may overlap the display pixel112R (or the display pixel112G or the display pixel112B) of the pixel110f2in the first direction X. The distance D3between the display pixel112R (or the display pixel112G or the display pixel112B) of the pixel110f1and the display pixel112R (or the display pixel112G or the display pixel112B) of the pixel110f2may be, for example, greater than or equal to ⅕ of the distance D1and less than or equal to ⅘ of the distance D1(that is, ⅕×D1≤D3≤⅘×D1), but the disclosure is not limited thereto. The distance D3is, for example, the minimum distance measured along the first direction X between the display pixel112R (or the display pixel112G or the display pixel112B) of the pixel110f1and the display pixel112R (or the display pixel112G or the display pixel112B) of the pixel110f2. In this embodiment, the display pixel112R (or the display pixel112G or the display pixel112B) of the pixel110f2and the display pixel112R (or the display pixel112G or the display pixel112B) of the pixel110f3are arranged adjacent to each other in the second direction Y between the adjacent pixels110f2and110f3, and there are no other display pixels between the display pixel112R (or the display pixel112G or the display pixel112B) of the pixel110f2and the display pixel112R (or the display pixel112G or the display pixel112B) of the pixel110f3. 
The display pixel112R (or the display pixel112G or the display pixel112B) of the pixel110f2may overlap the display pixel112R (or the display pixel112G or the display pixel112B) of the pixel110f3in the second direction Y. The distance D4between the display pixel112R (or the display pixel112G or the display pixel112B) of the pixel110f2and the display pixel112R (or the display pixel112G or the display pixel112B) of the pixel110f3may be, for example, greater than or equal to ⅕ of the distance D2and less than or equal to ⅘ of the distance D2(that is, ⅕×D2≤D4≤⅘×D2), but the disclosure is not limited thereto. The distance D4is, for example, the minimum distance measured along the second direction Y between the display pixel112R (or the display pixel112G or the display pixel112B) of the pixel110f2and the display pixel112R (or the display pixel112G or the display pixel112B) of the pixel110f3. In this embodiment, the edges1101,1102,1103and1104of the pixels110f1,110f2and110f3and the edges of the white pixel111f(that is, the first side1111, the second side1112, the third side1113and the fourth side1114) are all straight lines; however, the disclosure does not limit the line forms of these edges. For example, in some embodiments, the edges1101,1102,1103and1104of the pixels110f1,110f2and110f3and the edges of the white pixel111f(that is, the first side1111, the second side1112, the third side1113and the fourth side1114) may be arcs (not shown). In this embodiment, the multiple display pixels112R,112G and112B in the white pixel111fare arranged in the order of the display pixel112R, the display pixel112G and the display pixel112B in the first direction X, and are arranged in the order of the display pixel112R, the display pixel112G and the display pixel112B in the second direction Y, but the disclosure does not limit the arrangement order of the multiple display pixels112R,112G and112B. Any arrangement order may be adopted as long as the display pixel112R may not overlap the display pixel112G and the display pixel112B in the first direction X and the second direction Y, and the display pixel112G may not overlap the display pixel112R and the display pixel112B in the first direction X and the second direction Y, and the display pixel112B may not overlap the display pixel112R and the display pixel112G in the first direction X and the second direction Y. FIG.8is a schematic partial top view of a display device according to a seventh embodiment of the disclosure. Please refer toFIGS.7and8at the same time. The display device10gof this embodiment is substantially similar to the display device10fofFIG.7, so the same and similar components in the two embodiments will not be repeated here. The display device10gof this embodiment is different from the display device10fmainly in that, in the display device10gof this embodiment, the multiple display pixels112R,112G and112B disposed in the white pixels111of the pixels110g1,110g2, and110g3have different configurations. Specifically, with reference toFIG.8, in this embodiment, the arrangement order of the multiple display pixels112R,112G and112B in the first direction X is the display pixel112R, the display pixel112B, and the display pixel112G, which is different from the arrangement order of the multiple display pixels112R,112G and112B in the first direction X inFIG.7(that is, the display pixel112R, the display pixel112G, and the display pixel112B).
In this embodiment, the arrangement order of the multiple display pixels112R,112G and112B in the second direction Y is the display pixel112R, the display pixel112G, and the display pixel112B, which is the same as the arrangement order of the multiple display pixels112R,112G and112B in the first direction X inFIG.7(that is, the display pixel112R, the display pixel112G, and the display pixel112B). FIG.9is a schematic partial top view and a schematic view of a circuit configuration of a display device according to an eighth embodiment of the disclosure. Please refer toFIGS.7and9at the same time. The display device10hof this embodiment is substantially similar to the display device10fofFIG.7, so the same and similar components in the two embodiments will not be repeated here. The display device10hof this embodiment is different from the display device10fmainly in that, in the display device10hof this embodiment, the multiple display pixels112R,112G and112B disposed in the white pixel111of the pixel110are connected together. Specifically, with reference toFIG.9, in this embodiment, the functional display area100further includes a signal line120, a signal line130, a transistor140and a light shielding layer (not shown). The signal line120and the signal line130may be electrically connected to the transistor140, respectively, and the light shielding layer may be used to shield part of the signal line120, part of the signal line130and the transistor140. For example, as shown inFIG.9, three signal lines120, one signal line130and three transistors140are schematically shown. The signal line120is, for example, a scan line, and the signal line130is, for example, a data line, but the disclosure is not limited thereto. The three signal lines120extend substantially along the first direction X, and are respectively disposed at the lower edge of the display pixel112R, the lower edge of the display pixel112G, and the lower edge of the display pixel112B. The signal line130extends substantially along the second direction Y, and is disposed at the left side of the display pixel112R. The signal line130may include a trunk130a, a branch131h, and a branch132h. The branch131h, for example, extends from the signal line130at the left side of the display pixel112R along the edge1102to the left side of the display pixel112G. The branch132h, for example, extends from the signal line130at the left side of the display pixel112R along the edge1102to the left side of display pixel112B. In this way, the branch131hand the branch132hmay be electrically connected to one of the multiple display pixels112R,112G and112B, respectively (for example, to the display pixel112G or the display pixel112B, but the disclosure is not limited thereto). The transistors140are disposed corresponding to the display pixels112R,112G and112B. In this embodiment, the signal line120may be divided into a signal line1201and a signal line1202according to the materials used. The trunk130aof the signal line130may be divided into a trunk130a1and a trunk130a2according to the materials used, and the branch131h(or the branch132h) of the signal line130may also be divided into a branch131h1(or a branch132h1) and a branch131h2(or a branch132h2) according to the materials used. 
The materials of the signal line1201, the trunk130a1, the branch131h1and the branch132h1include transparent conductive materials (such as indium tin oxide, indium zinc oxide, indium oxide, zinc oxide, tin oxide, other suitable materials, or a combination of the above, but the disclosure is not limited thereto). The materials of the signal line1202, the trunk130a2, the branch131h2, and the branch132h2include metals (for example, aluminum, molybdenum, copper, silver, other suitable materials, or a combination of the above, but the disclosure is not limited thereto). The signal line1202, the trunk130a2, the branch131h2, and the branch132h2are adjacent to the transistor140. The light shielding layer (not shown) may be disposed corresponding to the signal line1202, the trunk130a2, the branch131h2, the branch132h2and the transistor140, but the disclosure is not limited thereto. FIG.10AandFIG.10Bare schematic partial top views of a display device according to a ninth embodiment of the disclosure. Please refer toFIGS.7,10A and10Bat the same time. The display device10iand the display device10jof this embodiment are substantially similar to the display device10fofFIG.7, so the same and similar components in these embodiments will not be repeated here. The display device10iof this embodiment is different from the display device10fmainly in that, in the display device10iof this embodiment, the display pixel112R may not overlap the display pixel112G and the display pixel112B in the second direction Y, and the display pixel112R may partially overlap the display pixel112G in the first direction X. The display pixel112G may not overlap the display pixel112R and the display pixel112B in the second direction Y, and the display pixel112G may partially overlap the display pixel112R and/or the display pixel112B in the first direction X. The display pixel112B may not overlap the display pixel112R and the display pixel112G in the second direction Y, and the display pixel112B may partially overlap the display pixel112G in the first direction X. The display device10jof this embodiment is different from the display device10fmainly in that, in the display device10jof this embodiment, the display pixel112R may not overlap the display pixel112G and the display pixel112B in the first direction X, and the display pixel112R may partially overlap the display pixel112G in the second direction Y. The display pixel112G may not overlap the display pixel112R and the display pixel112B in the first direction X, and the display pixel112G may partially overlap the display pixel112R and/or the display pixel112B in the second direction Y. The display pixel112B may not overlap the display pixel112R and the display pixel112G in the first direction X, and the display pixel112B may partially overlap the display pixel112G in the second direction Y. The configuration in this embodiment may enable the display devices10iand10jof this embodiment to reduce the problem of diffraction or have a better optical sensing effect. FIG.11is a schematic partial top view of a display device according to a tenth embodiment of the disclosure. Please refer toFIGS.7and11at the same time. The display device10kof this embodiment is substantially similar to the display device10fofFIG.7, so the same and similar components in the two embodiments will not be repeated here. The display device10kof this embodiment is different from the display device10fmainly in that there are two pixel pitches in the first direction X and the second direction Y respectively.
For example, with reference toFIG.11, in the display device10kof this embodiment, the pixel110k1, the pixel110k2, the pixel110k3, and the pixel110k4may be a pixel group, and the pixel group may be repeatedly arranged along the first direction X and the second direction Y. In the pixel group, the display pixels in the first direction X and the second direction Y may have two pixel pitches. Specifically, the display pixel112R in the pixel110k4and the display pixel112R in the pixel110k1have a minimum distance D5in the second direction Y, and the display pixel112R in the pixel110k1and a display pixel of another adjacent pixel in the second direction Y (not shown, for example, a pixel of the same configuration as the pixel110k4) have a minimum distance D7in the second direction Y. Therefore, the display pixels may have two pixel pitches in the second direction Y. With continual reference toFIG.11, the display pixel112R and the display pixel112G in the pixel110k4have a minimum distance D6in the first direction X, and the display pixel112G in the pixel110k4and a display pixel of another adjacent pixel in the first direction X (not shown, for example, a pixel of the same configuration as the pixel110k4) have a minimum distance D1in the first direction X. Therefore, the display pixels may have two pixel pitches in the first direction X. In this embodiment, the distance D1between the first side1111and the third side1113of the white pixel111f(that is, the length of the white pixel111fin the first direction X) may be substantially equal to the distance between the edge1101and the edge1103of the pixels110k1,110k2,110k3and110k4(that is, the length of the pixels110k1,110k2,110k3and110k4in the first direction X), and the distance D5between the display pixel112R of the pixel110k4(or the display pixel112R of the pixel110k3) and the display pixel112R of the pixel110k1(or the display pixel112R of the pixel110k2) may be substantially equal to the distance between the edge1102and the edge1104of the pixels110k1,110k2,110k3and110k4(that is, the maximum length of the pixels110k1,110k2,110k3and110k4in the second direction Y). Therefore, the display device10kof this embodiment may reduce the problem of diffraction or have a better optical sensing effect. In this embodiment, the distance D6between the display pixel112R and the display pixel112G of the pixel110k4may be, for example, less than the distance D1. For example, the distance D6is substantially equal to ⅓ of the distance D1(that is, D6≈⅓×D1), but the disclosure is not limited thereto. In this embodiment, the distance D7between the display pixel112R of the pixel110k1and the fourth side1114of the white pixel111f(or the edge1104of the pixel110k1) may be, for example, less than the distance D5. For example, the distance D7is substantially equal to ⅓ of the distance D5(that is, D7≈⅓×D5), but the disclosure is not limited thereto. 
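The dimensional relations recited in these embodiments lend themselves to a simple numerical check. The following is a minimal sketch, not part of the disclosure, that tests a candidate layout against the stated relations (D1 substantially equal to D2; ⅕×D1≤D3≤⅘×D1 and ⅕×D2≤D4≤⅘×D2; D6≈⅓×D1; D7≈⅓×D5); the function names and the 2% tolerance are illustrative assumptions.

```python
# A minimal sketch (not from the disclosure) that checks a candidate pixel
# layout against the dimensional relations stated in the embodiments.
# The names and the 2% relative tolerance are illustrative assumptions.

def approx_equal(a, b, tol=0.02):
    """True if a and b differ by at most tol (relative)."""
    return abs(a - b) <= tol * max(abs(a), abs(b))

def check_layout(D1, D2, D3=None, D4=None, D5=None, D6=None, D7=None):
    checks = {
        # White-pixel width and height should match (first embodiment onward).
        "D1 ~= D2": approx_equal(D1, D2),
    }
    if D3 is not None:
        # Sixth embodiment: 1/5*D1 <= D3 <= 4/5*D1.
        checks["D1/5 <= D3 <= 4*D1/5"] = D1 / 5 <= D3 <= 4 * D1 / 5
    if D4 is not None:
        checks["D2/5 <= D4 <= 4*D2/5"] = D2 / 5 <= D4 <= 4 * D2 / 5
    if D6 is not None:
        # Tenth embodiment: D6 is roughly one third of D1.
        checks["D6 ~= D1/3"] = approx_equal(D6, D1 / 3)
    if D5 is not None and D7 is not None:
        # Tenth embodiment: D7 is roughly one third of D5.
        checks["D7 ~= D5/3"] = approx_equal(D7, D5 / 3)
    return checks

# Example: a layout with D1 = D2 = 30 um, D3 = 12 um, D6 = 10 um.
print(check_layout(D1=30.0, D2=30.0, D3=12.0, D6=10.0))
```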
In summary, in the display devices of the embodiments of the disclosure, since the distance between the first side and the third side of the white pixel (that is, the length of the white pixel in the first direction) may be substantially equal to the distance between the second side and the fourth side of the white pixel (that is, the length of the white pixel in the second direction), the positions of the diffracted rays may be the same or the problem of serious diffraction in one direction may be reduced, which makes it easier for the software to correct the diffraction phenomenon caused by the light passing through the panel, so a better optical sensing effect may be obtained. In the display device of some embodiments, since the line form of the pixel is designed as an arc, the diffraction of light in the first direction X or the second direction Y may be further reduced. In the display device of some embodiments, since the shape of the white pixel is a circle, the diameter of the white pixel is equal or similar in all directions, and the positions of the diffracted rays are the same, which makes it easier for the software to correct the diffraction phenomenon caused by the light passing through the panel, so a better optical sensing effect may be obtained. In the display device of some embodiments, the distance between the first side and the third side of the white pixel (that is, the length of the white pixel in the first direction) may be substantially equal to the length of the pixel in the first direction, and the distance between the second side and the fourth side of the white pixel (that is, the length of the white pixel in the second direction) may be substantially equal to the length of the pixel in the second direction. Therefore, the problem of diffraction may be reduced, or a better optical sensing effect may be achieved. Finally, it should be noted that the above embodiments are only used to describe the technical solutions of the disclosure rather than to limit the disclosure. Although the disclosure has been described in detail with reference to the foregoing embodiments, those skilled in the art should understand that combinations or modifications to the technical solutions described in the foregoing embodiments may be made, or some or all of the technical features therein may be replaced with equivalents; however, such combinations, modifications or replacements do not cause the spirit of the corresponding technical solutions to depart from the scope of the technical solutions of the embodiments of the disclosure. | 53,635 |
11862123 | DETAILED DESCRIPTION OF PREFERRED EMBODIMENTS The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of this application. Obviously, the described embodiments are part of the embodiments of this application, but not all of them. Based on the embodiments in this application, all other embodiments obtained by those of ordinary skill in the art without creative effort belong to the protection scope of this application. The application provides a display driving method, which is used in the display system environment shown inFIG.1. As shown, the display system includes a liquid crystal controller, a liquid crystal module and a plurality of LCD drivers, the liquid crystal controller is electrically connected with the plurality of LCD drivers, and each of the LCD drivers is connected with the liquid crystal module. Illustratively, the number of LCD drivers inFIG.1is N, and N is at least one (it is worth noting that in the case of a color screen, at least two LCD drivers are needed to speed up the writing time if it is expected to continue to improve the display refresh rate). Since the monochrome screen can display 3 colors of red, green and blue in time division, if the writing time and flipping time of the screen itself are sufficient, the correct backlight illumination time of each monochrome color may exceed 50% by changing the time sequence of the monochrome backlight in the way shown inFIG.10, and it can be displayed at a minimum refresh rate of 60 Hz, hence no more LCD drivers are needed. For example, the number of the LCD drivers may be 3, 4 or 5, and may be determined according to requirements. In some application scenarios, the number of the LCD drivers may be determined comprehensively according to the requirements of hardware cost and/or desired refresh rate, and is not specifically limited. It should be noted that the above-mentioned multiple LCD drivers may be integrated in the liquid crystal module or arranged separately from the liquid crystal module, and there is no specific limitation. Specifically, the above-mentioned LCD driver may simply be a liquid crystal driving chip (liquid crystal driving IC), i.e., the display system may adopt a plurality of liquid crystal driving chips to drive the liquid crystal module to realize display, and the details are not limited. The liquid crystal module refers to a module with a liquid crystal display screen, and the liquid crystal module may also include other structural modules or be a complete liquid crystal module, which is not detailed here. In addition, it should be noted that the display system provided by the embodiment of this application may have various application scenarios. For example, this display system may be applied to projection devices, including single-chip LCD projection devices and 3-chip LCD projection devices, and may also be applied to other general display devices. As long as it is an electronic device with a liquid crystal module, it is applicable, and it is not limited in this application. The above-mentioned liquid crystal controller may be a Field Programmable Gate Array (FPGA) or other types of controllers, and it is not limited here. In the prior art, the display refresh rate is fixed after the liquid crystal module and the driver are determined and cannot be changed flexibly.
In order to improve the display refresh rate, the embodiment of the application provides a new display driving method in combination with the above display system, which can improve the display refresh rate and flexibly configure the display refresh rate. As shown inFIG.2, in an embodiment, a display driving method is provided, which includes the following steps: S10: acquiring, by the liquid crystal controller, driving configuration information of the liquid crystal module, where the driving configuration information is used for indicating the number of configured LCD drivers and the display resolution of the display area to be driven by each LCD driver. The liquid crystal module includes an LCD display module, a liquid crystal and other components. In this embodiment of the application, the number of LCD drivers configured for the display system and the display resolution of the display area to be driven by each LCD driver are acquired first, so as to serve as the basis for image segmentation in subsequent area display. For example, as shown inFIG.1, the number of LCD drivers configured in the display system may be 3, 4 or other numbers. For example, the number of LCD drivers is 3, and the display resolutions of the display areas to be driven by the 3 LCD drivers are A1, A2 and A3, respectively, which are related to the resolution of the gray image data to be finally displayed. In the embodiment of the present application, the driving configuration may be sent or written to the LCD driver in advance, so that the LCD driver can obtain the driving configuration information. S20: receiving, by the liquid crystal controller, a color image frame, and decomposing the color image frame into 3 monochrome frame images. After receiving the color image frame, the liquid crystal controller decomposes the color (red, green and blue) image frame into three monochrome frame images. The color image frame input to the liquid crystal controller refers to the complete display image to be displayed in the display module of the liquid crystal module, and the liquid crystal controller can read the color image frame from the main control processor or directly from the video memory, which may be different according to the application scenarios, and will not be described in detail here. The color image frame is thus decomposed into three monochrome frame images. S30: performing, by the liquid crystal controller, segmentation processing on each monochrome frame image according to the driving configuration information to obtain monochrome frame segmentation image data corresponding to each display area of the liquid crystal module. S40: packaging, by the liquid crystal controller, the monochrome frame segmentation image data corresponding to each display area, and sending data to the LCD driver corresponding to each display area; when the liquid crystal controller starts to output each monochrome frame, a monochrome frame color indication signal is simultaneously output to a backlight module to indicate a current output color frame.
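As an illustration of step S20, the following is a minimal sketch (not from the disclosure) of decomposing a color image frame into three monochrome frame images, assuming the frame arrives as an H×W×3 array in RGB channel order; the function name is hypothetical.

```python
import numpy as np

def decompose_color_frame(frame):
    """Split an H x W x 3 color frame into three H x W monochrome
    frame images (step S20). RGB channel order is an assumption."""
    red, green, blue = frame[:, :, 0], frame[:, :, 1], frame[:, :, 2]
    return {"R": red, "G": green, "B": blue}

# Example with a 1080P frame of random pixel data.
frame = np.random.randint(0, 256, size=(1080, 1920, 3), dtype=np.uint8)
mono = decompose_color_frame(frame)
print(mono["R"].shape)  # (1080, 1920)
```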
In an embodiment, the step of performing, by the liquid crystal controller, segmentation processing on each monochrome frame image according to the driving configuration information to obtain monochrome frame segmentation image data corresponding to each display area of the liquid crystal module includes: determining, by the liquid crystal controller, the number of the display areas and the display resolution according to the driving configuration information; and performing, by the liquid crystal controller, segmentation processing on each monochrome frame image according to the number of the display areas and the display resolution, to obtain monochrome frame segmentation image data corresponding to each display area of the liquid crystal module. Specifically, the number of LCD drivers corresponds to the number of display areas of the liquid crystal module, or the number of display areas of the liquid crystal module is smaller than the number of LCD drivers, and there is no specific limitation. For example, if the number of LCD drivers is 3, the number of display areas of the liquid crystal module may also be 3, and then each decomposed monochrome frame image is segmented to obtain monochrome frame segmentation image data corresponding to the above 3 display areas, and for the 3 monochrome frame segmentation image data corresponding to the 3 display areas, each display area corresponds to the segmented monochrome frame segmentation image data. For example, if the number of LCD drivers is 3, the number of display areas of the liquid crystal module may be more than 3, which is not specifically limited. It should be noted that the display area mentioned in the embodiment of the present application may refer to an interval display line or a continuous display line, and is not specifically limited. This content will be described in detail below. It is worth noting that in the traditional solution, there is no process of segmenting the image and packaging and sending it to each LCD driver. However, the embodiment of this application is provided with multiple LCD drivers, and the required image frames would be segmented according to the divided display areas, which may also be understood as dividing the image, so as to obtain the image portion corresponding to each display area, i.e., the 3 monochrome frame segmentation image data corresponding to each display area. S50: after the LCD driver receives each monochrome frame segmentation image data, the LCD driver writes the received monochrome frame segmentation image data into the liquid crystal module according to the display control information of the corresponding display area and the received monochrome frame segmentation image data, and drives the display area corresponding to the liquid crystal module to display, and meanwhile, after writing a line, outputs a synchronization signal to instruct the corresponding backlight module to start synchronously controlling a backlight delay circuit action. It can be understood that the writing time of each line is basically fixed. For example, the fastest time to write a line is 2.5 μs for the fastest display screen at present. Taking a 1080P image as an example, it takes 1080*2.5 μs=2.7 ms to display a 1080P image, i.e., it takes 2.7 ms to write a frame of 1080P image.
However, according to the embodiment of the present application, for example, if the display is divided into three display areas, each area has 360 lines, and the three display areas are written to the LCD screen at the same time, then a frame only needs a writing time of 360*2.5 μs=0.9 ms. It can be seen that if more LCD drivers are adopted to drive the liquid crystal, a shorter writing time may be obtained. After the display image data of the liquid crystal is written, the liquid crystal starts to flip to the state corresponding to the display voltage, for example, from the darkest to the brightest, which takes the flip time. For a liquid crystal, the flip time is also fixed. Hence, for a frame of image, the writing time plus the full time of liquid crystal flip is the maximum time required for the whole liquid crystal to display a frame of image. The shorter the time, the higher the display refresh rate of the liquid crystal. Therefore, the writing time can be effectively shortened and the improvement can be achieved with the embodiment of the application. It should be noted that for each LCD driver, display control information for controlling writing is generated according to the sub-block image data that each LCD driver needs to drive and display. The display control information includes certain synchronization information, etc. The process of writing image data by each LCD driver is similar to that of a single LCD driver. Therefore, the process of timing control will not be described here. The difference is that there are differences in display control because multiple LCD drivers are provided, and images are segmented and driven in blocks. Image data is divided into three corresponding monochrome frames according to each display area and sent to the corresponding LCD driver, so that the LCD driver can write the image data according to the display control information of the corresponding display area and the received three monochrome frames. It can be understood that the display process of the display module of the liquid crystal module is driven by the LCD driver, and it is realized by a certain scanning mode: one whole line is displayed at a time, and each line is scanned once in a frame, hence a circuit is needed to control the output voltages on the lines and columns, and this circuit is the LCD driver. In the embodiment of the application, each LCD driver needs to receive the display control information from the liquid crystal controller according to the display area, so as to change the output voltage of the line/column of the pixel corresponding to the display in the corresponding display area. After receiving the written image data, the liquid crystal controller converts it into the display control signal corresponding to each LCD driver according to the processing method of the application. It can be seen that in the display driving method provided by the embodiment of the present application, the color image frames input to the liquid crystal controller are segmented to obtain three sets of monochrome frame segmentation image data corresponding to each display area of the liquid crystal module. Then, each LCD driver drives the display area corresponding to the liquid crystal module in parallel according to the received three sets of monochrome frame segmentation image data and the display control information.
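The write-time arithmetic above can be made concrete with a small sketch; the 2.5 μs per-line figure is the example value cited in this description, while the even split of lines across areas and the function name are illustrative assumptions.

```python
LINE_WRITE_TIME_US = 2.5  # fastest per-line write time cited above

def frame_write_time_ms(total_lines, num_drivers):
    """Write time for one frame when total_lines are split evenly across
    num_drivers display areas that are written in parallel."""
    lines_per_area = -(-total_lines // num_drivers)  # ceiling division
    return lines_per_area * LINE_WRITE_TIME_US / 1000.0

print(frame_write_time_ms(1080, 1))  # 2.7 ms with a single driver
print(frame_write_time_ms(1080, 3))  # 0.9 ms with three drivers in parallel
```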
Compared with the traditional solutions, the display refresh rate can be greatly improved because the image input to the liquid crystal controller is segmented, a plurality of different LCD drivers are controlled according to the segmentation images, and the display areas of the liquid crystal module are driven in parallel to display the segmentation image data. Moreover, as there are a plurality of LCD drivers provided, in the embodiments of the application, each monochrome frame image of the liquid crystal module can be segmented flexibly according to the expected number of sets, so that the number of LCD drivers configured for driving can also be flexibly determined. Thereby, the display refresh rate of the image displayed by the liquid crystal module can be flexibly configured or conveniently altered, which is highly practical. It should be noted that each LCD driver drives the display area corresponding to the LCD module in parallel according to the received target display configuration information, which can be realized in a variety of ways, two of which are listed in the embodiment of this application, and are introduced respectively below. Implementation way 1: each display area includes continuous display lines, i.e., each display area includes continuous drive lines, and the number of display areas corresponding to the liquid crystal module matches the number of LCD drivers, each LCD driver is used to drive a display area correspondingly, and the display areas driven by each of the LCD drivers are different. The number of display areas corresponding to the liquid crystal module matches the number of LCD drivers, which means that the number of display areas corresponding to the liquid crystal module is the same as that of the LCD drivers. For the convenience of understanding Implementation way 1, please refer toFIG.3. In the example ofFIG.3, the number of display areas corresponding to the liquid crystal module is 3, each display area includes continuous display lines, and the number of LCD drivers is also 3. The LCD drivers include LCD driver 1, LCD driver 2 and LCD driver 3, and the display areas of the liquid crystal module include Display area 1, Display area 2 and Display area 3. Display area 1, Display area 2 and Display area 3 all include continuous display lines, and the continuous display lines of the three display areas together constitute the complete set of display lines. When the liquid crystal controller divides each of the three monochrome frame images into three parts, each of the three LCD drivers is used to drive one of the three display areas, i.e., the display areas driven by each of the three LCD drivers are different. For example, the LCD driver 1 is used to drive the display in Display area 1, the LCD driver 2 is used to drive the display in Display area 2, and the LCD driver 3 is used to drive the display in Display area 3. Then, after the liquid crystal controller divides the three monochrome frame segmentation image data corresponding to the color image frame, the three LCD drivers respectively drive and display the three monochrome frame segmentation image data corresponding to the three display areas. More specifically, as shown inFIG.4, taking LCD driver 1 as an example, when driving Display area 1 to display, the LCD driver 1 takes the segmentation image data corresponding to Display area 1 as the written image data, and controls each pixel in Display area 1 to write the corresponding gray-scale image data according to the display control information, so as to drive and display the display area.
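A minimal sketch of the line assignment in Implementation way 1 (one block of consecutive display lines per driver) might look as follows; the function name is hypothetical and the near-even split is one possible division, not the only one permitted by the description.

```python
def split_contiguous(mono_frame_lines, num_drivers):
    """Implementation way 1: assign each driver one block of consecutive
    display lines. mono_frame_lines is the list of lines of one
    monochrome frame image."""
    n = len(mono_frame_lines)
    base, extra = divmod(n, num_drivers)
    areas, start = [], 0
    for i in range(num_drivers):
        count = base + (1 if i < extra else 0)  # spread any remainder
        areas.append(mono_frame_lines[start:start + count])
        start += count
    return areas

lines = list(range(1080))          # line indices of one monochrome frame
areas = split_contiguous(lines, 3)
print([len(a) for a in areas])     # [360, 360, 360]
```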
In addition, it should be noted that in some embodiments, the number of image lines displayed in each of the three display areas may also be different, and there is no specific limitation. It should be noted that because the writing time of the display area with more lines is longer, it will take more time than an evenly divided area, and the refresh rate will be lower than with an even division. Therefore, the method of dividing the display lines evenly according to the number of LCD drivers would achieve a higher refresh rate. Its application value is higher and it is convenient to configure. Implementation way 2: the display area includes interval display lines divided according to a certain interval. According to a certain line interval, the corresponding driving lines are sequentially allocated to each LCD driver, and each LCD driver is used for driving the allocated driving lines. For the understanding of Implementation way 2, please refer toFIG.5. The liquid crystal module is divided according to a certain line interval, for example, the line interval may be 3 lines. According to the line interval, each LCD driver is assigned a corresponding driving line (for example, block line 11, block line 12, and block line 13; block line 21, block line 22, block line 23 . . . ). In this way, the number of LCD drivers is also three, and the LCD drivers include LCD driver 1, LCD driver 2 and LCD driver 3. When the gray-scale image data is divided, taking the line interval of 1 as an example, Lines 1, 4, 7 . . . are assigned to LCD driver 1; Lines 2, 5, 8 . . . are assigned to LCD driver 2, and Lines 3, 6, 9 . . . are assigned to LCD driver 3. It should be noted that the above examples are only illustrative. In other embodiments, taking the line interval of 3 as an example, Lines 1, 2, 3, 10, 11, 12, 19, 20, 21 . . . may be allocated to LCD driver 1, Lines 4, 5, 6, 13, 14, 15, 22, 23, 24 . . . may be allocated to LCD driver 2, and similarly, Lines 7, 8, 9, 16, 17, 18, 25, 26, 27 . . . may be allocated to LCD driver 3. There is no need to list all. More specifically, as shown inFIG.6, taking the upper display area as an example, it can be seen that among the three LCD drivers, when driving display, LCD driver 1 controls each pixel of Lines 1, 2, 3 to write the corresponding segmentation image data. Similarly, when driving display, LCD driver 2 controls each pixel of Lines 4, 5, 6 to write the corresponding segmentation image data. When driving display, LCD driver 3 controls each pixel of Lines 7, 8, 9 to write the corresponding segmentation image data. It should be noted thatFIGS.5-6are only for illustration. In other embodiments, multiple LCD drivers may also adopt other cross-drive display methods. For example, LCD driver 1 drives the whole display area of a certain part, while LCD driver 2 and LCD driver 3 perform cross-drive display, and LCD driver 1 does not participate in cross-drive display. It is not limited specifically in this embodiment. For the LCD drivers, the display control information of the segmentation image data corresponding to the three display areas will be generated according to the cross drive, so that the three LCD drivers can drive and display the corresponding areas of the three display areas.
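Implementation way 2 can be sketched in the same spirit; the round-robin grouping below reproduces the interval-1 and interval-3 examples above. The function name is hypothetical.

```python
def split_interleaved(mono_frame_lines, num_drivers, interval=1):
    """Implementation way 2: deal groups of `interval` consecutive lines
    to the drivers in round-robin order. With interval=1 and 3 drivers,
    driver 0 gets lines 1, 4, 7, ...; driver 1 gets 2, 5, 8, ...; etc."""
    areas = [[] for _ in range(num_drivers)]
    for idx, line in enumerate(mono_frame_lines):
        driver = (idx // interval) % num_drivers
        areas[driver].append(line)
    return areas

lines = list(range(1, 19))                         # 1-based line numbers
print(split_interleaved(lines, 3, interval=1)[0])  # [1, 4, 7, 10, 13, 16]
print(split_interleaved(lines, 3, interval=3)[0])  # [1, 2, 3, 10, 11, 12]
```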
In the foregoing embodiments, specific embodiments in which a plurality of LCD drivers drive a plurality of display areas of the liquid crystal module are provided, which demonstrates the feasibility of the solution. It should be noted that in Implementation way 1, because one LCD driver drives all areas of a certain display area correspondingly, it is more conducive to the wiring of LCD-driven display and more convenient. It should be noted that for the LCD driver, it is necessary to cooperate with the control of the liquid crystal controller to drive the display areas corresponding to the liquid crystal modules in parallel according to the received target display configuration information. The parallel driving may refer to simultaneous driving or driving in time division according to a certain time interval, and the embodiment of the application is not limited to this, as long as the display refresh rate can be improved without affecting the display effect. In one embodiment, three monochrome frame images include respective monochrome image data of red, green and blue. In the display system provided in the embodiment of the application, the liquid crystal module can display the gray-scale image data segmented according to the processing mode of the embodiment of the application, and achieve the gray-scale image display effect. It can be understood that when each LCD driver drives the display area corresponding to the LCD module, it writes each monochrome image data to each pixel point of the corresponding sub-area according to the written image data, or writes a red channel data signal (R), a green channel data signal (G) and a blue channel data signal (B) in time division so that the pixel point presents the corresponding image. Each LCD driver can directly write data of the red channel data signal (R), green channel data signal (G) and blue channel data signal (B) to each pixel when driving the pixel points corresponding to the display area, so that the display screen of the LCD module can display images. As shown inFIG.7, for each pixel, RGB image data (i.e., red channel data signal (R), green channel data signal (G) and blue channel data signal (B)) are generally written to the corresponding pixel position of the corresponding sub-area through a plurality of drive controllers. When driving pixel points, the embodiment of the application has also made a first further optimization, as shown inFIG.8, i.e., the liquid crystal controller puts the monochrome data of each line into a color data bit of a pixel point, such as the data bit of the G channel, and when the LCD driver refreshes the LCD, it refreshes an RGB image data to the pixel point of the same line for display. The liquid crystal itself only connects the line of this color channel (G) to the driver, and the lines of the other two colors (R and B) are not connected to the driver. When the display area corresponding to the liquid crystal module is driven, the following methods are adopted: according to the display control signal corresponding to the LCD driver, each pixel point of the display area corresponding to the LCD driver is controlled by the same color channel to give a red channel data signal, a green channel data signal and a blue channel data signal in time division, and the color channel is one of a red channel, a green channel or a blue channel.FIG.8shows a schematic diagram of the writing process of partial pixel points in the display area.
In the specific implementation, the LCD driver connects one of the color lines in the monochrome frame segmentation image data to the corresponding pixel points. For example, only the green channel transmits the red channel data signal, the green channel data signal and the blue channel data signal, i.e., the LCD driver only connects the line of the green channel G to the monochrome point of the LCD for display.FIG.8merely shows the green channel as an example. In other embodiments, the red channel R or the blue channel B may be used to transmit the RGB image data in time division, which is not specifically limited. In this embodiment, only one color channel is used to input RGB data, which can reduce the use of signal lines of other color channels, simplify circuit complexity and reduce LCD wiring. When driving pixel points, a second further optimization may be made. The liquid crystal controller packs data, and in each line signal of image data divided by monochrome frames for the corresponding LCD driver, three consecutive pixel points are regarded as the RGB bits in one point of a standard color protocol. If the number of points in the current line of the display area is not exactly a multiple of three, the RGB position of the last protocol point may be filled with "0". That is, when the LCD driver drives the display area corresponding to the liquid crystal module, it is driven in the following way: the liquid crystal controller encodes the data of three monochrome image points per line of a monochrome frame into RGB image data of one image point, and sends it to the LCD driver to control the pixel points of the display area corresponding to the LCD driver to be output to the liquid crystal module, and between the LCD driver and the liquid crystal module, the lines of three color points of each pixel point are connected to three adjacent monochrome points of the liquid crystal. That is, the liquid crystal module routes the three RGB control lines of the same image point of the driving controller to three adjacent monochrome pixel points, as shown inFIG.9, which is a schematic diagram of writing partial pixel points in the display area. In the specific implementation, only one color channel data signal is written for each pixel point. As shown inFIG.9, for each line of pixel points, in the order of R\G\B\R\G\B\R\G\B . . . R\G\B, only one color channel data is written for each pixel point, and only one third of the original frame data needs to be sent, and the pixel points of the screen are wired in a manner that one color corresponds to three monochrome points. In this way, while realizing the display, it can also reduce the amount of data transmitted and improve the system capability. It should be noted thatFIG.9is only an example here, and other color arrangement sequences are also possible, so there is no limitation here, for example, G\R\B\G\R\B\G\R\B . . . G\R\B, etc., the details are not limited. In the embodiment of the application, compared with the general color data solution in which one pixel is written into the corresponding color channel in time division, only the color data of one image point is written into every three pixel points in the embodiment of the application.
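The second optimization (three consecutive monochrome points packed as the R, G and B fields of one protocol point, zero-padded when the line length is not a multiple of three) can be sketched as follows; the function name is hypothetical.

```python
def pack_line(mono_points):
    """Encode three consecutive monochrome points as the R, G, B fields
    of one protocol point; pad the last point with 0 when the line
    length is not a multiple of three."""
    padded = list(mono_points) + [0] * (-len(mono_points) % 3)
    return [tuple(padded[i:i + 3]) for i in range(0, len(padded), 3)]

line = [10, 20, 30, 40, 50, 60, 70, 80]   # 8 monochrome points in one line
print(pack_line(line))
# [(10, 20, 30), (40, 50, 60), (70, 80, 0)] -> one third as many points sent
```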
That is, the color data of one color channel written into each frame pixel point in time division is only one third of the original amount, which can greatly reduce the data writing amount, improve the overall performance of the system, take less transmission time, be more effective for improving the refresh rate, and have higher practicability. It can be understood that the liquid crystal module is a display module with liquid crystal as the basic material. When driving the display, the rotation direction of liquid crystal molecules is controlled by controlling the voltages at both ends of the liquid crystal molecules through the LCD driver, and then the polarized light projection of each pixel can be controlled to achieve the purpose of display. That is, the LCD driver drives each pixel point, including data writing time and deflection time of liquid crystal molecules. The inventor further found that when driving monochromatic color image data according to the above two optimization methods, each monochromatic color appears in time division. In the case of gray-scale display, it is impossible to use a white backlight for long-term display as on a color screen, so each monochromatic backlight must also appear in time division; however, due to the time taken by each monochromatic color (writing time and flipping time), in some periods the corresponding backlight must not be turned on, because doing so would lead to display color disorder. In this embodiment of the application, when driving the gray-scale display based on the above-mentioned method and displaying the monochromatic light in time division, the time sequence of the backlight monochromatic lamp is correspondingly controlled, so that the backlight appears at an appropriate time. When the monochrome screen outputs R, G, B and the corresponding backlight in time division, the control solution of dividing multiple LCD drivers is used to control the monochrome backlight timing to improve the display effect.FIG.11shows a schematic diagram of the timing control for the backlight, taking sub-area display as an example. InFIG.11, for the red channel data signal (R) frame, green channel data signal (G) frame and blue channel data signal (B) frame, when controlling the backlight time of each monochrome image data: "Top" means the first line of a driver; Low level indicates that writing is in progress, and Low to High indicates the writing completion time. "Middle" refers to the middle line; Low level indicates that writing is in progress, and Low to High indicates the writing completion time. "Bottom" refers to the bottom line; Low level indicates that writing is in progress, and Low to High indicates the writing completion time. The top indicates the corresponding color frame, and the bottom represents the backlight control turn-on time after the maximum flipping time of the liquid crystal is delayed after the uppermost line is written. The backlight control timing of each area is shown inFIG.11. It can be seen that when the embodiment of the application drives the display, the time sequence of the backlight monochromatic lamp is correspondingly controlled, so that the backlight of monochromatic light appears at an appropriate time. As shown inFIG.11, the display system also includes a backlight control module.
After each LCD driver receives the three monochrome frame segmentation image data of the corresponding display area, when the first line of each frame is displayed, the LCD driver gives a synchronization signal that the corresponding monochrome lamp is ready to switch, and the backlight control module delays by the time corresponding to the maximum flipping of the liquid crystal before switching to the corresponding single-lamp backlight, according to the synchronization signal sent by the LCD drivers of each display area. Finally, the liquid crystal controller completes sending the three monochrome frames to the LCD drivers to complete the driving operation, and the backlight driving module completes the switching of the backlight, which is a complete display cycle of a frame of image data. It can be understood that when driving each line, there is a writing time of image data for each line. For example, if the writing time of each line is 2.5 μs, and the maximum liquid crystal switching time corresponding to a display line is 2.5 ms, then for a certain display area, the total writing time of this display area = 2.5 μs × the number of lines in the display area. The number of display lines in different display areas may be different; however, the liquid crystal controller synchronously transmits data to each LCD driver, so the difference in the first-line writing time of each LCD driver can be ignored. The delay from writing data in the first line of the display area to the liquid crystal flipping time of this line may be different, but the first-line writing plus the delay to the maximum liquid crystal flipping time is the same. Therefore, it is necessary to delay turning on the monochrome frame color backlight until the first line of the next monochrome frame is written and the maximum liquid crystal flipping time has elapsed, to ensure that the color is normal during this time. Furthermore, according to the writing time of a complete frame and the liquid crystal flipping time, the backlight lighting time sequence of the whole monochromatic light is controlled. For example, when the red channel data signal (R) frame is written in time division, the red backlight is displayed after the corresponding delay; likewise, the blue backlight is displayed after the corresponding delay for the blue frame, thus each monochromatic color also appears in time division. Hence, the problem of display color disorder would not occur, and the refresh rate is also improved, i.e., the monochrome screen is output in time division.
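One reading of this backlight timing can be sketched numerically as follows; the 2.5 μs line write time is the example value above, while the 2.5 ms maximum flip time, the delay model, and the function names are illustrative assumptions rather than the disclosed control logic.

```python
LINE_WRITE_TIME_US = 2.5       # per-line write time used in the examples above
MAX_LC_FLIP_TIME_US = 2500.0   # illustrative maximum liquid crystal flip time

def area_write_time_us(num_lines):
    """Total write time of one display area = line time x number of lines."""
    return num_lines * LINE_WRITE_TIME_US

def backlight_on_time_us(first_line_written_at_us):
    """Turn the monochrome backlight on only after the first written line
    has had the maximum flip time to settle (delay from the driver's
    first-line synchronization signal)."""
    return first_line_written_at_us + MAX_LC_FLIP_TIME_US

# Example: a 360-line area; its first line finishes at t = 2.5 us, and the
# backlight for this monochrome frame may switch on about 2.5 ms later.
print(area_write_time_us(360))      # 900.0 us to write the whole area
print(backlight_on_time_us(2.5))    # 2502.5 us
```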
In an embodiment, a display driving device is provided, which includes: an acquisition module, configured to acquire driving configuration information of a liquid crystal module and to receive a color image frame, the driving configuration information indicating the number of configured LCD drivers; and a processing module, configured to decompose the color image frame into three monochrome frame images and to segment each monochrome frame image according to the driving configuration information, obtaining the monochrome frame segmentation image data corresponding to each display area of the liquid crystal module. When each monochrome frame is output, a monochrome frame color indication signal is simultaneously output to a backlight module to indicate the color frame currently being output. The processing module packages the monochrome frame segmentation image data corresponding to each display area and sends the data to the LCD driver corresponding to each display area. After the LCD driver receives each monochrome frame segmentation image data, it writes the received data into the liquid crystal module according to the display control information of the corresponding display area, thereby driving the corresponding display area of the liquid crystal module to display; meanwhile, after writing a line, the LCD driver outputs a synchronization signal instructing the corresponding backlight module to start synchronously controlling a backlight delay circuit action. When the liquid crystal controller packages the data, in each line of signal of the monochrome frame segmentation image data for the corresponding LCD driver, every 3 consecutive pixel points are taken as one RGB protocol point of a standard color protocol; if the number of pixel points in the current line of the display area is not a multiple of 3, the line is filled with "0" until a full protocol point is formed, as illustrated in the sketch following this paragraph. For a specific description of the display driving device, please refer to the description of the display driving method above, which will not be repeated here. Each module in the above display driving device may be realized in whole or in part by software, hardware, or their combination. The above modules may be integrated with or separate from the controller in the form of hardware, or stored in the memory of the controller in the form of software, so that the processor can call and execute the operations corresponding to the above modules. It can be seen that, compared with the traditional solution, the display driving device provided by the embodiment of the application can greatly improve the display refresh rate, because the display image of the liquid crystal module is segmented, a plurality of different LCD drivers are controlled according to the segmented images, and the display areas of the liquid crystal module are driven in parallel to display the segmented image data. Moreover, since a plurality of LCD drivers are provided, the display of the liquid crystal module may be flexibly segmented and packaged according to the expected number of sets, so that the number of LCD drivers configured for driving can also be flexibly determined. Thereby, the display refresh rate of the image displayed by the liquid crystal module can be flexibly configured or conveniently altered, which is highly practical.
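Here is a minimal sketch of the packing rule just described: three consecutive single-channel pixel points per RGB protocol point, zero-padded at the end of a line. The function name and data layout are illustrative assumptions, not the patent's actual implementation.

```python
def pack_line(mono_line):
    """Pack one line of single-channel pixel values into RGB protocol
    points, three consecutive pixels per point, zero-padded at the end."""
    padded = list(mono_line)
    while len(padded) % 3:
        padded.append(0)          # fill with "0" until a full point forms
    return [tuple(padded[i:i + 3]) for i in range(0, len(padded), 3)]

print(pack_line([10, 20, 30, 40, 50]))   # -> [(10, 20, 30), (40, 50, 0)]
```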
In an embodiment, a liquid crystal controller is provided, which may be a Field Programmable Gate Array (FPGA), and the liquid crystal controller is used to realize the functions or steps of the liquid crystal controller of the above embodiment. For a specific description of the liquid crystal controller, please refer to the description of the display driving method above, which will not be repeated here. Each module in the above-mentioned liquid crystal controller may be realized in whole or in part by software, hardware, or their combination. The above modules may be integrated with or separate from the controller in the form of hardware, or stored in the memory of the controller in the form of software, so that the processor can call and execute the operations corresponding to the above modules. In an embodiment, a system-on-chip or integrated circuit module is provided, and the system-on-chip or integrated circuit module includes the liquid crystal controller provided by the embodiment of the application. In an embodiment, a computer-readable storage medium is provided, on which a computer program is stored; when the computer program is executed by a controller, the display driving method provided in the above embodiment is realized. For more details about the solutions implemented by the liquid crystal controller and the computer-readable storage medium, please refer to the aforementioned method embodiment; the description will not be repeated here. In some embodiments, the embodiment of the application also provides a projection device, which includes the display system provided by the embodiment of the application. In addition, the terms "first", "second", "third" and "fourth" in the description of the foregoing embodiments are used to distinguish similar objects and are not used to define a specific order or sequence. A person of ordinary skill in the art can understand that all or part of the processes in the methods of the foregoing embodiments can be implemented by instructing related hardware through a computer program, which can be stored in a nonvolatile computer-readable storage medium; when executed, the computer program can include the steps of the above embodiments. Any reference to memory, storage, a database or other medium used in the embodiments provided in this application may include nonvolatile and/or volatile memory. The nonvolatile memory may include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. The volatile memory may include random access memory (RAM) or external cache memory. As an illustration and not a limitation, RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM). A person of ordinary skill in the art can clearly understand that, for convenience and conciseness of description, the division of the above functional units and modules is only used as an example. In practical applications, the above functions may be assigned to different functional units and modules as needed; that is, the internal structure of the device may be divided into different functional units or modules to complete all or part of the functions described above.
The above embodiments are only used to illustrate the technical solutions of the present application, not to limit them. Although the present application has been described in detail with reference to the foregoing embodiments, those skilled in the art will understand that it is possible to modify the technical solutions described in the foregoing embodiments or to replace some technical features with equivalents. Such modifications or substitutions do not make the essence of the corresponding technical solutions deviate from the spirit and scope of the technical solutions of the various embodiments of the present application, and they shall be included in the protection scope of the present application. Further, unless otherwise required by context, singular terms shall include pluralities and plural terms shall include the singular.
11862124

MODE FOR IMPLEMENTING THE INVENTION

Hereinafter, some embodiments of the present disclosure will be described in detail with reference to the accompanying drawings. With regard to the reference numerals of the components of the respective drawings, it should be noted that the same reference numerals are assigned to the same components even though they are shown in different drawings. In addition, in describing the present disclosure, a detailed description of a well-known configuration or function related to the present disclosure, which may obscure the subject matter of the present disclosure, will be omitted. In addition, terms such as "1st", "2nd", "A", "B", "(a)", "(b)", or the like may be used in describing the components of the present disclosure. These terms are intended only for distinguishing a corresponding component from other components, and the nature, order, or sequence of the corresponding component is not limited by the terms. In the case where a component is described as being "coupled", "combined", or "connected" to another component, it should be understood that the corresponding component may be directly coupled or connected to that component, or that the corresponding component may also be "coupled", "combined", or "connected" to it via another component provided therebetween. FIG. 1 is a configuration diagram of a display device according to an embodiment. Referring to FIG. 1, a display device 100 may comprise a display panel 110, a gate driving device 120, a data driving device 130, and a data processing device 140. In the display panel 110, a plurality of data lines DL and a plurality of gate lines GL may be disposed, and a plurality of pixels P may also be disposed. A pixel may comprise a plurality of sub-pixels SP. Here, each of the sub-pixels may be a red sub-pixel R, a green sub-pixel G, a blue sub-pixel B, a white sub-pixel W, etc. A pixel may comprise RGB sub-pixels, RGBG sub-pixels, or RGBW sub-pixels. The gate driving device 120, the data driving device 130, and the data processing device 140 generate the signals used to display an image in the display panel 110. The gate driving device 120 may supply a gate driving signal of a turn-on voltage or a turn-off voltage to a gate line GL. When a gate driving signal of a turn-on voltage is supplied to a sub-pixel SP, the sub-pixel SP may be connected with a data line DL. When a gate driving signal of a turn-off voltage is supplied to the sub-pixel SP, the sub-pixel SP may be disconnected from the data line DL. The gate driving device 120 may be referred to as a gate driver. The data driving device 130 may supply a data voltage Vp to a sub-pixel through a data line DL. The data voltage Vp supplied through the data line DL may be supplied to the sub-pixel SP according to a gate driving signal. The data driving device 130 may be referred to as a source driver. The data driving device 130 may comprise at least one integrated circuit, and this at least one integrated circuit may be connected to a bonding pad of the display panel 110 in a tape automated bonding (TAB) method or a chip-on-glass (COG) method, directly formed on the display panel 110, or integrated on the display panel 110 depending on the case. In addition, the data driving device 130 may be formed in a chip-on-film (COF) type.
According to an embodiment, when driving voltages are applied to the data driving device 130 and the data processing device 140, the data driving device 130 may perform a low-speed communication with the data processing device 140 in order to configure an environment for a high-speed communication with the data processing device 140. Here, the high-speed communication may have a clock frequency of several Gbps, and the low-speed communication may have a clock frequency lower than that of the high-speed communication (for example, several Mbps). The configuration of an environment for a high-speed communication may comprise the configuration of a frequency bandwidth for the high-speed communication, the configuration of an equalizer comprised in the data driving device 130, etc. After having configured the environment for a high-speed communication by performing a low-speed communication with the data processing device 140, the data driving device 130 may receive from the data processing device 140 a clock pattern indicating a communication clock for the communication with the data processing device 140 and perform a clock training. Here, the clock training may be to synchronize a clock inside the data driving device 130 with the communication clock. When the clock training is normally completed, the data driving device 130 may output a first signal indicating that the communication status of the data driving device 130 is stable and transmit it to the data processing device 140. The first signal may be referred to as a lock signal. Subsequently, the data driving device 130 may receive from the data processing device 140 an initial configuration value regarding the environment for a high-speed communication and store the initial configuration value as a configuration restoration value. Here, the data driving device 130 may store the initial configuration value, that is, the configuration restoration value, in a volatile memory (for example, RAM) comprised therein. According to an embodiment, the initial configuration value may comprise a frequency bandwidth for the high-speed communication, a configuration value of the equalizer comprised in the data driving device 130, etc. Here, the low-speed communication between the data driving device 130 and the data processing device 140 may be performed until the configuration of the environment for a high-speed communication and the clock training are completed, and after the clock training has been completed, a high-speed communication between the data driving device 130 and the data processing device 140 may be performed. In other words, the data driving device 130 may receive the initial configuration value from the data processing device 140 by using the high-speed communication. After storing the initial configuration value as the configuration restoration value, the data driving device 130 may periodically receive image data from the data processing device 140 by the high-speed communication and process the image data. In other words, the data driving device 130 may generate a data voltage Vp according to the image data and supply the data voltage Vp to a sub-pixel SP. If any abnormality occurs in the high-speed communication due to noise such as static inside the display device 100 while the data driving device 130 periodically receives and processes image data, the data driving device 130 may detect the abnormality.
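The power-up sequence just described (low-speed configuration, clock training, lock signal, then storing the initial configuration value as the configuration restoration value) can be summarized in a short Python sketch. The `link` and `ram` objects and all method names are assumed stand-ins for the real transports, not an actual driver API.

```python
def driver_bring_up(link, ram):
    """Sketch of the data driving device's power-up flow. `link` and `ram`
    are assumed abstractions; the method names are illustrative only."""
    link.low_speed_configure_env()        # configure high-speed environment
    link.train_clock()                    # sync internal clock to clock pattern
    link.send_lock_signal()               # first signal: communication stable
    initial_cfg = link.recv_initial_config()   # received over high-speed link
    ram["configuration_restoration_value"] = initial_cfg  # kept in volatile RAM
    return initial_cfg
```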
For example, the data driving device 130 may detect the abnormality in the high-speed communication by checking for desynchronization of the inner clock from the communication clock due to noise, a change in the configuration of the environment for the high-speed communication due to noise, or the like. Here, the data driving device 130 may change the first signal into a second signal and transmit the second signal to the data processing device 140. The second signal may indicate that the communication status is unstable. The second signal may be referred to as a lock fail signal or as an unlock signal. Conventionally, when any abnormality is detected in the high-speed communication of the data driving device 130, a low-speed communication needs to be performed again in order to re-configure the environment for the high-speed communication. However, according to an embodiment, the data driving device 130 may rapidly restore the configuration of the environment for a high-speed communication by using the stored configuration restoration value. Detailed descriptions in this regard will be presented below with reference to FIGS. 6A and 6B. After having restored the configuration of the environment for the high-speed communication by using the stored configuration restoration value, the data driving device 130 may receive a clock pattern from the data processing device 140 by the high-speed communication and re-perform a clock training. If the clock training is not completed, the data driving device 130 may re-perform the low-speed communication with the data processing device 140 in order to re-configure the environment for the high-speed communication and re-perform a clock training. In other words, in a case when the configuration of the environment for the high-speed communication is not properly restored because an error occurs in the stored configuration restoration value due to noise, the data driving device 130 performs the full process of configuring the environment for the high-speed communication again. In order to prevent such a situation in advance, the data driving device 130 may periodically check whether there is any error in the stored configuration restoration value due to an external influence such as noise. The data driving device 130 may periodically receive the initial configuration value from the data processing device 140 by the high-speed communication and compare the received initial configuration value with the stored configuration restoration value. In a case when the received initial configuration value is identical to the stored configuration restoration value, the data driving device 130 may keep the stored configuration restoration value. In a case when the received initial configuration value is different from the stored configuration restoration value, the data driving device 130 may perform the low-speed communication with the data processing device 140 in order to re-configure the environment for the high-speed communication. The received initial configuration value being different from the stored configuration restoration value means that there is an abnormality in the data driving device 130 or the data processing device 140 due to an external influence. Accordingly, the environment for the high-speed communication between the data driving device 130 and the data processing device 140 may be re-configured. In addition, the data driving device 130 may receive a clock pattern from the data processing device 140 and re-perform a clock training.
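A minimal sketch of this fast-restore behavior follows: on a high-speed fault, the driver first reapplies the stored configuration restoration value and retrains the clock, and only falls back to the slow low-speed reconfiguration if the retraining fails. All method names are assumptions standing in for real hardware operations.

```python
def recover_high_speed_link(link, ram):
    """Sketch of the fast-restore path: reuse the stored configuration
    restoration value instead of redoing the low-speed configuration."""
    link.send_unlock_signal()             # second signal: link unstable
    link.apply_config(ram["configuration_restoration_value"])
    if link.train_clock():                # retrain on the restored settings
        link.send_lock_signal()
        return "fast-restore"
    link.low_speed_configure_env()        # fall back: full reconfiguration
    link.train_clock()
    link.send_lock_signal()
    return "full-reconfigure"
```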
Subsequently, the data driving device 130 may receive a re-configuration value for re-configuring the environment for the high-speed communication from the data processing device 140 and update the stored configuration restoration value with the re-configuration value. The data driving device 130 may periodically check whether any error occurs in the stored configuration restoration value by using at least one of a parity check method, a cyclical redundancy check (CRC) method, and a checksum method. In a case when any error is confirmed in the stored configuration restoration value, the data driving device 130 may re-receive the initial configuration value from the data processing device 140 and update the configuration restoration value having the error, that is, the stored configuration restoration value, with the re-received initial configuration value. According to an embodiment, the configuration of the environment for the high-speed communication, the transmission and reception of the clock pattern, the transmission and reception of the image data, and the transmission and reception of the initial configuration value between the data driving device 130 and the data processing device 140 may be performed through a main line ML shown in FIG. 1. Here, the configuration of the environment for the high-speed communication and the transmission and reception of the clock pattern may be performed by the low-speed communication, and the transmission and reception of the image data and the transmission and reception of the initial configuration value may be performed by the high-speed communication. The transmission of the first signal or the second signal from the data driving device 130 may be performed through a first auxiliary line AL1, and the transmission of a third signal indicating that the initial configuration value and the configuration restoration value are different, the transmission of a fourth signal to request the initial configuration value from the data processing device 140, the transmission of the stored configuration restoration value, and the reception of the initial configuration value may be performed through a second auxiliary line AL2. Here, the second auxiliary line AL2 may be a low-voltage differential signaling (LVDS) bus line. An LVDS bus line may have good noise resistance. The data processing device 140 may supply control signals to the gate driving device 120 and the data driving device 130. For example, the data processing device 140 may transmit a gate control signal GCS to initialize a scan to the gate driving device 120, output image data to the data driving device 130, and transmit a data control signal to control the data driving device 130 to supply a data voltage Vp to each sub-pixel SP. The data processing device 140 may be referred to as a timing controller. An image processing device 150 may generate image data IMG and transmit the image data IMG to the data processing device 140. The image processing device 150 may be referred to as a host. According to an embodiment, when driving voltages VCC are supplied to the data driving device 130 and the data processing device 140, the data processing device 140 may perform a low-speed communication with the data driving device 130 through the main line ML to configure the environment for a high-speed communication with the data driving device 130.
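The periodic integrity check named above (parity, CRC, or checksum) can be sketched briefly. The example below uses a CRC-32 from Python's standard zlib module over the stored configuration restoration value; the refresh callback is an assumed stand-in for re-requesting the initial configuration value over the second auxiliary line AL2.

```python
import zlib

def store_with_crc(config_bytes):
    # keep the value together with a CRC-32 computed when it was stored
    return config_bytes, zlib.crc32(config_bytes)

def verify_or_refresh(value, crc, request_initial_config):
    if zlib.crc32(value) == crc:
        return value, crc                 # no error: keep the stored value
    fresh = request_initial_config()      # error: re-receive and update
    return store_with_crc(fresh)

cfg, crc = store_with_crc(b"\x01\x23\x45\x67")      # assumed config bytes
cfg, crc = verify_or_refresh(cfg, crc, lambda: b"\x01\x23\x45\x67")
print("configuration restoration value intact:", cfg.hex())
```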
After having completed the configuration of the environment for a high-speed communication by the low-speed communication with the data driving device 130, the data processing device 140 may store an initial configuration value regarding the configuration of the environment for the high-speed communication. The data processing device 140 may store the initial configuration value in a volatile memory (for example, RAM) comprised therein. After having configured the environment for the high-speed communication, the data processing device 140 may transmit a clock pattern indicating a communication clock to the data driving device 130 so that a clock training may be performed in the data driving device 130. The data processing device 140 may transmit the clock pattern to the data driving device 130 through the main line ML. When the clock training in the data driving device 130 is completed, the data processing device 140 may transmit the stored initial configuration value to the data driving device 130. The data processing device 140 may transmit the stored initial configuration value to the data driving device 130 by the high-speed communication after having transmitted the clock pattern to the data driving device 130 by the low-speed communication. When receiving the first signal from the data driving device 130 through the first auxiliary line AL1, the data processing device 140 may identify that the clock training in the data driving device 130 is completed. Subsequently, the data processing device 140 may periodically transmit image data to the data driving device 130 by the high-speed communication. The image data may be transmitted through the main line ML. According to an embodiment, in a case when the data processing device 140 receives the second signal from the data driving device 130 through the first auxiliary line AL1, the data processing device 140 may transmit a clock pattern to the data driving device 130 through the main line ML without re-performing the configuration of the environment for the high-speed communication. Here, the data processing device 140 may transmit the clock pattern by the high-speed communication. If, even after having transmitted the clock pattern to the data driving device 130, the data processing device 140 receives the second signal from the data driving device 130, the data processing device 140 may re-perform the low-speed communication with the data driving device 130 in order to re-configure the environment for the high-speed communication. According to an embodiment, also in a case when receiving from the data driving device 130 a third signal indicating that the initial configuration value is different from the configuration restoration value, the data processing device 140 may re-perform the low-speed communication with the data driving device 130 in order to re-configure the environment for the high-speed communication. Here, the data processing device 140 may receive the third signal through the second auxiliary line AL2. In a case when receiving a fourth signal from the data driving device 130 through the second auxiliary line AL2, the data processing device 140 may transmit the stored initial configuration value to the data driving device 130 through the second auxiliary line AL2. According to an embodiment, the data processing device 140 as well may periodically check whether any error occurs in the stored initial configuration value by using at least one of the parity check method, the cyclical redundancy check method, and the checksum method.
When confirming that there is an error in the stored initial configuration value, the data processing device 140 may receive the configuration restoration value from the data driving device 130 and update the stored initial configuration value with the configuration restoration value. Here, the data processing device 140 may transmit a fifth signal to the data driving device 130 through the second auxiliary line AL2 in order to request the configuration restoration value, and receive the configuration restoration value transmitted from the data driving device 130 through the second auxiliary line AL2. In other words, the data processing device 140 may receive the configuration restoration value in the form of low-voltage differential signaling. FIG. 2 is a configuration diagram of a data transmission and reception system according to an embodiment. Referring to FIG. 2, a data transmission and reception system may comprise at least one data processing device 140 and a plurality of data driving devices 130a, 130b, 130c, 130d. The data processing device 140 may be disposed on a first printed circuit board (PCB) PCB1 and connected with the plurality of data driving devices 130a, 130b, 130c, 130d through main lines ML, first auxiliary lines AL1, and second auxiliary lines AL2. The main lines ML, the first auxiliary lines AL1, and the second auxiliary lines AL2 may respectively reach the plurality of data driving devices 130a, 130b, 130c, 130d via the first printed circuit board PCB1 and a second printed circuit board PCB2. The first printed circuit board PCB1 and the second printed circuit board PCB2 may be connected by a first film FL1 formed of a flexible material. The main lines ML, the first auxiliary lines AL1, and the second auxiliary lines AL2 may be extended from the first printed circuit board PCB1 to the second printed circuit board PCB2 via the first film FL1. Each of the data driving devices 130a, 130b, 130c, 130d may be disposed on a second film FL2 in a chip-on-film (COF) form. The second film FL2 may be a support substrate formed of a flexible material connecting the second printed circuit board PCB2 and the display panel 110. The main lines ML, the first auxiliary lines AL1, and the second auxiliary lines AL2 may be extended from the second printed circuit board PCB2 to the data driving devices 130a, 130b, 130c, 130d respectively via second films FL2. The main lines ML may connect the data processing device 140 and the respective data driving devices 130a, 130b, 130c, 130d in a one-to-one manner. The first auxiliary lines AL1 may connect adjacent data driving devices 130a, 130b, 130c, 130d, or the data driving device 130d and the data processing device 140, without the first auxiliary lines AL1 and the main lines ML overlapping each other in a plane. For example, a first data driving device 130a may be connected with a second data driving device 130b by a first auxiliary line AL1, and the second data driving device 130b may be connected with a third data driving device 130c by a first auxiliary line AL1. FIG. 3 is a configuration diagram of a data processing device and a data driving device according to an embodiment. Referring to FIG. 3, the data processing device 140 may comprise a control circuit for data processing 342, a first communication circuit for data processing 344, a second communication circuit for data processing 346, and a third communication circuit for data processing 348.
The data driving device 130 may comprise a control circuit for data driving 332, a first communication circuit for data driving 334, a second communication circuit for data driving 336, and a third communication circuit for data driving 338. The first communication circuit for data processing 344 and the first communication circuit for data driving 334 may be connected through a main line ML. The first communication circuit for data processing 344 may transmit information for configuring a high-speed communication environment, a clock pattern, image data, and an initial configuration value to the first communication circuit for data driving 334 through the main line ML. Here, the information for configuring the high-speed communication environment and the clock pattern may be transmitted by the low-speed communication, and the image data and the initial configuration value may be transmitted by the high-speed communication. The second communication circuit for data processing 346 and the second communication circuit for data driving 336 may be connected through a first auxiliary line AL1. The second communication circuit for data driving 336 may transmit the first signal and the second signal to the second communication circuit for data processing 346 through the first auxiliary line AL1. The third communication circuit for data processing 348 and the third communication circuit for data driving 338 may be connected through a second auxiliary line AL2. The third communication circuit for data driving 338 may transmit a third signal or a fourth signal to the third communication circuit for data processing 348 through the second auxiliary line AL2. The third communication circuit for data driving 338 may also transmit the configuration restoration value to the third communication circuit for data processing 348 through the second auxiliary line AL2. The third communication circuit for data processing 348 may transmit a fifth signal or the initial configuration value to the third communication circuit for data driving 338 through the second auxiliary line AL2. Here, the third signal may be a signal indicating that the initial configuration value and the configuration restoration value are different from each other, the fourth signal may be a signal to request the initial configuration value from the data processing device 140, and the fifth signal may be a signal to request the configuration restoration value from the data driving device 130. According to an embodiment, the second auxiliary line AL2 may be a low-voltage differential signaling (LVDS) bus line. Since a low-voltage differential signaling bus line has high noise resistance, when data is transmitted or received through the second auxiliary line AL2, it is possible to prevent errors in data transmission and reception due to noise. FIG. 4 and FIG. 5 are diagrams respectively illustrating data transmission and reception sequences in a main line and a first auxiliary line according to an embodiment. When driving voltages VCC are supplied to the data driving device 130 and the data processing device 140, the environment for a high-speed communication between the data driving device 130 and the data processing device 140 may be configured. Subsequently, the data processing device 140 may transmit a clock pattern to the data driving device 130. The data driving device 130 may receive the clock pattern and perform a training of the communication clock according to the clock pattern.
After completing the training of the communication clock, the data driving device 130 may change the voltage of a signal formed in the first auxiliary line AL1 from a second level (for example, a low level) to a first level (for example, a high level). The data processing device 140 and the data driving device 130 may communicate with each other in a phase locked loop (PLL) method. In such a method, the data driving device 130 may generate an internal clock in conformity with the frequency and phase of the clock pattern. The data driving device 130 may complete the clock training within a time limit T1 for training. The data processing device 140 may transmit the clock pattern during an initial clock training (ICT) time section that comprises a predetermined margin time so as to be longer than the time limit T1. The clock training may be performed in an early stage of data transmission. In addition, when the link between the data processing device 140 and the data driving device 130 is lost, the clock training may be performed again. After the clock training has been completed, the data processing device 140 may transmit the initial configuration value for configuring the environment for a high-speed communication to the data driving device 130, and subsequently transmit image data to the data driving device 130 through the main line ML. According to an embodiment, a low-speed communication may be performed between the data driving device 130 and the data processing device 140 while the environment for a high-speed communication is configured and the clock training is performed, and a high-speed communication may be performed therebetween after the clock training has been completed. Meanwhile, the image data may be transmitted in every frame, and there may be a frame blank time section (vertical blank: VB) between two adjacent frames, each for image data transmission. The time section remaining after excluding the frame blank time section may be referred to as a frame active time section. As described above, the data processing device 140 may transmit image data to the data driving device 130 in every frame and transmit the stored initial configuration value thereto in a frame blank time section between one frame and another, as shown in FIG. 5. Here, one frame may comprise a plurality of sub time sections, and image data may be transmitted during one sub time section. For example, one frame may comprise a plurality of H (horizontal) time sections 1-H (horizontal periods) respectively corresponding to a plurality of lines of pixels in the display panel. The data processing device 140 may transmit the image data corresponding to each line during every H time section 1-H. An H time section 1-H may, for example, comprise a configuration transmission section and an image transmission section with respect to the data processing device 140. The data processing device 140 may transmit image data in the image transmission section of an H time section 1-H. An H time section 1-H may comprise a configuration reception section CFG and an image reception section DATA with respect to the data driving device 130. The data driving device 130 may receive image data in the image reception section DATA.
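An illustrative sketch of this per-frame layout follows: each frame is a sequence of H time sections (a configuration slot CFG followed by an image slot DATA, one per pixel line), and the stored initial configuration value is re-sent in the vertical-blank gap between frames. All slot durations are assumed values, not figures from the patent.

```python
def frame_schedule(num_lines, cfg_us=1.0, data_us=6.0, vblank_us=500.0):
    t = 0.0
    events = []
    for line in range(num_lines):
        events.append((t, f"CFG  line {line}"))
        t += cfg_us
        events.append((t, f"DATA line {line}"))
        t += data_us
    events.append((t, "VB: resend stored initial configuration value"))
    return events, t + vblank_us

events, frame_period = frame_schedule(num_lines=4)
for t, slot in events:
    print(f"{t:7.1f} us  {slot}")
print(f"frame period ~ {frame_period:.1f} us")
```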
Meanwhile, the data driving device 130 may check the configuration data and the image data and, in a case when the configuration data or the image data is beyond predetermined regulations, for example when there is any abnormality in the high-speed communication of the device due to noise such as static, the data driving device 130 may generate the second signal, which is a lock fail signal. In other words, the data driving device 130 may change the level of the voltage of the signal formed in the first auxiliary line AL1 from the first level (for example, a high level) to the second level (for example, a low level). The lock fail signal may indicate that the link between the data processing device 140 and the data driving device 130 is lost. In such a case, a general data driving device and a general data processing device perform the configuration of the environment for a high-speed communication again by using a low-speed communication, as shown in FIG. 6A. For this reason, it takes a long time T2 until the link between the data driving device and the data processing device is restored to its normal state. However, according to an embodiment, the data driving device 130 receives the initial configuration value from the data processing device 140 and stores it as the configuration restoration value. When the link between the data processing device 140 and the data driving device 130 is lost, that is, when the first signal is changed to the second signal as shown in FIG. 6B, the data driving device 130 may rapidly restore the environment for a high-speed communication by using the stored configuration restoration value. Accordingly, it is possible to reduce the time T3 that it takes to restore the link between the data driving device 130 and the data processing device 140. Hereinafter, the process of transmitting and receiving data between the data driving device and the data processing device will be described. FIG. 7 is a flow diagram illustrating a process of transmitting and receiving data in a data driving device. Referring to FIG. 7, when driving voltages VCC are applied to the data driving device 130 and the data processing device 140, the data driving device 130 may perform a low-speed communication with the data processing device 140 for configuring an environment for a high-speed communication with the data processing device 140 (S710). After having configured the environment for a high-speed communication, the data driving device 130 may receive from the data processing device 140 a clock pattern indicating a communication clock for the communication with the data processing device 140 and perform a clock training (S720). After having normally completed the clock training, the data driving device 130 may output a first signal indicating that the state of communication is stable and transmit it to the data processing device 140. After S720, the data driving device 130 may receive from the data processing device 140 an initial configuration value regarding the configuration of the environment for a high-speed communication and store it as a configuration restoration value (S730). Here, in S710 and S720, a low-speed communication may be performed between the data driving device 130 and the data processing device 140, and from S730 a high-speed communication may be performed between the data driving device 130 and the data processing device 140.
After having stored the initial configuration value as a configuration restoration value as described above, the data driving device 130 may periodically receive image data from the data processing device 140 by a high-speed communication and process the image data (S740). In a case when there is any abnormality in the high-speed communication due to noise such as static occurring inside the display device 100 while the data driving device 130 periodically receives and processes image data, the data driving device 130 may restore the configuration of the environment for a high-speed communication according to the stored configuration restoration value (S750, S760). In a case when no abnormality occurs in the high-speed communication in S750, the data driving device 130 may perform the operation of S740. Meanwhile, in S760 the data driving device 130 may change the first signal to the second signal and transmit the second signal to the data processing device 140. Here, the second signal may be a signal indicating that the state of communication is unstable. According to an embodiment, after S760 the data driving device 130 may receive a clock pattern from the data processing device 140 by a high-speed communication and re-perform a clock training. When the re-performed clock training is not completed, the data driving device 130 may re-perform a low-speed communication with the data processing device 140 for re-configuring the environment for a high-speed communication and perform the clock training again. After S730, the data driving device 130 may receive the initial configuration value from the data processing device 140 in every predetermined period by a high-speed communication and compare the received initial configuration value with the stored configuration restoration value. In a case when the initial configuration value and the stored configuration restoration value are identical, the data driving device 130 may keep the stored configuration restoration value. In a case when the initial configuration value and the stored configuration restoration value are different, the data driving device 130 may perform a low-speed communication with the data processing device 140 for re-configuring the environment for a high-speed communication. In addition, the data driving device 130 may receive a clock pattern from the data processing device 140 and re-perform a clock training. Subsequently, the data driving device 130 may receive from the data processing device 140 a re-configuration value for re-configuring the environment for a high-speed communication and update the stored configuration restoration value with the re-configuration value. After S730, the data driving device 130 may periodically check if there is any error in the stored configuration restoration value by using at least one of the parity check method, the cyclical redundancy check method, and the checksum method. When confirming that there is an error in the stored configuration restoration value, the data driving device 130 may re-receive the initial configuration value from the data processing device 140. The data driving device 130 may update the configuration restoration value comprising the error, that is, the stored configuration restoration value, with the re-received initial configuration value. The above-described process may be repeated while the driving voltages VCC are applied to the data driving device 130 and the data processing device 140.
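The driving-device flow of FIG. 7 (S710 to S760) can be condensed into one loop. As before, `link` and `ram` are assumed abstractions; the method names do not come from the patent and are illustrative only.

```python
def data_driver_flow(link, ram):
    """Sketch of the FIG. 7 flow (S710-S760) on the data driving device."""
    link.low_speed_configure_env()                      # S710
    link.train_clock()                                  # S720
    link.send_lock_signal()
    ram["restore"] = link.recv_initial_config()         # S730
    while link.powered():
        link.drive_panel(link.recv_image_data())        # S740
        if link.fault_detected():                       # S750
            link.send_unlock_signal()                   # S760: fast restore
            link.apply_config(ram["restore"])
            if not link.train_clock():
                link.low_speed_configure_env()          # fall back to S710
                link.train_clock()
            link.send_lock_signal()
```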
When the driving voltages VCC stop being applied thereto, the above-described process may be ended. FIG. 8 is a flow diagram illustrating a process of transmitting and receiving data in a data processing device. Referring to FIG. 8, when driving voltages VCC are supplied to the data driving device 130 and the data processing device 140, the data processing device 140 may perform a low-speed communication with the data driving device 130 to configure an environment for a high-speed communication with the data driving device 130 (S810). After having completed the configuration of the environment for a high-speed communication with the data driving device 130 by using a low-speed communication, the data processing device 140 may store an initial configuration value for the configuration of the environment for a high-speed communication (S820). After having configured the environment for a high-speed communication, the data processing device 140 may transmit a clock pattern, indicating a communication clock, to the data driving device 130 so that the data driving device 130 may perform a clock training (S830). When the clock training in the data driving device 130 is completed, the data processing device 140 may transmit the stored initial configuration value to the data driving device 130 (S840). Here, the data processing device 140 may transmit the clock pattern to the data driving device 130 by a low-speed communication, and then transmit the initial configuration value to the data driving device 130 by a high-speed communication. Subsequently, the data processing device 140 may periodically transmit image data to the data driving device 130 by a high-speed communication (S850). After S850, the data processing device 140 may periodically check if there is any error in the stored initial configuration value by using at least one of the parity check method, the cyclical redundancy check method, and the checksum method. When confirming that there is an error in the stored initial configuration value, the data processing device 140 may receive the configuration restoration value from the data driving device 130 and update the stored initial configuration value with the configuration restoration value. Here, the data processing device 140 may receive the configuration restoration value in the form of a low-voltage differential signal. The above-described process may be repeated while the driving voltages VCC are applied to the data driving device 130 and the data processing device 140. When the driving voltages VCC stop being applied thereto, the above-described process may be ended.
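The matching controller-side sketch for FIG. 8 (S810 to S850) follows, under the same assumed `link` abstraction; `frames` is an iterable of image frames supplied by the host.

```python
def data_processor_flow(link, frames):
    """Sketch of the FIG. 8 flow (S810-S850) on the data processing device."""
    link.low_speed_configure_env()                      # S810
    initial_cfg = link.snapshot_config()                # S820: store value
    link.send_clock_pattern()                           # S830
    link.wait_for_lock_signal()
    link.send_initial_config(initial_cfg)               # S840
    for frame in frames:                                # S850: periodic data
        link.send_image_data(frame)
        if link.stored_config_corrupted(initial_cfg):   # periodic error check
            initial_cfg = link.recv_restore_value()     # update from driver
```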
11862125

With regard to descriptions of the drawings, the same or similar components will be marked by the same or similar reference signs.

DETAILED DESCRIPTION

Hereinafter, various embodiments of the disclosure are described with reference to the accompanying drawings. However, it is not intended to limit the disclosure to specific embodiments, and it should be understood that various modifications, equivalents, and/or alternatives of the embodiments of the disclosure are included. It should be appreciated that the various embodiments of the present disclosure and the terms used therein are not intended to limit the technological features set forth herein to particular embodiments, and include various changes, equivalents, or replacements for a corresponding embodiment. With regard to the description of the drawings, similar reference numerals may be used to refer to similar or related elements. It is to be understood that a singular form of a noun corresponding to an item may include one or more of the things, unless the relevant context clearly indicates otherwise. As used herein, each of such phrases as "A or B," "at least one of A and B," "at least one of A or B," "A, B, or C," "at least one of A, B, and C," and "at least one of A, B, or C," may include any one of, or all possible combinations of, the items enumerated together in a corresponding one of the phrases. As used herein, such terms as "1st" and "2nd," or "first" and "second," may be used to simply distinguish a corresponding component from another, and do not limit the components in other aspects (e.g., importance or order). It is to be understood that if an element (e.g., a first element) is referred to, with or without the term "operatively" or "communicatively", as "coupled with," "coupled to," "connected with," or "connected to" another element (e.g., a second element), it means that the element may be coupled with the other element directly (e.g., wiredly), wirelessly, or via a third element. FIG. 1 is a block diagram illustrating an electronic device 101 in a network environment 100, according to an embodiment. Referring to FIG. 1, the electronic device 101 in the network environment 100 may communicate with an external electronic device 102 via a first network 198 (e.g., a short-range wireless communication network), or with an external electronic device 104 or a server 108 via a second network 199 (e.g., a long-range wireless communication network). According to an embodiment, the electronic device 101 may communicate with the external electronic device 104 via the server 108. According to an embodiment, the electronic device 101 may include a processor 120, memory 130, an input device or module 150, a sound output device or module 155, a display device or module 160, an audio module 170, a sensor module 176, an interface 177, a haptic module 179, a camera module 180, a power management module 188, a battery 189, a communication module 190, a subscriber identification module (SIM) 196, or an antenna module 197. In some embodiments, at least one (e.g., the display module 160 or the camera module 180) of the components may be omitted from the electronic device 101, or one or more other components may be added to the electronic device 101. In some embodiments, some of the components may be implemented as single integrated circuitry. For example, the sensor module 176 (e.g., a fingerprint sensor, an iris sensor, or an illuminance sensor) may be implemented as embedded in the display module 160 (e.g., a display).
The processor 120 may execute, for example, software (e.g., a program 140) to control at least one other component (e.g., a hardware or software component) of the electronic device 101 coupled with the processor 120, and may perform various data processing or computation. According to one embodiment, as at least part of the data processing or computation, the processor 120 may store a command or data received from another component (e.g., the sensor module 176 or the communication module 190) in volatile memory 132, process the command or the data stored in the volatile memory 132, and store resulting data in non-volatile memory 134. According to an embodiment, the processor 120 may include a main processor 121 (e.g., a central processing unit (CPU) or an application processor (AP)), or an auxiliary processor 123 (e.g., a graphics processing unit (GPU), a neural processing unit (NPU), an image signal processor (ISP), a sensor hub processor, or a communication processor (CP)) that is operable independently from, or in conjunction with, the main processor 121. For example, when the electronic device 101 includes the main processor 121 and the auxiliary processor 123, the auxiliary processor 123 may be adapted to consume less power than the main processor 121, or to be specific to a specified function. The auxiliary processor 123 may be implemented as separate from, or as part of, the main processor 121. The auxiliary processor 123 may control at least some of the functions or states related to at least one component (e.g., the display module 160, the sensor module 176, or the communication module 190) among the components of the electronic device 101, instead of the main processor 121 while the main processor 121 is in an inactive (e.g., sleep) state, or together with the main processor 121 while the main processor 121 is in an active state (e.g., executing an application). According to an embodiment, the auxiliary processor 123 (e.g., an image signal processor or a communication processor) may be implemented as part of another component (e.g., the camera module 180 or the communication module 190) functionally related to the auxiliary processor 123. According to an embodiment, the auxiliary processor 123 (e.g., the neural processing unit) may include a hardware structure specified for artificial intelligence model processing. An artificial intelligence model may be generated by machine learning. Such learning may be performed, e.g., by the electronic device 101 where the artificial intelligence is performed, or via a separate server (e.g., the server 108). Learning algorithms may include, but are not limited to, e.g., supervised learning, unsupervised learning, semi-supervised learning, or reinforcement learning. The artificial intelligence model may include a plurality of artificial neural network layers. The artificial neural network may be a deep neural network (DNN), a convolutional neural network (CNN), a recurrent neural network (RNN), a restricted Boltzmann machine (RBM), a deep belief network (DBN), a bidirectional recurrent deep neural network (BRDNN), a deep Q-network, or a combination of two or more thereof, but is not limited thereto. The artificial intelligence model may, additionally or alternatively, include a software structure other than the hardware structure. The memory 130 may store various data used by at least one component (e.g., the processor 120 or the sensor module 176) of the electronic device 101. The various data may include, for example, software (e.g., the program 140) and input data or output data for a command related thereto.
The memory 130 may include the volatile memory 132 or the non-volatile memory 134. The program 140 may be stored in the memory 130 as software, and may include, for example, an operating system (OS) 142, middleware 144, or an application 146. The input device 150 may receive a command or data to be used by another component (e.g., the processor 120) of the electronic device 101, from the outside (e.g., a user) of the electronic device 101. The input device 150 may include, for example, a microphone, a mouse, a keyboard, a key (e.g., a button), or a digital pen (e.g., a stylus pen). The sound output module 155 may output sound signals to the outside of the electronic device 101. The sound output module 155 may include, for example, a speaker or a receiver. The speaker may be used for general purposes, such as playing multimedia or playing a recording. The receiver may be used for receiving incoming calls. According to an embodiment, the receiver may be implemented as separate from, or as part of, the speaker. The display module 160 may visually provide information to the outside (e.g., a user) of the electronic device 101. The display module 160 may include, for example, a display, a hologram device, or a projector and control circuitry to control a corresponding one of the display, hologram device, and projector. According to an embodiment, the display module 160 may include a touch sensor adapted to detect a touch, or a pressure sensor adapted to measure the intensity of force incurred by the touch. The audio module 170 may convert a sound into an electrical signal and vice versa. According to an embodiment, the audio module 170 may obtain the sound via the input device 150, or output the sound via the sound output module 155 or a headphone of an external electronic device (e.g., the electronic device 102) directly (e.g., wiredly) or wirelessly coupled with the electronic device 101. The sensor module 176 may detect an operational state (e.g., power or temperature) of the electronic device 101 or an environmental state (e.g., a state of a user) external to the electronic device 101, and then generate an electrical signal or data value corresponding to the detected state. According to an embodiment, the sensor module 176 may include, for example, a gesture sensor, a gyro sensor, an atmospheric pressure sensor, a magnetic sensor, an acceleration sensor, a grip sensor, a proximity sensor, a color sensor, an infrared (IR) sensor, a biometric sensor, a temperature sensor, a humidity sensor, or an illuminance sensor. The interface 177 may support one or more specified protocols to be used for the electronic device 101 to be coupled with the external electronic device (e.g., the electronic device 102) directly (e.g., wiredly) or wirelessly. According to an embodiment, the interface 177 may include, for example, a high definition multimedia interface (HDMI), a universal serial bus (USB) interface, a secure digital (SD) card interface, or an audio interface. A connecting terminal 178 may include a connector via which the electronic device 101 may be physically connected with the external electronic device (e.g., the electronic device 102). According to an embodiment, the connecting terminal 178 may include, for example, an HDMI connector, a USB connector, an SD card connector, or an audio connector (e.g., a headphone connector). The haptic module 179 may convert an electrical signal into a mechanical stimulus (e.g., a vibration or a movement) or an electrical stimulus which may be recognized by a user via his tactile sensation or kinesthetic sensation.
According to an embodiment, the haptic module 179 may include, for example, a motor, a piezoelectric element, or an electric stimulator. The camera module 180 may capture a still image or moving images. According to an embodiment, the camera module 180 may include one or more lenses, image sensors, image signal processors, or flashes. The power management module 188 may manage power supplied to the electronic device 101. According to one embodiment, the power management module 188 may be implemented as at least part of, for example, a power management integrated circuit (PMIC). The battery 189 may supply power to at least one component of the electronic device 101. According to an embodiment, the battery 189 may include, for example, a primary cell which is not rechargeable, a secondary cell which is rechargeable, or a fuel cell. The communication module 190 may support establishing a direct (e.g., wired) communication channel or a wireless communication channel between the electronic device 101 and the external electronic device (e.g., the electronic device 102, the electronic device 104, or the server 108) and performing communication via the established communication channel. The communication module 190 may include one or more communication processors that are operable independently from the processor 120 (e.g., the application processor (AP)) and support a direct (e.g., wired) communication or a wireless communication. According to an embodiment, the communication module 190 may include a wireless communication module 192 (e.g., a cellular communication module, a short-range wireless communication module, or a global navigation satellite system (GNSS) communication module) or a wired communication module 194 (e.g., a local area network (LAN) communication module or a power line communication (PLC) module). A corresponding one of these communication modules may communicate with the external electronic device via the first network 198 (e.g., a short-range communication network, such as Bluetooth™, wireless-fidelity (Wi-Fi) direct, or infrared data association (IrDA)) or the second network 199 (e.g., a long-range communication network, such as a legacy cellular network, a 5G network, a next-generation communication network, the Internet, or a computer network (e.g., a LAN or wide area network (WAN))). These various types of communication modules may be implemented as a single component (e.g., a single chip), or may be implemented as multiple components (e.g., multiple chips) separate from each other. The wireless communication module 192 may identify and authenticate the electronic device 101 in a communication network, such as the first network 198 or the second network 199, using subscriber information (e.g., international mobile subscriber identity (IMSI)) stored in the subscriber identification module 196. The wireless communication module 192 may support a 5G network, after a 4G network, and next-generation communication technology, e.g., new radio (NR) access technology. The NR access technology may support enhanced mobile broadband (eMBB), massive machine type communications (mMTC), or ultra-reliable and low-latency communications (URLLC). The wireless communication module 192 may support a high-frequency band (e.g., the mmWave band) to achieve, e.g., a high data transmission rate.
The wireless communication module 192 may support various technologies for securing performance in a high-frequency band, such as, e.g., beamforming, massive multiple-input and multiple-output (massive MIMO), full dimensional MIMO (FD-MIMO), array antennas, analog beamforming, or large scale antennas. The wireless communication module 192 may support various requirements specified in the electronic device 101, an external electronic device (e.g., the electronic device 104), or a network system (e.g., the second network 199). According to an embodiment, the wireless communication module 192 may support a peak data rate (e.g., 20 Gbps or more) for implementing eMBB, loss coverage (e.g., 164 dB or less) for implementing mMTC, or U-plane latency (e.g., 0.5 ms or less for each of downlink (DL) and uplink (UL), or a round trip of 1 ms or less) for implementing URLLC.

The antenna module 197 may transmit or receive a signal or power to or from the outside (e.g., the external electronic device) of the electronic device 101. According to an embodiment, the antenna module 197 may include an antenna including a radiating element composed of a conductive material or a conductive pattern formed in or on a substrate (e.g., a printed circuit board (PCB)). According to an embodiment, the antenna module 197 may include a plurality of antennas (e.g., array antennas). In such a case, at least one antenna appropriate for a communication scheme used in the communication network, such as the first network 198 or the second network 199, may be selected, for example, by the communication module 190 (e.g., the wireless communication module 192) from the plurality of antennas. The signal or the power may then be transmitted or received between the communication module 190 and the external electronic device via the selected at least one antenna. According to an embodiment, another component (e.g., a radio frequency integrated circuit (RFIC)) other than the radiating element may be additionally formed as part of the antenna module 197.

According to various embodiments, the antenna module 197 may form a mmWave antenna module. According to an embodiment, the mmWave antenna module may include a printed circuit board, an RFIC disposed on a first surface (e.g., the bottom surface) of the printed circuit board, or adjacent to the first surface, and capable of supporting a designated high-frequency band (e.g., the mmWave band), and a plurality of antennas (e.g., array antennas) disposed on a second surface (e.g., the top or a side surface) of the printed circuit board, or adjacent to the second surface, and capable of transmitting or receiving signals of the designated high-frequency band.

At least some of the above-described components may be coupled mutually and communicate signals (e.g., commands or data) therebetween via an inter-peripheral communication scheme (e.g., a bus, general purpose input and output (GPIO), serial peripheral interface (SPI), or mobile industry processor interface (MIPI)).

According to an embodiment, commands or data may be transmitted or received between the electronic device 101 and the external electronic device 104 via the server 108 coupled with the second network 199. Each of the external electronic devices 102 or 104 may be a device of a same type as, or a different type from, the electronic device 101. According to an embodiment, all or some of operations to be executed at the electronic device 101 may be executed at one or more of the external electronic devices 102, 104, or 108.
For example, if the electronic device 101 should perform a function or a service automatically, or in response to a request from a user or another device, the electronic device 101, instead of, or in addition to, executing the function or the service, may request the one or more external electronic devices to perform at least part of the function or the service. The one or more external electronic devices receiving the request may perform the at least part of the function or the service requested, or an additional function or an additional service related to the request, and transfer an outcome of the performing to the electronic device 101. The electronic device 101 may provide the outcome, with or without further processing of the outcome, as at least part of a reply to the request. To that end, cloud computing, distributed computing, mobile edge computing (MEC), or client-server computing technology may be used, for example. The electronic device 101 may provide ultra low-latency services using, e.g., distributed computing or mobile edge computing. In another embodiment, the external electronic device 104 may include an internet-of-things (IoT) device. The server 108 may be an intelligent server using machine learning and/or a neural network. According to an embodiment, the external electronic device 104 or the server 108 may be included in the second network 199. The electronic device 101 may be applied to intelligent services (e.g., smart home, smart city, smart car, or healthcare) based on 5G communication technology or IoT-related technology.

The electronic device according to various embodiments may be one of various types of electronic devices. The electronic devices may include, for example, a portable communication device (e.g., a smartphone), a computer device, a portable multimedia device, a portable medical device, a camera, a wearable device, or a home appliance. According to an embodiment of the disclosure, the electronic devices are not limited to those described above.

A "module" may include a unit implemented in hardware, software, or firmware, and may interchangeably be used with other terms, for example, "logic," "logic block," "part," or "circuitry". A module may be a single integral component, or a minimum unit or part thereof, adapted to perform one or more functions. For example, according to an embodiment, the module may be implemented in a form of an application-specific integrated circuit (ASIC). A "device" may be equivalent to a module within another device, or it may be its own structure.

Various embodiments as set forth herein may be implemented as software (e.g., the program 140) including one or more instructions that are stored in a storage medium (e.g., internal memory 136 or external memory 138) that is readable by a machine (e.g., the electronic device 101). For example, a processor (e.g., the processor 120) of the machine (e.g., the electronic device 101) may invoke at least one of the one or more instructions stored in the storage medium, and execute it, with or without using one or more other components under the control of the processor. This allows the machine to be operated to perform at least one function according to the at least one instruction invoked. The one or more instructions may include code generated by a compiler or code executable by an interpreter. The machine-readable storage medium may be provided in the form of a non-transitory storage medium.
Here, the term "non-transitory" simply means that the storage medium is a tangible device and does not include a signal (e.g., an electromagnetic wave), but this term does not differentiate between a case where data is semi-permanently stored in the storage medium and a case where the data is temporarily stored in the storage medium.

According to an embodiment, a method according to various embodiments of the disclosure may be included and provided in a computer program product. The computer program product may be traded as a product between a seller and a buyer. The computer program product may be distributed in the form of a machine-readable storage medium (e.g., compact disc read only memory (CD-ROM)), or be distributed (e.g., downloaded or uploaded) online via an application store (e.g., PlayStore™), or between two user devices (e.g., smart phones) directly. If distributed online, at least part of the computer program product may be temporarily generated or at least temporarily stored in the machine-readable storage medium, such as the memory of the manufacturer's server, a server of the application store, or a relay server.

According to various embodiments, each component (e.g., a module or a program) of the above-described components may include a single entity or multiple entities, and some of the multiple entities may be separately disposed in different components. According to various embodiments, one or more of the above-described components may be omitted, or one or more other components may be added. Alternatively or additionally, a plurality of components (e.g., modules or programs) may be integrated into a single component. In such a case, according to various embodiments, the integrated component may still perform one or more functions of each of the plurality of components in the same or similar manner as they are performed by a corresponding one of the plurality of components before the integration. According to various embodiments, operations performed by the module, the program, or another component may be carried out sequentially, in parallel, repeatedly, or heuristically, or one or more of the operations may be executed in a different order or omitted, or one or more other operations may be added.

FIG. 2 is a block diagram 200 illustrating the display module 160, according to an embodiment. Referring to FIG. 2, the display module 160 may include a display 210 and a display driver integrated circuit (DDI) 230 to control the display 210. The DDI 230 may include an interface module 231, memory 233 (e.g., buffer memory), an image processing module 235, or a mapping module 237.

The DDI 230 may receive image information that contains image data, or an image control signal corresponding to a command to control the image data, from another component of the electronic device 101 via the interface module 231. For example, according to an embodiment, the image information may be received from the processor 120 (e.g., the main processor 121 (e.g., an application processor)) or the auxiliary processor 123 (e.g., a graphics processing unit) operating independently from the function of the main processor 121. The DDI 230 may communicate, for example, with touch circuitry serving as part or all of the input device 150, or the sensor module 176, via the interface module 231. The DDI 230 may also store at least part of the received image information in the memory 233, for example, on a frame by frame basis.
The image processing module 235 may perform pre-processing or post-processing (e.g., adjustment of resolution, brightness, or size) with respect to at least part of the image data. According to an embodiment, the pre-processing or post-processing may be performed, for example, based at least in part on one or more characteristics of the image data or one or more characteristics of the display 210.

The mapping module 237 may generate a voltage value or a current value corresponding to the image data pre-processed or post-processed by the image processing module 235. According to an embodiment, the generating of the voltage value or current value may be performed, for example, based at least in part on one or more attributes of the pixels (e.g., an array, such as an RGB stripe or a pentile structure, of the pixels, or the size of each subpixel). At least some pixels of the display 210 may be driven, for example, based at least in part on the voltage value or the current value such that visual information (e.g., a text, an image, or an icon) corresponding to the image data may be displayed via the display 210.

According to an embodiment, the display module 160 may further include the touch circuitry 250. The touch circuitry 250 may include a touch sensor 251 and a touch sensor IC 253 to control the touch sensor 251. The touch sensor IC 253 may control the touch sensor 251 to sense a touch input or a hovering input with respect to a certain position on the display 210. To achieve this, for example, the touch sensor 251 may detect (e.g., measure) a change in a signal (e.g., a voltage, a quantity of light, a resistance, or a quantity of one or more electric charges) corresponding to the certain position on the display 210. The touch circuitry 250 may provide input information (e.g., a position, an area, a pressure, or a time) indicative of the touch input or the hovering input detected via the touch sensor 251 to the processor 120. According to an embodiment, at least part (e.g., the touch sensor IC 253) of the touch circuitry 250 may be formed as part of the display 210 or the DDI 230, or as part of another component (e.g., the auxiliary processor 123) disposed outside the display module 160.

According to an embodiment, the display module 160 may further include at least one sensor (e.g., a fingerprint sensor, an iris sensor, a pressure sensor, or an illuminance sensor) of the sensor module 176, or a control circuit for the at least one sensor. In such a case, the at least one sensor or the control circuit for the at least one sensor may be embedded in one portion of a component (e.g., the display 210, the DDI 230, or touch circuitry serving as part or all of the input device 150) of the display module 160. For example, when the sensor module 176 embedded in the display module 160 includes a biometric sensor (e.g., a fingerprint sensor), the biometric sensor may obtain biometric information (e.g., a fingerprint image) corresponding to a touch input received via a portion of the display 210. As another example, when the sensor module 176 embedded in the display module 160 includes a pressure sensor, the pressure sensor may obtain pressure information corresponding to a touch input received via a partial or whole area of the display 210. According to an embodiment, the touch sensor 251 or the sensor module 176 may be disposed between pixels in a pixel layer of the display 210, or over or under the pixel layer.
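To make the DDI data path above concrete, the following is a minimal sketch of the flow from the interface module 231 through the frame buffer, the image processing module 235, and the mapping module 237. The class, the 8-bit grayscale frames, and the linear voltage mapping with its constants are illustrative assumptions, not details taken from the disclosure.

    # Minimal sketch of the DDI data path (interface -> buffer -> processing ->
    # mapping); frame format and constants are hypothetical.
    from collections import deque

    class DisplayDriverIC:
        def __init__(self, v_max: float = 4.6):
            self.frame_buffer = deque(maxlen=2)  # memory 233: frames stored frame by frame
            self.v_max = v_max                   # assumed full-scale drive voltage

        def receive(self, frame: list[list[int]]) -> None:
            # Interface module 231: accept image data from the processor.
            self.frame_buffer.append(frame)

        def process(self, frame, brightness: float = 1.0):
            # Image processing module 235: e.g., brightness adjustment.
            return [[min(255, int(p * brightness)) for p in row] for row in frame]

        def map_to_voltages(self, frame):
            # Mapping module 237: convert pixel values into per-pixel drive voltages.
            return [[self.v_max * p / 255 for p in row] for row in frame]

        def refresh(self):
            # Drive the display 210 from the most recently buffered frame.
            if self.frame_buffer:
                frame = self.process(self.frame_buffer[-1])
                return self.map_to_voltages(frame)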
FIG. 3 is a block diagram 300 illustrating the program 140, according to an embodiment.

According to an embodiment, the program 140 may include an operating system (OS) 142 to control one or more resources of the electronic device 101, middleware 144, or an application 146 executable in the OS 142. The OS 142 may include, for example, Android™, iOS™, Windows™, Symbian™, Tizen™, or Bada™. At least part of the program 140, for example, may be pre-loaded on the electronic device 101 during manufacture, or may be downloaded from or updated by an external electronic device (e.g., the electronic device 102 or 104, or the server 108) during use by a user.

The OS 142 may control management (e.g., allocation or deallocation) of one or more system resources (e.g., process, memory, or power source) of the electronic device 101. The OS 142, additionally or alternatively, may include one or more driver programs to drive other hardware devices of the electronic device 101, for example, the input device 150, the sound output module 155, the display module 160, the audio module 170, the sensor module 176, the interface 177, the haptic module 179, the camera module 180, the power management module 188, the battery 189, the communication module 190, the subscriber identification module 196, or the antenna module 197.

The middleware 144 may provide various functions to the application 146 such that a function or information provided from one or more resources of the electronic device 101 may be used by the application 146. The middleware 144 may include, for example, an application manager 301, a window manager 303, a multimedia manager 305, a resource manager 307, a power manager 309, a database manager 311, a package manager 313, a connectivity manager 315, a notification manager 317, a location manager 319, a graphic manager 321, a security manager 323, a telephony manager 325, or a voice recognition manager 327.

The application manager 301, for example, may manage the life cycle of the application 146. The window manager 303, for example, may manage one or more graphical user interface (GUI) resources that are used on a screen. The multimedia manager 305, for example, may identify one or more formats to be used to play media files, and may encode or decode a corresponding one of the media files using a codec appropriate for the corresponding format selected from the one or more formats. The resource manager 307, for example, may manage the source code of the application 146 or a memory space of the memory 130. The power manager 309, for example, may manage the capacity, temperature, or power of the battery 189, and may determine or provide related information to be used for the operation of the electronic device 101 based at least in part on corresponding information of the capacity, temperature, or power of the battery 189. According to an embodiment, the power manager 309 may interwork with a basic input/output system (BIOS) (not shown) of the electronic device 101.

The database manager 311, for example, may generate, search, or change a database to be used by the application 146. The package manager 313, for example, may manage installation or update of an application that is distributed in the form of a package file. The connectivity manager 315, for example, may manage a wireless connection or a direct connection between the electronic device 101 and the external electronic device. The notification manager 317, for example, may provide a function to notify a user of an occurrence of a specified event (e.g., an incoming call, message, or alert). The location manager 319, for example, may manage locational information on the electronic device 101.
The graphic manager 321, for example, may manage one or more graphic effects to be offered to a user, or a user interface related to the one or more graphic effects. The security manager 323, for example, may provide system security or user authentication. The telephony manager 325, for example, may manage a voice call function or a video call function provided by the electronic device 101. The voice recognition manager 327, for example, may transmit a user's voice data to the server 108, and receive, from the server 108, a command corresponding to a function to be executed on the electronic device 101 based at least in part on the voice data, or text data converted based at least in part on the voice data. According to an embodiment, the middleware 144 may dynamically delete some existing components or add new components. According to an embodiment, at least part of the middleware 144 may be included as part of the OS 142 or may be implemented as other software separate from the OS 142.

The application 146 may include, for example, a home application 351, dialer application 353, short message service (SMS)/multimedia messaging service (MMS) application 355, instant message (IM) application 357, browser application 359, camera application 361, alarm application 363, contact application 365, voice recognition application 367, email application 369, calendar application 371, media player application 373, album application 375, watch application 377, health application 379 (e.g., for measuring the degree of workout or biometric information, such as blood sugar), and/or environmental information application 381 (e.g., for measuring air pressure, humidity, or temperature information).

According to an embodiment, the application 146 may further include an information exchanging application (not shown) that is capable of supporting information exchange between the electronic device 101 and the external electronic device. The information exchange application, for example, may include a notification relay application adapted to transfer designated information (e.g., a call, message, or alert) to the external electronic device, or a device management application adapted to manage the external electronic device. The notification relay application may transfer notification information corresponding to an occurrence of a specified event (e.g., receipt of an email) at another application (e.g., the email application 369) of the electronic device 101 to the external electronic device. Additionally or alternatively, the notification relay application may receive notification information from the external electronic device and provide the notification information to a user of the electronic device 101.

The device management application may control the power (e.g., turn-on or turn-off) or the function (e.g., adjustment of brightness, resolution, or focus) of the external electronic device or some component thereof (e.g., a display device or a camera module of the external electronic device). The device management application, additionally or alternatively, may support installation, deletion, or update of an application running on the external electronic device.

Hereinafter, the operation of the electronic device according to an embodiment will be described with reference to FIG. 4.

FIG. 4 is a block diagram 400 illustrating the configuration of an electronic device 101, according to an embodiment.
According to an embodiment, the same components as those of the above-described embodiment will be assigned the same reference numerals, and duplicate description thereof will be omitted.

Referring to FIG. 4, the electronic device 101 may include the memory 130 (e.g., the memory 130 in FIG. 1) and a display module 405 (e.g., the display module 160 in FIG. 2). According to an embodiment, the memory 130 may include an application 401 (e.g., the application 146 in FIG. 3), a graphics module 402 (e.g., the graphic manager 321 in FIG. 3), a display controller driver 403, and a refresh rate managing module 404. According to an embodiment, the display module 405 may be operated by the display driver IC included in the display module 405, and the operations of the application 401, the graphics module 402, the display controller driver 403, and the refresh rate managing module 404 may be performed by a processor (e.g., the processor 120 of FIG. 1) of the electronic device 101, which is operatively connected to the memory 130 of the electronic device 101.

According to an embodiment, the application 401 of the electronic device 101 may include information on a refresh rate required to execute the application 401. According to an embodiment, as the application 401 is executed, the application 401 may transmit the refresh rate information 411 required for the application 401 to the graphics module 402. According to an embodiment, the graphics module 402 may determine a target refresh rate based on the received refresh rate information. According to an embodiment, the graphics module 402 may transmit a request 412 to the display controller driver 403 to change the refresh rate to the determined target refresh rate. For example, the graphics module 402 may receive multiple pieces of refresh rate information from a plurality of applications 401 and may determine the target refresh rate based on the highest refresh rate of the received multiple pieces of refresh rate information. As another example, when receiving the multiple pieces of refresh rate information from the plurality of applications 401, the graphics module 402 may determine the target refresh rate based on a refresh rate of an application displayed in the foreground.

According to an embodiment, the display controller driver 403 may be a driver to control a display controller at the hardware stage, and the display controller may transmit information on a frame to the display module 405 in response to a first synchronization signal (e.g., TE-VSYNC) received from the display driving circuit (e.g., the display driver IC 230 in FIG. 2). According to an embodiment, the display controller driver 403 may transmit a request 413 to the refresh rate managing module 404 to determine a parameter, which is to be controlled, of a first parameter, a second parameter, and a third parameter, to change the refresh rate to the received target refresh rate.

According to an embodiment, the first parameter, which serves as a parameter for controlling the refresh rate through the hardware configuration, may represent a frequency (Hz) of the first synchronization signal (e.g., TE-VSYNC). According to an embodiment, the first synchronization signal may be a signal formed based on the hardware synchronization signal (e.g., HW-VSYNC) generated inside the display driving circuit (e.g., the display driver IC 230 of FIG. 2) included in the display module 405.
According to an embodiment, the first synchronization signal may be generated by the display driving circuit, may be transmitted to the processor, and may be sensed by the display controller driver 403 operating in the processor. According to an embodiment, when the frequency of the hardware synchronization signal (e.g., HW-VSYNC) is 120 Hz, and when the refresh rate managing module 404 changes the first parameter to 60 Hz, even if the hardware synchronization signal is generated 120 times per second, the first synchronization signal (e.g., TE-VSYNC) may be generated 60 times per second. The first synchronization signal is generated with respect to one of every two hardware synchronization signals (e.g., HW-VSYNC) while the remaining one of the two hardware synchronization signals is skipped.

According to an embodiment, the second parameter, which is a parameter for controlling the refresh rate through the hardware configuration, may indicate the increment or the decrement of a "blank" in the information on one frame, which substitutes for a portion of an active video area. According to an embodiment, the area or duration of the blank may be controlled based on at least one of a waiting time (a vertical back porch; VBP) before outputting a vertical signal, a waiting time (a vertical front porch; VFP) after outputting a vertical signal, a waiting time (a horizontal back porch; HBP) before outputting a horizontal signal, and/or a waiting time (a horizontal front porch; HFP) after outputting a horizontal signal. According to an embodiment, the second parameter may indicate the increment or the decrement of the waiting time (VFP) after outputting the vertical signal.

According to an embodiment, when the information on one frame is completely output on the display of the display module 405 (e.g., the display 210 in FIG. 2), the hardware synchronization signal (e.g., HW-VSYNC) may be generated. According to an embodiment, the period of generating the hardware synchronization signal (e.g., HW-VSYNC) may be increased by increasing the blank area in the one frame information. In other words, the frequency of the hardware synchronization signal (e.g., HW-VSYNC) may be reduced by increasing the second parameter. For example, when the refresh rate managing module 404 increases the second parameter, the display controller driver 403 controls the display module 405 to increase the waiting time (VFP) after outputting the vertical signal, thereby decreasing the frequency of the hardware synchronization signal (e.g., HW-VSYNC).

According to an embodiment, the third parameter, which serves as a parameter for controlling the refresh rate through the software configuration, may represent a frequency (Hz) of the second synchronization signal (e.g., SW-VSYNC). According to an embodiment, the second synchronization signal (e.g., SW-VSYNC) may be a signal generated based on the first synchronization signal (e.g., TE-VSYNC) received by the display controller driver 403. According to an embodiment, the display controller driver 403 may generate the second synchronization signal (e.g., SW-VSYNC) and transmit the second synchronization signal to the graphics module 402.
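The first and third parameters both amount to deriving a lower-rate signal by forwarding only every Nth incoming pulse, as the numeric examples in the following paragraphs show. The short sketch below models that divide-by-skipping scheme; the function and variable names are illustrative assumptions, not part of the disclosure. The same generator describes either TE-VSYNC derived from HW-VSYNC (first parameter) or SW-VSYNC derived from TE-VSYNC (third parameter).

    # Minimal sketch: derive a lower-frequency sync signal by skipping pulses.
    def derive_sync(source_hz: int, target_hz: int, pulses: int):
        """Yield True for pulses that produce a derived sync, False for skipped ones."""
        assert source_hz % target_hz == 0, "assume integer division of the source rate"
        skip = source_hz // target_hz  # e.g., 120 -> 60 Hz keeps 1 of every 2 pulses
        for i in range(pulses):
            yield i % skip == 0

    # First parameter example: HW-VSYNC at 120 Hz, first parameter set to 60 Hz.
    te_vsync = list(derive_sync(120, 60, 8))   # [True, False, True, False, ...]
    # Third parameter example: TE-VSYNC at 120 Hz, third parameter set to 30 Hz.
    sw_vsync = list(derive_sync(120, 30, 8))   # [True, False, False, False, True, ...]
    print(sum(te_vsync), "TE pulses and", sum(sw_vsync), "SW pulses per 8 HW pulses")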
According to an embodiment, when the frequency of the hardware synchronization signal (e.g., HW-VSYNC) is 120 Hz, when the first parameter is 120 Hz, and when the refresh rate managing module 404 changes the third parameter to 60 Hz, even if the first synchronization signal (e.g., TE-VSYNC) is generated 120 times per second, the second synchronization signal (e.g., SW-VSYNC) may be generated 60 times per second. The second synchronization signal is generated with respect to one of every two first synchronization signals (e.g., TE-VSYNC) while the remaining one of the two first synchronization signals is skipped.

According to an embodiment, when the frequency of the hardware synchronization signal (e.g., HW-VSYNC) is 120 Hz, when the first parameter is 60 Hz, and when the refresh rate managing module 404 changes the third parameter to 30 Hz, even if the hardware synchronization signal (e.g., HW-VSYNC) is generated 120 times per second, the first synchronization signal (e.g., TE-VSYNC) is generated 60 times per second. The first synchronization signal is generated with respect to one of every two hardware synchronization signals (e.g., HW-VSYNC) while the remaining one of the two hardware synchronization signals is skipped. In this case, when the first synchronization signal (e.g., TE-VSYNC) is generated 60 times per second, the second synchronization signal (e.g., SW-VSYNC) may be generated 30 times per second. The second synchronization signal is generated based on one of every two first synchronization signals (e.g., TE-VSYNC) while the remaining one of the two first synchronization signals is skipped.

According to an embodiment, the refresh rate managing module 404 may determine a parameter, which is to be controlled, of the first parameter, the second parameter, and the third parameter, to change the refresh rate to the received target refresh rate. According to an embodiment, the first parameter, the second parameter, and the third parameter may have the features shown in Table 1.

TABLE 1

Method for changing refresh      Visibility degree       Current        Switching
rate to target refresh rate      for change of           consumption    rate
                                 refresh rate

First parameter (frequency       Low                     High           Fast
(Hz) of first synchronization
signal) changed

Second parameter (proportion     High                    Low            Slow
of blank area) changed

Third parameter (frequency       Low                     High           Fastest
(Hz) of second synchronization
signal) changed

In Table 1, "high", "low", "fast", and "slow" may be values relative to each other. Referring to Table 1, when the first parameter is changed according to an embodiment, the current consumption may be higher. However, the rate of changing the refresh rate is faster when the first parameter is changed, and the degree of change in the display screen (e.g., a screen stuttering phenomenon, a color change, and/or a brightness change) is lower when the refresh rate is changed. Accordingly, the visibility degree (the degree to which the change of the refresh rate is visibly recognized) for the change of the refresh rate is lower. In addition, according to an embodiment, when the first parameter is changed to reduce the refresh rate, a portion of the first synchronization signal (e.g., TE-VSYNC) is omitted. Accordingly, the operation of the processor (e.g., the processor 120 of FIG. 1) operating in synchronization with the first synchronization signal slows down.
However, referring to Table 1, according to an embodiment, when the second parameter is changed, the rate of changing the refresh rate is slower, and the degree of change of the display screen (e.g., a screen stuttering phenomenon, a color change, and/or a brightness change) is higher when the refresh rate is changed, such that the visibility degree (the degree to which the change of the refresh rate is visibly recognized) for the change of the refresh rate is higher. However, the change of the second parameter may reduce the current consumption.

In addition, referring to Table 1, according to an embodiment, when the third parameter is changed, the current consumption may be higher. However, when the third parameter is changed, the rate of changing the refresh rate is fastest, and the change (e.g., a screen stuttering phenomenon, a color change, and/or a brightness change) of the display screen is lower when the refresh rate is changed, such that the visibility degree for the change of the refresh rate is lower.

According to an embodiment, the refresh rate managing module 404 may determine a parameter, which is to be controlled, of the first parameter, the second parameter, and the third parameter, depending on the environment of ambient illuminance of the electronic device 101, whether the electronic device 101 displays a still image, the size of the display of the electronic device 101, the type of a display panel included in the electronic device 101, the interference state of the frequency of a peripheral device of the display panel, the required reactivity, and/or the current consumption. According to an embodiment, the refresh rate managing module 404 may make a determination of changing at least one of the first parameter, the second parameter, or the third parameter, such that the refresh rate is changed to the received target refresh rate.

For example, when the ambient illuminance of the electronic device 101 is higher, the visibility degree for the change of the refresh rate becomes lower, and changing the second parameter yields the lowest current consumption. Accordingly, the refresh rate managing module 404 may make a determination of changing the second parameter to correspond to the target refresh rate. On the contrary, according to an embodiment, when the ambient illuminance of the electronic device 101 is lower, the visibility degree for the change of the refresh rate is higher. Accordingly, the refresh rate managing module 404 may make a determination of changing the third parameter, which results in a lower visibility degree for the change of the refresh rate, to correspond to the target refresh rate, instead of changing the second parameter, which results in a higher visibility degree for the change of the refresh rate. In addition, thereafter, when the ambient illuminance of the electronic device 101 changes to be higher, the refresh rate managing module 404 may gradually change the first parameter and/or the second parameter to correspond to the target refresh rate to reduce the current consumption. A sketch of this selection logic follows.

The change of the refresh rate is viewed with higher visibility under a surrounding environment of lower illuminance or lower brightness. Accordingly, a conventional electronic device is restricted from changing the refresh rate.
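The following is a minimal sketch of the parameter-selection logic just described. The function name, the boolean still-image input, and the use of 40 lux as the boundary for "low" illuminance (a threshold that appears later in the description of operation 702) are illustrative assumptions rather than a definitive implementation.

    # Minimal sketch: which parameter the refresh rate managing module might change.
    def choose_parameter(ambient_lux: float, still_image: bool) -> str:
        if ambient_lux < 40:
            # Low illuminance: changes are easy to see, so prefer the third
            # parameter (SW-VSYNC frequency), whose change is least visible.
            return "third"
        if still_image:
            # Higher illuminance and a still image: prefer the second parameter
            # (blank proportion) for the lowest current consumption.
            return "second"
        # Otherwise fall back to the first parameter (TE-VSYNC frequency),
        # which switches fast with low visibility.
        return "first"

    print(choose_parameter(ambient_lux=10, still_image=True))   # -> "third"
    print(choose_parameter(ambient_lux=500, still_image=True))  # -> "second"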
In a conventional electronic device, therefore, when a higher refresh rate is not required (e.g., while a still screen image is displayed), the refresh rate cannot be lowered under a surrounding environment of lower illuminance or lower brightness. Accordingly, rendering is performed more frequently, so more current is consumed. However, according to the electronic device 101 of the embodiment of the disclosure, the refresh rate managing module 404 may change the third parameter, which results in a lower visibility degree for the change of the refresh rate, to correspond to the target refresh rate, even under an environment in which the change of the refresh rate would be highly visible, thereby decreasing the number of rendering operations and reducing current consumption. In addition, when the electronic device 101 deviates from the environment making the visibility for the change of the refresh rate higher, the refresh rate managing module 404 may change the first parameter and/or the second parameter to correspond to the target refresh rate, thereby further reducing the number of times of generating the hardware synchronization signal (e.g., HW-VSYNC) and/or the first synchronization signal (e.g., TE-VSYNC).

On the contrary, when a higher refresh rate is required under a surrounding environment of lower illuminance or lower brightness (e.g., when a more rapid response is required at a higher refresh rate, as a touch input of a user is made), according to the electronic device 101 of an embodiment of the disclosure, the refresh rate managing module 404 first changes the third parameter, which results in a lower visibility for the change of the refresh rate, to correspond to the target refresh rate, such that the electronic device 101 can change the refresh rate even under an environment making the visibility for the change of the refresh rate higher.

According to an embodiment, a condition for determining the change of at least one of the first parameter, the second parameter, or the third parameter will be described in detail with reference to FIGS. 7 to 9.

According to an embodiment, the refresh rate managing module 404 may make a determination of changing at least one of the first parameter, the second parameter, or the third parameter to change the refresh rate to the received target refresh rate, and may transmit information 414 on the determined parameter to the display controller driver 403. According to an embodiment, when the first parameter and/or the second parameter is changed based on the information on the determined parameter, the display controller driver 403 may transmit, to the display module 405, information 415 on the changed first parameter and/or second parameter. According to an embodiment, when the third parameter is changed based on the information on the determined parameter, the display controller driver 403 may update the frequency of the second synchronization signal (e.g., SW-VSYNC) based on the information on the changed third parameter.

According to an embodiment, when the display module 405 receives the information on the changed first parameter and/or second parameter, the display module 405 may generate the first synchronization signal (e.g., TE-VSYNC) to correspond to the changed first parameter and/or second parameter, and may transmit the first synchronization signal 416 (e.g., TE-VSYNC) to the display controller driver 403.
According to an embodiment, when the first parameter is changed, the display module 405 may omit generating the first synchronization signal (e.g., TE-VSYNC) corresponding to the hardware synchronization signal (e.g., HW-VSYNC) at specific periods, to correspond to the first parameter. According to an embodiment, when the second parameter is changed, the display module 405 may control at least one of the waiting time (the vertical back porch; VBP) before outputting a vertical signal, the waiting time (the vertical front porch; VFP) after outputting the vertical signal, the waiting time (the horizontal back porch; HBP) before outputting the horizontal signal, and/or the waiting time (the horizontal front porch; HFP) after outputting the horizontal signal, to correspond to the second parameter.

According to an embodiment, by changing at least one of these waiting times, the display module 405 may generate the hardware synchronization signal (e.g., HW-VSYNC) and the first synchronization signal (e.g., TE-VSYNC) when the information on one frame is completely output, and may transmit the first synchronization signal 416 (e.g., TE-VSYNC) to the display controller driver 403.

According to an embodiment, the display controller driver 403 may generate the second synchronization signal (e.g., SW-VSYNC) to correspond to the received first synchronization signal (e.g., TE-VSYNC) and transmit the second synchronization signal 417 to the graphics module 402. Although the display controller driver 403 generally generates the second synchronization signal (e.g., SW-VSYNC) to correspond to the received first synchronization signal (e.g., TE-VSYNC), when the third parameter is changed according to an embodiment, the display controller driver 403 may generate the second synchronization signal (e.g., SW-VSYNC) to correspond to the changed third parameter. According to an embodiment, when the third parameter is changed, the display controller driver 403 may omit generating the second synchronization signal (e.g., SW-VSYNC) corresponding to the first synchronization signal (e.g., TE-VSYNC), to correspond to the third parameter.

According to an embodiment, the graphics module 402 may render the information on a frame to correspond to the received second synchronization signal (e.g., SW-VSYNC).

Hereinafter, the operation of the electronic device according to an embodiment will be described with reference to FIG. 5.

FIG. 5 is a diagram 500 illustrating an operation of an electronic device, until the electronic device outputs a frame to a display, according to an embodiment. Referring to FIG. 5, the electronic device (e.g., the electronic device 101 of FIG. 1) may include a graphics module 501, a display controller driver 502, a GRAM 503, and a display module 504.

According to an embodiment, a first duration 510 will be described below. According to an embodiment, for the first duration 510, the refresh rate of the display module 504 may be 120 Hz, the first parameter may be 120 Hz, the second parameter may be 0, and the third parameter may be 120 Hz. According to an embodiment, the display controller driver 502 may receive a first synchronization signal (e.g., TE-VSYNC) 511 from the display module 504.
According to an embodiment, the display module 504 may generate the first synchronization signal 511 to correspond to the hardware synchronization signal (e.g., HW-VSYNC) generated inside the display driving circuit (e.g., the display driver IC 230 of FIG. 2) whenever information on one frame is completely output on the display panel (e.g., the display 210 of FIG. 2). According to an embodiment, as the display controller driver 502 receives the first synchronization signal 511 from the display module 504, the display controller driver 502 may determine that the display module 504 normally outputs a present frame (FRAME0; not illustrated), and may transmit, to the GRAM 503 and the display module 504, information on a first frame (FRAME1), which is a next frame. Although FIG. 5 illustrates the GRAM 503 and the display module 504 as separate from each other for illustrative purposes, the GRAM 503 may be included in the display module 504 according to an embodiment.

The display module 504 may store the information on the first frame FRAME1, which is received from the display controller driver 502, in the GRAM 503. The display module 504 may output the first frame FRAME1 information stored in the GRAM 503 to the display panel. A duration in which frame information is received from the display controller driver 502 and output to the display panel may be referred to as an address duration ADDR.

The display controller driver 502 may generate the second synchronization signal (e.g., SW-VSYNC) corresponding to the first synchronization signal 511, and transmit the signal to the graphics module 501, in response to the first synchronization signal 511 received from the display module 504. The graphics module 501 may render the second frame FRAME2, which is a next frame, in response to a received second synchronization signal 512. The display controller driver 502 may receive information on the second frame FRAME2 from the graphics module 501. The display controller driver 502 may transmit the information on the second frame FRAME2 to the GRAM 503 and the display module 504, based on the second synchronization signal 512. The display module 504 may output the second frame FRAME2 information stored in the GRAM 503 to the display panel. The above operation may be iterated for the first duration 510.

For the first duration 510, the first parameter is 120 Hz, which is equal to the refresh rate of the display module 504, so the display module 504 may generate the first synchronization signal 511 whenever the address duration ADDR is terminated, without omitting the first synchronization signal 511. In addition, for the first duration 510, the second parameter is zero. Accordingly, the display module 504 may not increase or decrease the blank area. In addition, for the first duration 510, the third parameter is 120 Hz, which is equal to the first parameter. Accordingly, the display controller driver 502 may generate the second synchronization signal 512 whenever receiving the first synchronization signal 511, without omitting the second synchronization signal 512.
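The steady-state loop of the first duration 510 can be summarized in a few lines of code. The following is a minimal sketch under assumed names (the class, its methods, and the frame-string representation are illustrative, not from the disclosure): each TE-VSYNC scans out the frame already queued in the GRAM while a forwarded SW-VSYNC triggers rendering of the next frame.

    # Minimal sketch of the per-VSYNC frame loop of the first duration 510.
    class FramePipeline:
        def __init__(self):
            self.gram = None          # GRAM 503: holds the frame being scanned out
            self.pending = "FRAME1"   # frame already handed to the display
            self.next_id = 2

        def on_te_vsync(self):
            # Display module 504 raised TE-VSYNC: the present frame was output.
            self.gram = self.pending           # scan out from the GRAM
            rendered = self.on_sw_vsync()      # driver forwards SW-VSYNC
            self.pending = rendered            # queue the next frame

        def on_sw_vsync(self) -> str:
            # Graphics module 501 renders the next frame on SW-VSYNC.
            frame = f"FRAME{self.next_id}"
            self.next_id += 1
            return frame

    pipe = FramePipeline()
    for _ in range(4):                 # four TE-VSYNC periods at 120 Hz
        pipe.on_te_vsync()
    print(pipe.gram, pipe.pending)     # -> FRAME4 FRAME5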
According to an embodiment, although the target refresh rate of the display module 504 is changed to 30 Hz, the electronic device 101 may make a determination of changing only the third parameter to 30 Hz, based on the environment of ambient illuminance of the electronic device 101, on whether the electronic device 101 displays a still image, on the size of the display of the electronic device 101, on the type of a display panel included in the electronic device 101, on the interference state of the frequency of a peripheral device of the display panel, on the required reactivity, and/or on the current consumption. According to an embodiment, for a second duration 520, the target refresh rate may be 30 Hz, the first parameter may be 120 Hz, the second parameter may be 0, and the third parameter may be 30 Hz.

According to an embodiment, the second duration 520 will be described below. According to an embodiment, for the second duration 520, the first parameter may be 120 Hz and the third parameter may be 30 Hz, so the display controller driver 502 generates the second synchronization signal 512 with respect to one of every four first synchronization signals 511 received. In other words, the display controller driver 502 may generate one second synchronization signal 512 with respect to one of the four first synchronization signals 511, without generating the second synchronization signal 512 with respect to the remaining three of the four first synchronization signals 511. The graphics module 501 does not perform rendering for a next frame for a duration in which the second synchronization signal 512 is not generated. Accordingly, the display controller driver 502 may not receive information on a new frame. Accordingly, even if the display controller driver 502 receives the first synchronization signal 511, the display controller driver 502 may not transmit the information on the frame to the display module 504. The display module 504 may repeatedly output a fifth frame FRAME5 stored in the GRAM 503.

According to an embodiment, as the condition of the electronic device 101 is changed for a third duration 530, the electronic device may make a determination of increasing the second parameter, while considering low current consumption. According to an embodiment, the electronic device may increase the second parameter such that the frequency of the hardware synchronization signal (e.g., HW-VSYNC), which is generated whenever one frame is completely output, becomes 60 Hz. According to an embodiment, the electronic device may increase the waiting time VFP after outputting the vertical signal, such that the frequency of the hardware synchronization signal (e.g., HW-VSYNC) becomes 60 Hz.

Hereinafter, the third duration 530 will be described below. According to an embodiment, the display module 504 may increase the waiting time VFP after outputting the vertical signal, by the increment of the second parameter, after outputting an active video area when outputting the information on the frame. The duration of the waiting time VFP after outputting the vertical signal, which is increased by the display module 504, may be referred to as a blank duration (VBLANK). After the blank duration (VBLANK) is terminated, the display driving circuit may generate the hardware synchronization signal. Accordingly, the period of the hardware synchronization signal is doubled by the increment of the blank duration.
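The doubling just described, and the quadrupling in the fourth duration 540 below, follow directly from line-timing arithmetic: HW-VSYNC fires once per total line count, so adding blank (VFP) lines stretches its period. The sketch below illustrates this with hypothetical panel timings; the line time, line counts, and function name are assumptions, not values from the disclosure.

    # Minimal sketch: extra VFP blank lines stretch the HW-VSYNC period.
    LINE_TIME_US = 3.4          # assumed time to scan one line, in microseconds
    ACTIVE_LINES = 2400         # assumed active video lines per frame
    BASE_BLANK = 50             # assumed VBP + base VFP + sync lines

    def hw_vsync_hz(extra_vfp_lines: int) -> float:
        """HW-VSYNC frequency given extra blank (VFP) lines added per frame."""
        total_lines = ACTIVE_LINES + BASE_BLANK + extra_vfp_lines
        frame_period_us = total_lines * LINE_TIME_US
        return 1_000_000 / frame_period_us

    print(round(hw_vsync_hz(0)), "Hz")  # ~120 Hz with no extra blank
    # Doubling the total line count roughly halves the frequency (third duration
    # 530), and quadrupling it yields about a quarter (fourth duration 540).
    print(round(hw_vsync_hz(ACTIVE_LINES + BASE_BLANK)), "Hz")        # ~60 Hz
    print(round(hw_vsync_hz(3 * (ACTIVE_LINES + BASE_BLANK))), "Hz")  # ~30 Hz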
The period of the first synchronization signal 511, which is generated based on the hardware synchronization signal, may likewise be doubled. In other words, the number of times of generating the first synchronization signal 511 may be reduced to half of that in the first duration 510 and the second duration 520.

According to an embodiment, as the condition of the electronic device 101 is changed for a fourth duration 540, the electronic device may make a determination of increasing the second parameter to correspond to 30 Hz, which is the target refresh rate, while considering current consumption. According to an embodiment, the electronic device may increase the second parameter such that the frequency of the hardware synchronization signal (e.g., HW-VSYNC) becomes 30 Hz. According to an embodiment, the electronic device may increase the waiting time VFP after outputting the vertical signal, such that the frequency of the hardware synchronization signal (e.g., HW-VSYNC) becomes 30 Hz.

Hereinafter, the fourth duration 540 will be described below. According to an embodiment, the display module 504 may increase the waiting time VFP after outputting the vertical signal, by the increment of the second parameter, after outputting an active video area when outputting the information on the frame. Accordingly, the period of the hardware synchronization signal is quadrupled by the increment of the blank duration. Accordingly, the period of the first synchronization signal 511 generated based on the hardware synchronization signal may also be quadrupled. In other words, the number of times of generating the first synchronization signal 511 may be reduced to one quarter of that of the first duration 510 and the second duration 520.

According to an embodiment, when the electronic device changes the target refresh rate from 120 Hz to 30 Hz, the electronic device may first change the third parameter, which is a parameter for controlling the refresh rate through the software configuration, to correspond to the target refresh rate, based on the environment of the ambient illuminance of the electronic device, on whether the electronic device displays a still image, on the size of the display of the electronic device, on the type of a display panel included in the electronic device, on the interference state of the frequency of a peripheral device of the display panel, and/or on the required reactivity. As the above condition of the electronic device is changed, the electronic device may change the second parameter to correspond to the target refresh rate. Accordingly, the target refresh rate is implemented through the hardware configuration at the final stage, thereby reducing current consumption.

Hereinafter, the operation of the electronic device according to an embodiment will be described with reference to FIG. 6.

FIG. 6 is a diagram 600 illustrating an operation of an electronic device, until the electronic device outputs a frame to a display, according to an embodiment. Referring to FIG. 6, an electronic device (e.g., the electronic device 101 in FIG. 1) may include a graphics module 601, a display controller driver 602, a GRAM 603, and a display module 604.

According to an embodiment, for a first duration 610, the refresh rate of the display module 604 may be 120 Hz, the first parameter may be 120 Hz, the second parameter may be 0, and the third parameter may be 120 Hz. According to an embodiment, the description of the first duration 610 may be the same as the description of the first duration 510 of FIG. 5.
According to an embodiment, although the target refresh rate of the display module 604 is changed to 30 Hz, the electronic device may make a determination of changing only the third parameter to 30 Hz, based on the environment of the ambient illuminance of the electronic device, on whether the electronic device displays a still image, on the size of the display of the electronic device, on the type of a display panel included in the electronic device, on the interference state of the frequency of a peripheral device of the display panel, on the required reactivity, and/or on the current consumption. According to an embodiment, for a second duration 620, the target refresh rate may be 30 Hz, the first parameter may be 120 Hz, the second parameter may be 0, and the third parameter may be 30 Hz. According to an embodiment, the description of the second duration 620 may be the same as the description of the second duration 520 of FIG. 5.

According to an embodiment, as the condition of the electronic device is changed for a third duration 630, the electronic device may make a determination of changing the first parameter to 60 Hz, while considering current consumption. According to an embodiment, for the third duration 630, the refresh rate of the display module 604 is 120 Hz, and the first parameter is 60 Hz. Accordingly, the display module 604 may generate a first synchronization signal (e.g., TE-VSYNC) 611 only with respect to one of every two hardware synchronization signals (e.g., HW-VSYNC) generated by the display driving circuit. In other words, the display module 604 may generate the first synchronization signal 611 with respect to one of two hardware synchronization signals, without generating the first synchronization signal 611 with respect to the remaining one of the two hardware synchronization signals. For the duration in which the first synchronization signal 611 is not generated, the second synchronization signal (e.g., SW-VSYNC) 612 is not generated either. Accordingly, the graphics module 601 may not perform the rendering for the next frame. Accordingly, the display controller driver 602 does not receive information on a new frame, and the display module 604 may repeatedly output a sixth frame FRAME6 stored in the GRAM 603.

According to an embodiment, as the condition of the electronic device 101 is changed for a fourth duration 640, the electronic device may make a determination of changing the first parameter to 30 Hz, while considering current consumption. According to an embodiment, for the fourth duration 640, the refresh rate of the display module 604 is 120 Hz, and the first parameter is 30 Hz. The display module 604 may generate the first synchronization signal 611 with respect to one of every four hardware synchronization signals generated by the display driving circuit. In other words, the display module 604 may generate the first synchronization signal 611 with respect to one of four hardware synchronization signals, without generating the first synchronization signal 611 with respect to the remaining three of the four hardware synchronization signals. For the duration in which the first synchronization signal 611 is not generated, the second synchronization signal 612 is not generated either. Accordingly, the graphics module 601 may not perform the rendering for the next frame. Accordingly, the display controller driver 602 does not receive information on a new frame, and the display module 604 may repeatedly output a seventh frame FRAME7 stored in the GRAM 603.
According to an embodiment, when the electronic device changes the target refresh rate from 120 Hz to 30 Hz, the electronic device may first change the third parameter, which is a parameter for controlling the refresh rate through the software configuration, to correspond to the target refresh rate, based on the environment of the ambient illuminance of the electronic device, on whether the electronic device displays a still image, on the size of the display of the electronic device, on the type of a display panel included in the electronic device, on the interference state of the frequency of a peripheral device of the display panel, and/or on the required reactivity. As the above condition of the electronic device is changed, the electronic device may change the first parameter to correspond to the target refresh rate. Accordingly, the target refresh rate is implemented through the hardware configuration at the final stage, thereby reducing current consumption.

Hereinafter, the operation of the electronic device according to an embodiment will be described with reference to FIGS. 7 and 8.

FIG. 7 is a flowchart 700 illustrating the operation of an electronic device, according to an embodiment. FIG. 8 is a flowchart 800 illustrating the operation of an electronic device, according to an embodiment. According to an embodiment, the same components as those of the above-described embodiment will be assigned the same reference numerals, and duplicate description thereof will be omitted. According to an embodiment, the operation of the electronic device (e.g., the electronic device 101 of FIG. 1) may be performed by a processor (e.g., the processor 120 of FIG. 1) of the electronic device.

Referring to FIG. 7, in operation 701, according to an embodiment, an electronic device may determine whether the refresh rate needs to be changed. According to an embodiment, the electronic device may determine that the refresh rate needs to be changed when an application of the electronic device is executed. According to an embodiment, when switching between a still image and a moving picture occurs, when a user input is made or is not made for a specific time interval, or when a screen to be displayed on the display is changed, the electronic device may determine that the refresh rate needs to be changed.

According to an embodiment, when the electronic device determines that the refresh rate does not need to be changed, the electronic device may determine whether the restriction on the change of the refresh rate has changed, in operation 702. The restriction on the change of the refresh rate refers to a restriction by which the refresh rate is prevented from being changed. For example, when the environment of the ambient illuminance is 40 lux or less, the change of the refresh rate may be restricted. When the restriction on the change of the refresh rate is changed, the restriction may have been present and then removed, or may have been absent and then imposed. For example, when the environment of the ambient illuminance is changed from 40 lux or less to more than 40 lux, the electronic device may identify the restriction on the change of the refresh rate as having changed.

According to an embodiment, when the refresh rate needs to be changed, or when the restriction on the change of the refresh rate has changed, the electronic device may determine the target refresh rate.
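The decision flow of operations 701 to 703 can be summarized as follows. This is a minimal sketch; the function name, its boolean inputs, and the returned strings are illustrative assumptions, and the 40 lux boundary is the example restriction given for operation 702.

    # Minimal sketch of operations 701 to 703 of FIG. 7.
    def refresh_rate_step(needs_change: bool, restriction_changed: bool,
                          target_hz: int | None) -> str:
        # Operation 701: does the refresh rate need to be changed?
        if not needs_change:
            # Operation 702: otherwise, did the restriction on changing it
            # change (e.g., ambient illuminance crossing the 40 lux boundary)?
            if not restriction_changed:
                return "keep current refresh rate"
        # Operation 703: determine the target refresh rate, then change at
        # least one of the first, second, or third parameter to reach it.
        assert target_hz is not None
        return f"change parameter(s) toward {target_hz} Hz"

    print(refresh_rate_step(needs_change=True, restriction_changed=False, target_hz=30))
    print(refresh_rate_step(needs_change=False, restriction_changed=False, target_hz=None))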
In operation703, the electronic device may change at least one of the first parameter, the second parameter, or the third parameter to change the refresh rate of the electronic device to the target refresh rate. According to an embodiment, in operation703, the electronic device may consider the condition of the electronic device when determining which parameter to change and the change degree of the parameter. According to an embodiment, the condition considered when the electronic device determines which of the first parameter, the second parameter, or the third parameter to change, and the change degree of the parameter, may include the environment of the ambient illuminance of the electronic device, whether the electronic device displays a still image, the size of the display of the electronic device, the type of a display panel included in the electronic device, the interference state of the frequency of a peripheral device of the display panel, the required reactivity, and/or the current consumption. The following description will be made regarding the conditions considered when the electronic device determines the parameter to be changed and the change degree of the parameter, according to an embodiment. Hereinafter, implementing the target refresh rate through the first parameter and/or the second parameter may be referred to as a hardware (H/W) modulation, and implementing the target refresh rate through the third parameter may be referred to as a software (S/W) modulation. According to an embodiment, the electronic device may consider the environment of the ambient illuminance of the electronic device. For example, when the electronic device is present under the environment of the lower illuminance, and when the color or the brightness of the display screen of the display is changed, the visibility by a user eye is increased more, when compared to the environment of the higher illuminance. Accordingly, when the refresh rate of the electronic device is changed under the environment of the lower illuminance, the change of the refresh rate is more easily viewed, when compared to the environment of the higher illuminance. Accordingly, as the illuminance is lowered, the refresh rate needs to be changed more seamlessly. The following Table 2 shows a criterion of selecting a hardware (H/W) modulation and a software (S/W) modulation.

TABLE 2
Illuminance | Visibility degree for change of refresh rate | Range for changing hardware modulation seamlessly | Selected modulation | Range in which changing frequency is illustrated
40 lux or less | High | Narrow range | S/W | S/W: 120 to 1 Hz; H/W: 120 Hz
More than 40 lux to less than 7400 lux | Low | Wide range (e.g., 120 Hz to 48 Hz) | S/W and H/W | S/W: 120 to 1 Hz; H/W: 120 to 48 Hz
7400 lux or more | Hardly shown | Wider range (e.g., 120 Hz to 1 Hz) | H/W | S/W: 120 to 1 Hz; H/W: 120 to 1 Hz

In Table 2, “high”, “low”, “narrow”, and “wide” may be values relative to each other. Referring to Table 2, as the change of the refresh rate is more easily viewed under the environment of the lower illuminance of 40 lux or less, the electronic device may select the software modulation.
As described with reference to Table 1, when the target refresh rate is implemented in the hardware configuration (that is, when the hardware modulation is employed), the change of the refresh rate is more easily viewed. Accordingly, the range for changing the refresh rate through the hardware modulation seamlessly, without being viewed by a user, may be significantly narrowed under the environment of the lower illuminance of 40 lux or less. In other words, even if the hardware modulation is slightly changed under the environment of the lower illuminance of 40 lux or less, the change of the refresh rate is highly visible. Accordingly, when the target refresh rate is changed under the environment of the lower illuminance of 40 lux or less, the electronic device may select the software modulation. According to an embodiment, when the target refresh rate is changed under the environment of the lower illuminance of 40 lux or less, the hardware modulation may be fixed to 120 Hz, and the software modulation may be changed in the range of 120 Hz to 1 Hz. Referring to Table 2, as the visibility for the change of the refresh rate is lower under the environment of the illuminance in the range of more than 40 lux to less than 7400 lux, the electronic device may select the combination of the software modulation and the hardware modulation. As described with reference to Table 1, although the target refresh rate implemented in the hardware configuration (that is, when the hardware modulation is used) shows the higher visibility for the change of the refresh rate, the visibility for the change of the refresh rate is lowered under the environment of the illuminance in the range of more than 40 lux to less than 7400 lux. The range for changing the hardware modulation seamlessly is widened (e.g., 120 Hz to 48 Hz). Accordingly, when the target refresh rate is changed under the environment of the illuminance in the range of more than 40 lux to less than 7400 lux, the electronic device may select the combination of the software modulation and the hardware modulation. According to an embodiment, when the target refresh rate is changed under the environment of the illuminance in the range of more than 40 lux to less than 7400 lux, the electronic device may change the hardware modulation in the range of 120 Hz to 48 Hz, and may change the software modulation in the range of 120 Hz to 1 Hz. Referring to Table 2, the visibility for the change of the refresh rate is hardly shown under the environment of the illuminance of 7400 lux or more. Accordingly, the electronic device may select the hardware modulation. As described above with reference to Table 1, the target refresh rate implemented in the hardware configuration (that is, when the hardware modulation is used) shows the higher visibility for the change of the refresh rate, but makes the current consumption lower. In other words, the lower visibility for the change of the refresh rate is shown under the environment of the illuminance of 7400 lux or more. Accordingly, under the environment of the illuminance of 7400 lux or more, the range for changing the hardware modulation is widest (e.g., 120 Hz to 1 Hz). Accordingly, when the electronic device changes the target refresh rate under the environment of the illuminance of 7400 lux or more, the electronic device may select the hardware modulation to reduce the current consumption.
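The three illuminance bands of Table 2 suggest a simple selection policy. The following is a hedged sketch of such a policy: the thresholds (40 lux and 7400 lux) and the ranges come from Table 2, but the decision procedure, the function name `select_modulation_by_lux`, and the returned structure are illustrative assumptions.

```python
# Hedged sketch of the Table 2 policy; thresholds and ranges come from the
# table, but the decision procedure itself is an assumption for illustration.

def select_modulation_by_lux(lux: float) -> dict:
    """Return the selected modulation and its allowed ranges per Table 2."""
    if lux <= 40:
        # Low light: any hardware change is easily viewed, so the hardware
        # modulation stays fixed at 120 Hz and software modulation does the work.
        return {"selected": "S/W", "hw_hz": (120, 120), "sw_hz": (120, 1)}
    if lux < 7400:
        # Moderate light: hardware may move down to 48 Hz; combine with software.
        return {"selected": "S/W and H/W", "hw_hz": (120, 48), "sw_hz": (120, 1)}
    # Bright light: the change is hardly shown, so hardware modulation may use
    # its full range, minimizing current consumption.
    return {"selected": "H/W", "hw_hz": (120, 1), "sw_hz": (120, 1)}

print(select_modulation_by_lux(25)["selected"])    # S/W
print(select_modulation_by_lux(500)["selected"])   # S/W and H/W
print(select_modulation_by_lux(10000)["selected"]) # H/W
```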
According to an embodiment, when the target refresh rate is changed under the environment of the illuminance of 7400 lux or more, the electronic device may change the hardware modulation in the range of 120 Hz to 1 Hz, and may change the software modulation in the range of 120 Hz to 1 Hz. When only the scheme of changing the hardware modulation is employed, the change of the refresh rate may be restricted due to the visibility for the change of the refresh rate under the environment of the lower illuminance. However, according to various embodiments of the disclosure, as the scheme of changing the software modulation is used, even when a problem is caused with the visibility for the change of the refresh rate, the frequency for generating the second synchronization signal (e.g., SW-VSYNC), which affects the rendering period of the graphics module, may be lowered. Accordingly, the current consumption may be reduced. According to an embodiment, when determining whether to select the software modulation or the hardware modulation, the electronic device may consider whether the still image is displayed or the moving picture is displayed. For example, although the change of the refresh rate is hardly viewed when the electronic device displays the moving picture, the change of the refresh rate may be more easily viewed when the still image is displayed. The following Table 3 shows a criterion of selecting a hardware (H/W) modulation and a software (S/W) modulation depending on the screen displayed by the electronic device.

TABLE 3
Screen | Visibility degree for change of refresh rate | Range for changing hardware modulation seamlessly | Selected modulation | Range in which changing frequency is illustrated
Still image | High | Narrow range | S/W | S/W: 120 to 1 Hz; H/W: 120 Hz
Moving picture | Hardly shown | Wider range (e.g., 120 Hz to 48 Hz) | H/W | S/W: 120 to 1 Hz; H/W: 120 to 48 Hz

In Table 3, “high”, “low”, “narrow”, and “wide” may be values relative to each other. Referring to Table 3, when the electronic device displays the still image, as the change of the refresh rate is more easily viewed, the electronic device may select the software modulation. When the electronic device displays the still image, the range for changing the hardware modulation seamlessly, without the recognition of the user, may be significantly narrowed. In other words, when the electronic device displays the still image, the change of the refresh rate may be highly visible even if the hardware modulation is slightly changed. Accordingly, when displaying the still image and changing the target refresh rate, the electronic device may select the software modulation. According to an embodiment, when the electronic device displays the still image and changes the target refresh rate, the hardware modulation may be fixed to 120 Hz, and the software modulation may be changed in the range of 120 Hz to 1 Hz. Referring to Table 3, when the electronic device displays the moving picture, as the change of the refresh rate is hardly viewed, the electronic device may select the hardware modulation. When the electronic device displays the moving picture, as the change of the refresh rate is hardly viewed, the range for changing the hardware modulation seamlessly is wider (e.g., 120 Hz to 48 Hz). Accordingly, when the electronic device displays the moving picture, and when the target refresh rate is changed, the electronic device may select the hardware modulation by considering the benefit in terms of the current consumption.
According to an embodiment, when the electronic device displays the moving picture, and when the target refresh rate is changed, the electronic device may change the hardware modulation in the range of 120 Hz to 48 Hz, and may change the refresh rate through the software modulation in the range of 120 Hz to 1 Hz. In other words, the electronic device solves the problem with the visibility for the change of the refresh rate in two stages. In detail, when the electronic device displays the still image, the frequency is changed through the software modulation. Thereafter, when the visibility for the change of the refresh rate is lower as the screen is changed or the brightness is changed, the frequency is changed through the hardware modulation, thereby reducing the current consumption. In addition, according to an embodiment, the electronic device may consider the size of the display screen of the electronic device to determine whether to select the software modulation or to select the hardware modulation. For example, as the screen of the electronic device becomes larger, the change of the refresh rate may be more easily viewed. The following Table 4 shows a criterion of selecting a hardware (H/W) modulation and a software (S/W) modulation depending on the size of the screen displayed by the electronic device.

TABLE 4
Screen size | Visibility degree for change of refresh rate | Range for changing hardware modulation seamlessly | Selected modulation | Range in which changing frequency is illustrated
Larger screen (e.g., 7 inch or more) | High | Narrow range (e.g., 120 Hz to 96 Hz) | S/W | S/W: 120 to 1 Hz; H/W: 120 to 96 Hz
Smaller screen (e.g., 7 inch or less) | Lower | Wide range (e.g., 120 Hz to 48 Hz) | S/W and H/W | S/W: 120 to 1 Hz; H/W: 120 to 48 Hz

In Table 4, “large”, “small”, “high”, “low”, “narrow”, and “wide” may be values relative to each other. Referring to Table 4, when the electronic device corresponds to a tablet PC or a laptop computer and has a larger display screen size (e.g., 7 inch or more), as the change of the refresh rate is more easily viewed, the electronic device may select the software modulation. When the electronic device has the larger display screen size, the range for changing the hardware modulation seamlessly, without the recognition of the user, is relatively narrow (e.g., 120 Hz to 96 Hz). Accordingly, when the electronic device has the larger display screen size and changes the target refresh rate, the electronic device may select the software modulation. According to an embodiment, when the electronic device has the larger display screen size and changes the target refresh rate, the hardware modulation may be changed in the range of 120 Hz to 96 Hz, and the software modulation may be changed in the range of 120 Hz to 1 Hz. Referring to Table 4, when the electronic device is a smartphone, or when the electronic device has a smaller display screen size (e.g., 7 inch or less), as the visibility for the change of the refresh rate is lower, the electronic device may select the combination of the software modulation and the hardware modulation. When the electronic device has a smaller display screen size, as the visibility for the change of the refresh rate is lower, the range for changing the hardware modulation seamlessly is wider (e.g., 120 Hz to 48 Hz). Accordingly, when the electronic device has the smaller display screen size, and when the target refresh rate is changed, the electronic device may select the combination of the software modulation and the hardware modulation by considering the reduction of the current consumption.
According to an embodiment, when the electronic device has the smaller display screen size, and when the target refresh rate is changed, the electronic device may change the hardware modulation in the range of 120 Hz to 48 Hz, and may change the software modulation in the range of 120 Hz to 1 Hz. In addition, according to an embodiment, when the electronic device includes a rollable display or a foldable display, and when the screen size of the display is changed, the electronic device may control the software modulation and the hardware modulation depending on the change in the screen size of the display. In addition, according to an embodiment, the electronic device may consider the characteristic of the leakage current of the display panel of the electronic device to determine whether to select the software modulation or to select the hardware modulation. According to an embodiment, the characteristic of the leakage current of the display panel of the electronic device may be classified depending on the type of the thin film transistor included in the display panel. For example, a display panel based on a low temperature polycrystalline silicon (LTPS) thin film transistor may have a larger amount of leakage current, and a display panel based on a thin film transistor formed by bonding an LTPS and an oxide (e.g., hybrid oxide and polycrystalline silicon (HOP) or low temperature polycrystalline oxide (LTPO)) may have a smaller amount of leakage current. The following Table 5 shows a criterion of selecting a hardware (H/W) modulation and a software (S/W) modulation depending on the leakage current characteristic of the display panel of the electronic device.

TABLE 5
Characteristic of leakage current | Visibility degree for change of refresh rate | Range for changing hardware modulation seamlessly | Selected modulation | Range in which changing frequency is illustrated
Larger amount of leakage current | High | Narrow range | S/W | S/W: 120 to 1 Hz; H/W: 120 to 96 Hz
Smaller amount of leakage current | Lower | Wide range (e.g., 120 Hz to 48 Hz) | S/W and H/W | S/W: 120 to 1 Hz; H/W: 120 to 48 Hz

In Table 5, “large”, “small”, “high”, “low”, “narrow”, and “wide” may be values relative to each other. Referring to Table 5, when the display panel of the electronic device has a larger amount of leakage current, a flicker phenomenon is more frequently caused, so the change of the refresh rate is more easily viewed. Accordingly, the electronic device may select the software modulation. When the display panel of the electronic device has the larger amount of leakage current, the range for changing the hardware modulation seamlessly, without recognition of the user, may be relatively narrowed. Accordingly, when the display panel of the electronic device has the larger amount of leakage current, and when the electronic device changes the target refresh rate, the electronic device may select the software modulation. According to an embodiment, when the display panel of the electronic device has the larger amount of leakage current, and when the target refresh rate is changed, the hardware modulation may be changed in the range of 120 Hz to 96 Hz, and the software modulation may be changed in the range of 120 Hz to 1 Hz. Referring to Table 5, when the display panel of the electronic device has a smaller amount of leakage current, a flicker phenomenon is less frequently caused, so the change of the refresh rate is less easily viewed. Accordingly, the electronic device may select the combination of the software modulation and the hardware modulation.
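Tables 2 through 5 each bound how far the hardware modulation may be lowered. One plausible way to combine them — an assumption, since the text does not state a combination rule — is to let the most restrictive condition govern the hardware modulation and cover the remainder with software modulation, as in the sketch below; the function name `allowed_hw_floor_hz` and its inputs are illustrative.

```python
# Assumed combination rule (not stated in the text): take the narrowest
# hardware-modulation range permitted by any active condition (Tables 2-5),
# and reach the rest of the target via software modulation.

def allowed_hw_floor_hz(lux: float, still_image: bool,
                        screen_inches: float, high_leakage_panel: bool) -> int:
    """Lowest hardware modulation frequency (Hz) that all conditions permit;
    120 means the hardware modulation must stay fixed at the panel rate."""
    floors = [1]                                      # unconstrained: down to 1 Hz
    if lux <= 40:
        floors.append(120)                            # Table 2: low light
    elif lux < 7400:
        floors.append(48)                             # Table 2: moderate light
    floors.append(120 if still_image else 48)         # Table 3: screen content
    floors.append(96 if screen_inches >= 7 else 48)   # Table 4: screen size
    floors.append(96 if high_leakage_panel else 48)   # Table 5: leakage current
    return max(floors)                                # most restrictive wins

# A still image on a bright 6.1-inch low-leakage panel: the still image
# dominates, so the hardware modulation stays at 120 Hz and the target is
# reached through software modulation alone.
print(allowed_hw_floor_hz(8000, True, 6.1, False))  # 120
```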
When the display panel of the electronic device has a smaller amount of leakage current, as the visibility for the change of the refresh rate is lower, the range for changing the hardware modulation seamlessly is wider (e.g., 120 Hz to 48 Hz). Accordingly, when the display panel of the electronic device has a smaller amount of leakage current, and when the target refresh rate is changed, the electronic device may select the combination of the software modulation and the hardware modulation by considering the reduction of the current consumption. According to an embodiment, when the display panel of the electronic device has the smaller amount of leakage current, and when the target refresh rate is changed, the hardware modulation may be changed in the range of 120 Hz to 48 Hz, and the software modulation may be changed in the range of 120 Hz to 1 Hz. In addition, according to an embodiment, the electronic device may consider the interference with a touch circuit, which is a peripheral device of a display module (e.g., the display module160ofFIG.2), to determine whether to select the software modulation or to select the hardware modulation. For example, when the duration in which the display module autonomously outputs a frame, instead of receiving information on the frame from the display controller driver, is increased by increasing the first parameter and/or the second parameter, a potential difference of a specific level or more may be formed in a capacitor of a touch sensor (e.g., the touch sensor251ofFIG.2), and a ghost touch phenomenon, in which a touch input that does not actually exist is recognized, may be caused. Accordingly, when the user is making a touch, the hardware modulation (that is, the change of the first parameter and/or the second parameter) needs to be restricted to prevent the ghost touch phenomenon. The following description will be made with reference toFIG.8. Referring toFIG.8, in operation801, the electronic device may identify whether a touch input of the user is made. In operation802, as the electronic device identifies that the touch input of the user is generated, the adjusting of the refresh rate through the hardware modulation is restricted such that the refresh rate is adjusted to ½ of the present refresh rate. For example, when the present refresh rate is 120 Hz and the target refresh rate is 30 Hz, and when the touch input is made, the adjusting of the refresh rate through the hardware modulation is restricted such that the refresh rate is adjusted to 60 Hz, which is ½ of 120 Hz. AlthoughFIG.8illustrates that the adjusting of the refresh rate through the hardware modulation is restricted such that the refresh rate is adjusted to ½ of the present refresh rate, this is provided only for illustrative purposes; the adjusting of the refresh rate through the hardware modulation may be restricted such that the refresh rate is adjusted to 1/N of the present refresh rate, depending on the design of the touch circuit included in the electronic device. N may be an integer value which is predetermined based on the design of the touch circuit. In operation803, as the adjusting of the refresh rate through the hardware modulation is restricted, the electronic device may control the hardware modulation and/or the software modulation to implement the target refresh rate. For example, when the present refresh rate is 120 Hz and the target refresh rate is 30 Hz, the adjusting of the refresh rate through the hardware modulation is restricted such that the refresh rate is adjusted to 60 Hz; a sketch of this flow is shown below.
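The following is a minimal sketch of the FIG. 8 flow (operations 801 through 806), assuming N = 2 for the touch restriction; the class and attribute names are illustrative assumptions, not the patent's implementation.

```python
# Minimal sketch of the FIG. 8 touch-restriction flow, assuming N = 2.
# All names here are illustrative; the patent describes the flow, not code.

class RefreshController:
    def __init__(self, present_hz: int = 120, n: int = 2):
        self.hw_hz = present_hz   # hardware modulation (first/second parameter)
        self.sw_hz = present_hz   # software modulation (third parameter)
        self.n = n                # restriction divisor, per the touch circuit design
        self.touching = False     # operation 801: is a touch input present?

    def apply_target(self, target_hz: int) -> None:
        if self.touching:
            # Operations 802-803: while touching, the hardware modulation may
            # not drop below 1/N of the present rate; software modulation
            # carries the remaining change down to the target.
            self.hw_hz = max(target_hz, self.hw_hz // self.n)
        else:
            # Operations 805-806: restriction released; the hardware
            # modulation may follow the target directly.
            self.hw_hz = target_hz
        self.sw_hz = target_hz

ctl = RefreshController(present_hz=120)
ctl.touching = True
ctl.apply_target(30)
print(ctl.hw_hz, ctl.sw_hz)  # 60 30 -> H/W held at 120/2 while touching
ctl.touching = False         # operation 804: touch released
ctl.apply_target(30)
print(ctl.hw_hz, ctl.sw_hz)  # 30 30 -> operations 805-806
```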
Accordingly, in the example above, the software modulation may change the refresh rate to 30 Hz. In this case, whether to adjust the refresh rate to 60 Hz through the hardware modulation may be determined based on the above-described illuminance environment, a display screen size, a screen display content, and/or a leakage current characteristic. In operation804, the electronic device may determine whether the touch is released. According to an embodiment, when the touch is not released, a present state may be maintained. In operation805, as the touch is determined as being released, the electronic device may release the frequency adjustment restriction through the hardware modulation. In operation806, the electronic device may control the hardware modulation and/or the software modulation to implement the target refresh rate, as the restriction in the adjusting of the refresh rate through the hardware modulation is released. For example, when the target refresh rate is 30 Hz, the software modulation corresponds to 30 Hz, and the hardware modulation corresponds to 60 Hz, the refresh rate adjusted through the hardware modulation may be changed to 30 Hz, as the restriction in the adjusting of the refresh rate through the hardware modulation is released. In addition, according to an embodiment, the electronic device may consider interference with the wireless communication module (e.g., the wireless communication module192ofFIG.1), which is a peripheral device of the display module (e.g., the display module160ofFIG.1), to determine whether to select the software modulation or the hardware modulation. For example, when it is difficult to change the hardware modulation due to the noise of the radio frequency of the wireless communication module, the frequency interference may be avoided by changing the software modulation. The following Table 6 shows a criterion of selecting a hardware (H/W) modulation and a software (S/W) modulation depending on the interference state of the radio frequency.

TABLE 6
Interference state of radio frequency | Selected modulation
Interference caused | S/W and the combination of S/W and H/W
Interference not caused | H/W

In addition, according to an embodiment, the electronic device may consider the required reactivity to determine whether to select the software modulation or to select the hardware modulation. For example, when the hardware modulation is changed, the changed value is applied when a next first synchronization signal (e.g., TE-VSYNC) is formed. However, when the software modulation is changed, the changed value is immediately applied, such that rapid reactivity is obtained. Accordingly, when higher reactivity is required, the software modulation is applied first. The following Table 7 illustrates the difference in reactivity between the hardware modulation and the software modulation.

TABLE 7
Modulation method | Software modulation frequency | Hardware modulation frequency | Reactivity | Current consumption | Reaction rate when refresh rate is changed in response to touch input
Hardware modulation | 30 Hz | 30 Hz | Slow | Low | 66.7 msec to 100 msec
Software modulation and hardware modulation | 30 Hz | 60 Hz | Intermediate | Intermediate | 33.3 msec to 50 msec
Software modulation | 30 Hz | 120 Hz | Fast | Slightly higher | 16.7 msec to 25 msec

In Table 7, “high”, “low”, “slow”, and “fast” may be values relative to each other. Referring to Table 7, when the present refresh rate is 120 Hz and the target refresh rate is 30 Hz, if the frequency of the hardware modulation is immediately changed to 30 Hz, lower current consumption may be shown and the slowest reactivity may be made.
According to an embodiment, when the frequency of the hardware modulation is changed to 30 Hz in response to the touch input of the user, the reaction rate may be in the range of 66.7 msec to 100 msec. Referring to Table 7, when the present refresh rate is 120 Hz, the target refresh rate is 30 Hz, the frequency of the hardware modulation is changed to 60 Hz, and the frequency of the software modulation is changed to 30 Hz, intermediate current consumption may be shown and intermediate reactivity may be made. According to an embodiment, when the frequency of the hardware modulation is changed to 60 Hz and the frequency of the software modulation is changed to 30 Hz in response to the touch input of the user, the reaction rate may be in the range of 33.3 msec to 50 msec. Referring to Table 7, on the assumption that the present refresh rate is 120 Hz and the target refresh rate is 30 Hz, when the frequency of the software modulation is changed to 30 Hz, slightly higher current consumption may be shown and the fastest reactivity may be made. According to an embodiment, when the frequency of the software modulation is changed to 30 Hz in response to the touch input of the user, the reaction rate may be in the range of 16.7 msec to 25 msec. (Each of these ranges corresponds to roughly two to three frame periods at the respective hardware modulation frequency: 33.3 msec at 30 Hz, 16.7 msec at 60 Hz, and 8.3 msec at 120 Hz.) Accordingly, the electronic device may select the software modulation and the hardware modulation depending on the required reactivity degree. In addition, according to an embodiment, the electronic device may operate a component, such as a touch sensor circuit (e.g., the touch sensor IC253ofFIG.2) of the electronic device and/or a display driving circuit (e.g., the display driver IC230ofFIG.2), which requires higher reactivity, in synchronization with the frequency of the hardware modulation, and may operate a component, such as a graphics processing unit (GPU), which reduces the current consumption, in synchronization with the frequency of the software modulation to reduce a rendering speed, thereby combining higher reactivity and lower-power operation. Referring back toFIG.7, when the electronic device changes at least one of the first parameter, the second parameter, or the third parameter to change the refresh rate of the electronic device to the target refresh rate, the electronic device may determine whether at least one of the first parameter and the second parameter is changed, in operation704. When at least one of the first parameter and the second parameter is changed, the electronic device may employ the changed first parameter and/or the changed second parameter in operation705. According to an embodiment, as the display controller driver transmits the information on the changed first parameter and/or the information on the changed second parameter to the display driving circuit, the electronic device may employ the changed first parameter and/or the changed second parameter. When at least one of the first parameter and the second parameter is not changed, or when the changed first parameter and/or the changed second parameter is employed, the electronic device may update the third parameter in operation706. According to an embodiment, the display controller driver may update the frequency of the second synchronization signal (e.g., SW-VSYNC), based on the information on the third parameter. Hereinafter, the effect of the electronic device according to an embodiment will be described with reference toFIG.9. FIG.9is a view900illustrating an example in which an electronic device is applied, according to an embodiment.
According to an embodiment, the same reference numerals are assigned to the same components as those of the above-described embodiment, and duplicate description thereof is omitted. Referring toFIG.9, an electronic device (e.g., the electronic device101ofFIG.1) may be positioned under the environment of lower illuminance (e.g., an interior environment having an illuminance of 40 lux or less) (901). According to an embodiment, the electronic device may provide a user interface to select a higher-speed driving mode or a normal mode. According to an embodiment, the higher-speed driving mode is a mode to automatically adjust the refresh rate to 120 Hz such that animation and a scroll operation may be more smoothly provided. According to an embodiment, the normal mode may be a mode to reduce the power consumption of the battery while maintaining the refresh rate at 60 Hz. According to an embodiment, the electronic device may select the higher-speed driving mode in the normal mode in which the present refresh rate is 60 Hz (902). The electronic device may increase the refresh rate to 120 Hz, as the higher-speed driving mode is selected. According to an embodiment, the electronic device may identify an application executed by a user and a touch input (903). The electronic device may maintain the refresh rate at 120 Hz, as the application executed by the user and the touch input are identified. According to an embodiment, the electronic device may identify that a specific time has elapsed after the touch is released (904). The electronic device may change the target refresh rate to 60 Hz to lower the refresh rate from 120 Hz to 60 Hz, as the electronic device identifies that the specific time has elapsed after the touch is released. According to an embodiment, the electronic device may determine that the frequency of the hardware modulation is to be maintained and that the frequency of the software modulation is to be changed, in response to recognizing that the electronic device is under the environment of lower illuminance. As the electronic device determines that the frequency of the hardware modulation is to be maintained and that the frequency of the software modulation is to be changed, the electronic device may adjust the rendering speed by reducing the frequency of the software modulation to 60 Hz, and may maintain the frequency of the hardware modulation at 120 Hz (905). According to an embodiment, the electronic device may recognize that the environment of the ambient illuminance of the electronic device exceeds 40 lux (906). According to an embodiment, as the environment of the ambient illuminance of the electronic device is changed to exceed 40 lux, the restriction in the change of the refresh rate is released. Accordingly, the electronic device may determine that the hardware modulation is to be changed. The electronic device may reduce the current consumption, as the frequency of the hardware modulation is reduced to 60 Hz (907). In other words, according to an electronic device of an embodiment of the disclosure, as the software modulation is changed under the environment of the lower illuminance, the rendering speed may be reduced seamlessly. When moving to the environment of the higher illuminance, the electronic device automatically changes the hardware modulation to reduce the current consumption. In addition, according to an embodiment, the target refresh rate of the electronic device may be reduced and then increased again.
According to an embodiment, assume that, while the refresh rate of the electronic device is 120 Hz, the target refresh rate is determined to be 24 Hz and the frequency of the software modulation is changed to 24 Hz as a moving picture is reproduced; when a new message is then updated in a message application, the target refresh rate is changed to 120 Hz. In this case, since only the frequency of the software modulation was changed, the frequency of the software modulation may be immediately changed back to 120 Hz with a small delay. According to an embodiment, when the frequency of the hardware modulation is changed from 24 Hz to 120 Hz, the delay may be up to 41.7 msec (one frame period at 24 Hz). When the frequency of the software modulation is changed from 24 Hz to 120 Hz, the refresh rate may be changed within 8.3 msec (one frame period at 120 Hz). According to the electronic device of an embodiment of the disclosure, the current consumption may be reduced by changing the rendering rate seamlessly under the condition in which it is difficult to change the refresh rate seamlessly. According to the electronic device of an embodiment of the disclosure, the refresh rate may be changed while increasing the efficiency in terms of the seamless change of the refresh rate, the reactivity, the interference of the peripheral device, and the current consumption through the combination of software modulation and hardware modulation. | 104,679 |
11862126 | Throughout the drawings, identical reference numbers designate similar, but not necessarily identical, elements. The figures are not necessarily to scale, and the size of some parts may be exaggerated to more clearly illustrate the example shown. Moreover, the drawings provide examples and/or implementations consistent with the description; however, the description is not limited to the examples and/or implementations provided in the drawings. DETAILED DESCRIPTION Computing devices are used by millions of people daily to carry out business, personal, and social operations and it is not uncommon for an individual to interact with multiple computing devices on a daily basis. Examples of computing devices include desktop computers, laptop computers, all-in-one devices, tablets, and gaming systems to name a few. In some cases, these computing devices are used to communicate with other users via video and audio. For example, a videoconferencing application executing on a computing device may allow a user to see and interact with users in remote locations. A video conferencing application may generate an inset window over the video conference. Such an inset window may present a video stream secondary to that presented in the video conference. An inset window may be a window on a graphical user interface that is smaller than the graphical user interface and is overlaid on top of another window. In an example, the inset window is disposed within the borders of the other window and may display video content that is different than the content of the other window. For example, within a room of multiple users, one of the users may be speaking. The inset window may present a focused view of the speaker such that remote participants may be aware of, and may pay attention to, the speaker. Such a focused view allows a remote participant to identify the speaker and to observe facial and bodily gestures to derive additional meaning from the communication. Such an inset window, which may be referred to as a picture-in-picture window, allows a remote participant to be more involved in the video conference. While particular reference is made to an inset window displaying a different perspective of the scene presented in the underlying window, the inset window may present any variety of different content, such as a different video stream or a different application. In some situations, the inset window may be placed in a corner of the video scene that is being displayed. However, it may be the case that the inset window obscures objects of interest, such as the users, in the room of the video conference. For example, an inset window may block one of the users in the video scene, perhaps even the speaker, such that the remote participant has an obscured view of the events or interactions in the video scene. This is exacerbated as the users in the video scene move around. For example, a user in the video scene may move to a location that is behind the inset window. Accordingly, a previously unobscured user is now obscured by the overlaid inset window. To address this, a remote participant may instruct a user in the video scene to move location so as to not be blocked. This may be cumbersome and may interrupt the flow of the meeting. In another example, the remote participant may move the inset window manually. However, this manual adjustment of the inset window also interrupts the meeting as the remote participant diverts his or her attention from the video scene to the control of the inset window.
Accordingly, the present specification describes a non-transitory machine-readable storage medium and method to determine a location of an inset window on the GUI. The location of the inset window is determined based on objects of interest, such as users, identified in the GUI. Specifically, the location of the inset window is selected to avoid overlapping with any of the objects of interest. The method includes performing object recognition to detect objects in the video scene. The locations of the objects are defined by coordinates relative to the GUI. The method may include automatically tracking the objects as they move throughout the video scene as well. The inset window also has coordinates. During execution of the video streaming application, the inset window coordinates are compared with the object coordinates. The system moves the inset window responsive to any overlap between the coordinates of the inset window and the coordinates of the object. As such, the method and non-transitory machine-readable storage medium automatically detect objects of interest and move the inset window to a location where it does not overlap, or where it minimally overlaps, the tracked objects. Specifically, the present specification describes a non-transitory machine-readable storage medium encoded with instructions executable by a processor of a computing device. As used in the present specification and in the appended claims, the term “non-transitory” does not encompass transitory propagating signals. The instructions, when executed by the processor, cause the processor to 1) identify an object depicted in a video scene, wherein the video scene is displayed on a graphical user interface (GUI) and 2) identify coordinates of the object depicted in the video scene, wherein the coordinates are relative to the GUI. The instructions are also executable by the processor to cause the processor to 1) identify coordinates of an inset window which is smaller than the GUI and overlaps the video scene and 2) compare the coordinates of the object with the coordinates of the inset window to determine an overlap of the inset window with the object. Responsive to an identified overlap of the inset window and the object, the instructions are executable by the processor to cause the processor to alter a display characteristic of the inset window to avoid the overlap of the inset window with the object. The present specification also describes a method. According to the method, a processor of a computing device identifies a user depicted in a video scene, wherein the video scene is displayed in the GUI. The processor also identifies coordinates of the user depicted in the video scene, wherein the coordinates are relative to the GUI. The processor also identifies coordinates of an inset window which is smaller than the GUI and overlaps the video scene. The processor compares the coordinates of the user with the coordinates of the inset window to determine an overlap of the inset window with the user and, responsive to an identified overlap of the inset window and the user, the processor alters a display characteristic of the inset window based on a movement of the user in the video scene to avoid overlap of the inset window and the user. In another example, the instructions are executable by the processor to cause the processor to identify a user depicted in a video scene, wherein the video scene is displayed on the GUI, and generate a bounding box around a head of the user.
The instructions are executable to identify coordinates of 1) the bounding box of the head of the user, wherein the coordinates are relative to the GUI and 2) an inset window which is smaller than the GUI and overlaps the video scene. The instructions are executable to compare the coordinates of the bounding box with the coordinates of the inset window to determine an overlap of the inset window with the bounding box. Responsive to an identified overlap of the inset window with the user, the instructions are executable by the processor to alter a display characteristic of the inset window to avoid the overlap of the inset window with the user. Turning now to the figures,FIGS.1A and1Bdepict the alteration of an inset window102of a GUI100, according to an example. The GUI100presented inFIGS.1A,1B,3A, and3Bmay be displayed on any number of computing devices including a desktop computer display device, a laptop display device, a tablet, a smartphone, or any number of other computing devices. As described above, a video scene may be presented on the GUI100. The video scene may present a variety of content. In the example depicted inFIGS.1A,1B,3A, and3B, the video scene is of multiple users in a meeting room. As described above, an inset window102may be overlaid on top of the video scene to enhance the experience of remote participants. For example, due to low resolution or the size of the GUI100, the remote participant may be unaware of which user is speaking, and therefore which user should command their attention. Accordingly, an inset window102may be presented which highlights a particular user, such as a speaker. That is, the inset window102may depict the user corresponding to an active speaker, which in this case is the third user at the rear of the room. While particular reference is made to particular content presented in the inset window102, other content, such as a separate video stream, may be presented in the inset window102. FIG.1Adepicts a scenario where the inset window102overlaps the video scene and obscures the presentation of some of the users in the video. This may be distracting to the remote participant and may negate or negatively impact the intent of the video communication. Accordingly, the present specification describes the alteration of the inset window102based on detected objects of interest, which in this case are users, in the video scene. For example, as depicted inFIG.1B, the inset window102may be moved to a location within the video scene where it does not block the objects of interest. As depicted inFIG.1B, the first user is no longer blocked by the inset window102such that the remote participant has an unobstructed view of all users. WhileFIG.1Adepicts the alteration of the inset window102as being a movement of the inset window102, another example of an alteration is depicted inFIGS.3A and3Bwhere the inset window102is reduced in size to prevent, or avoid, any overlap with an object of interest. In the example depicted inFIGS.1A and1B, each object of interest is enveloped by a bounding box104-1,104-2,104-3, which may be used to identify an overlap that triggers movement of the inset window102. In other examples, different methods of locating users and identifying overlap may be performed. FIG.2is a flowchart of a method200for altering an inset window102of a GUI100, according to an example. At step201, the method200includes identifying, via a processor of a computing device, a user depicted in a video scene, which video scene is displayed in a GUI100.
WhileFIG.2depicts identification of a user of interest, as described above any variety of other objects of interest may be identified and tracked. Identifying a user, or other object of interest, depicted in the video scene may include performing object identification. Identifying an object or user may occur in a variety of ways. For example, a processor of the computing device may identify a landmark feature on the face of the user. That is, the face of the user has certain landmark features, such as the eyes, mouth, nose, etc., that may be identified via machine-learning to identify the object as a user. Using a machine-learning model, the processor may identify the head and/or body of the user from these landmark features. That is, the machine-learning engine may analyze the image of a user as captured by a capture device. The machine-learning engine may compute and map the features of the objects with regard to the face models library. Such machine-learning identification of the user may occur regardless of the orientation of the user. That is, the processor may identify the head of a user whether the user is facing a capture device or is facing a direction perpendicular to the capture device. In some examples, the identification of the user may include generation of a bounding box104around the users as depicted inFIGS.1A,1B,3A, and3B. Such a bounding box104may simplify the calculation of an overlap of the inset window102with the user. That is, a bounding box104may be generated around the user to envelop the landmark features and a buffer area around the landmark features. While particular reference is made to identifying users in the video scene, other objects of interest may be identified in the video scene. Accordingly, the present method200allows for the alteration of an inset window102so as to provide a desired presentation of any identified object of interest, which objects may be users in the video scene. In another example, the identification of the object of interest may be based on user input. For example, a user may draw the bounding box104around an object of interest that is to be obstruction free. While particular reference is made to particular operations to identify the user, or other object of interest, depicted in the video scene, other operations may be performed as well. At step202, the method200includes identifying, via the processor, coordinates of the user relative to the GUI100. That is, the GUI may have a coordinate system that may be used to define the position of various objects depicted therein. As will be described below, the coordinates of the objects, and particularly of the corners of the objects, may be used to identify an overlap between objects on the GUI100and the inset window102that overlaps the video scene. In a particular example, the top left-hand corner of the GUI100depicted inFIGS.1A and1Bmay be the origin and may have coordinates 0, 0. The location of other objects within the GUI100may be based off this origin. The x-coordinate values increase moving in a rightward direction and the y-coordinate values increase moving in a downward direction in the view ofFIGS.1A and1B. In some examples, the coordinates of the bounding boxes104, as well as of the inset window102, may identify the top left-hand corner coordinates, followed by a width and height of the bounding box104.
For example, the coordinates for the first user bounding box104-1may have the notation (1630, 544) 354×236 where 1630 is the x-coordinate of the upper left-hand corner of the first user bounding box104-1, 544 is the y-coordinate of the upper left-hand corner of the first user bounding box104-1, 354 is the width in the x-direction, and 236 is the height in the y-direction. Given this notation, the upper left-hand corner of the first user bounding box104-1, which may be designated as P1-, is found at the coordinates (1630, 544) relative to the GUI. The lower right-hand corner of the first user bounding box104-1, which may be designated as -P1, has the coordinates (1984, 780). At step203, the method200includes identifying coordinates of the inset window102. As depicted inFIGS.1A and1B, the inset window102is smaller than the GUI100and overlaps the video scene. Similar to the coordinates for the bounding boxes104of the users, the processor may identify coordinates of the inset window102. For example, the processor may identify the upper left-hand coordinates, P0-, of the inset window102to be (1920, 760) and the lower right-hand coordinates of the inset window102, -P0, as (2560, 1440). The coordinates of the bounding boxes104and the inset window102provide a mechanism by which it may be determined that the inset window102is overlapping the objects of interest. Accordingly, at step204, the method200includes comparing, via the processor, the coordinates of the bounding box104surrounding the user with the coordinates of the inset window102to determine an overlap of the inset window102with the user, or other object of interest. As depicted inFIGS.1A and1B, such a comparison may be of different corners of the respective elements. That is, comparing the coordinates of the object with the coordinates of the inset window102may include comparing the coordinates of a first corner of the bounding box104for the user or other object with coordinates of a second corner of the inset window102, where the first corner is opposite the second corner. In the example depicted inFIGS.1A and1B, the lower right-hand corner coordinates of the first user bounding box104-1may be compared with the upper left-hand corner coordinates (i.e., the opposite of the lower right-hand corner) of the inset window102. By comparing the coordinates of these opposing corners, the processor may determine whether there is any overlap. As an example, given the lower right-hand coordinates of the first user bounding box, -P1, of (1984, 780) and given the upper left-hand coordinates of the inset window bounding box, P0-, of (1920, 760), the processor may compare the opposite corners to determine whether P0-X is less than -P1X and whether P0-Y is less than -P1Y. If both these conditions are met, the processor determines that there is an overlap. InFIG.1A, P0-X, which is 1920, is less than -P1X, which is 1984, and P0-Y, which is 760, is less than -P1Y, which is 780. Accordingly, the processor may determine that there is an overlap of the inset window102with the first user bounding box104-1. Similar comparisons may be made between the inset window102and the other bounding boxes104-2,104-3. Note that while a particular example has been provided of comparing an inset window102at a lower right-hand corner of the GUI100with a single bounding box104, similar comparisons may be made between the inset window102and the bounding boxes104-1,104-2,104-3, when the inset window102is initially located in a different corner.
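A hedged sketch of this comparison follows, using the coordinates from the example above; the corner test shown is the one described for an inset window anchored at the lower right-hand corner of the GUI, and the general rectangle-intersection variant for other corners is an assumption added here.

```python
# Sketch of the step 204 corner comparison, using the example coordinates.
# Rect is (x, y, width, height) with the origin at the GUI's top left corner.

def corners(rect):
    """Return the upper-left (P-) and lower-right (-P) corners of a rect."""
    x, y, w, h = rect
    return (x, y), (x + w, y + h)

def overlaps(inset, box):
    """General axis-aligned rectangle intersection; for an inset window in
    the lower right-hand corner this reduces to the described two checks:
    P0-X < -P1X and P0-Y < -P1Y."""
    (ix0, iy0), (ix1, iy1) = corners(inset)
    (bx0, by0), (bx1, by1) = corners(box)
    return ix0 < bx1 and iy0 < by1 and bx0 < ix1 and by0 < iy1

first_user = (1630, 544, 354, 236)   # -P1 = (1984, 780)
inset = (1920, 760, 640, 680)        # P0- = (1920, 760), -P0 = (2560, 1440)
print(overlaps(inset, first_user))   # True: 1920 < 1984 and 760 < 780
```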
Accordingly, at step205, the method200includes altering a display characteristic of the inset window102responsive to the identified overlap. The alteration may take a variety of forms. For example, as depicted inFIGS.1A and1B, the alteration may include moving the inset window102to a different location, which different location still overlaps the video scene. Such a movement of the inset window102may occur in a variety of ways. For example, the processor may move the inset window102to a different location. With the inset window102at the different location, the processor may compare the coordinates of the object bounding box104with the coordinates of the inset window102at the different location to determine if an overlap exists with the inset window102at the different location. That is, in this example, determination of overlap at the different location occurs following movement of the inset window102to the different location. For example, were the inset window102moved to the position depicted inFIG.1B, the upper right-hand corner coordinates of the inset window102may be compared against the lower left-hand corner coordinates of the bounding boxes104, including the second user bounding box104-2. However, as depicted inFIG.1B, such an overlap does not exist, and so the inset window102may remain at this location. Were an overlap to exist, the processor may again move the inset window102to a new location and again test for an overlap. In another example, the processor may determine a non-overlapping location for the inset window102prior to moving the inset window102. That is, the processor may compare the coordinates of the bounding boxes104with the coordinates of the inset window102at multiple candidate locations to determine if there would be any overlap of the inset window102with the objects, were the inset window102moved to the multiple candidate locations. In this example, the processor may move the inset window102to the candidate location which would result in no overlap of the inset window102with the bounding boxes104. That is, in the previous example, the processor may identify the new location of the inset window102via trial and error, whereas in this example, the processor may preemptively determine a location for which there will be no overlap and move the inset window102to that location. In some examples, it may be that the inset window102overlaps the objects/users regardless of the position of the inset window102. Accordingly, in this example, the processor may identify the candidate location for which the inset window102would be largest without overlap of the inset window102with the object. That is, in addition to moving the inset window102, the processor may potentially resize the inset window102. As a particular example, if positioned in the lower left-hand corner, the inset window102may be maintained at a first size to avoid overlap. In this example, it may be the case that if the inset window102were positioned in the upper right-hand corner, upper left-hand corner, or in the lower right-hand corner, the inset window102would be reduced to a second size, which is smaller than the first size, to prevent overlap. In this example, the processor may move the inset window102to the candidate location, i.e., the lower left-hand corner, which would result in the largest inset window102without overlapping the inset window102with the object of interest. As yet another example, it may be desirable to maintain the inset window102at a certain size, even if doing so would result in overlap.
That is, as compared to the above example, if each of the candidate locations would result in overlap of the inset window102with the objects, rather than re-sizing the inset window102, the processor may position the inset window102in a location which has a reduced amount of overlap. Accordingly, the processor may identify, from a set of regions having a same size as the inset window102, a region which would result in the least amount of overlap of the inset window102with the object. In this example, “least amount of overlap” may be determined based on the coordinates of the bounding boxes104and the inset window102. That is, the overlapping region may have an area which may be determined based on a comparison of the coordinates of the bounding box with the coordinates of the inset window102. Accordingly, the region with the “least amount of overlap” may refer to the region where the overlap between the inset window102and the bounding box104has the smallest area. In this example, the processor may move the inset window102to the region which would result in the least amount of overlap of the inset window102with the bounding box104. In this case where overlap exists even after movement, the processor may further alter the display characteristic of the inset window102. For example, the processor may alter a transparency of the inset window102responsive to the least amount of overlap being greater than a threshold amount. For example, if the inset window102overlaps a bounding box104, but by less than a threshold amount such as 10%, the inset window102may be maintained at full opacity. However, if the inset window102overlaps the bounding box by a higher amount such as 25%, then the inset window102may be altered to have a higher transparency value, such as, for example, 25%. While particular reference has been made to different threshold amounts of overlap and transparency levels, any threshold amount and transparency level may be implemented in accordance with the principles described herein. In some particular examples, the amount of transparency may be based on the amount of overlap. In some examples, the alteration that is made is based on a movement of the object of interest in the video scene. That is, in video streams, the multiple users may not be stationary and may be moving. In this example, the processor may track the movement of the objects of interest and update the bounding boxes104that surround the users. As such, the adjustment to the inset window102may be dynamic and automatic throughout the remote communication to ensure that there is no overlap between the inset window102and any object of interest, regardless of the motion of the object of interest. Accordingly, rather than relying on pixel or texture analysis to determine where to position an inset window102, the present method200adjusts the position, size, or other display characteristic based on machine-learning identification of objects and tracking of those objects as they move through a video scene. Moreover, the present method200makes a coordinate-based determination regarding the overlap between the inset window102and objects of interest in the video scene. WhileFIGS.1A and1Bdepict, andFIG.2describes, comparison of a single user bounding box104with an inset window102, the method may compare various object bounding boxes104with an inset window102to determine a desired placement of the inset window102.
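Building on the overlap test above, the following sketch — an illustration under assumed names, not the described implementation — picks, from a set of candidate locations, the placement with the smallest total overlap area against all bounding boxes, and derives a transparency from the residual overlap fraction using the 10% and 25% figures mentioned above as example thresholds.

```python
# Sketch: choose the candidate placement with the least total overlap area,
# then set transparency from the residual overlap (thresholds are examples).

def overlap_area(a, b):
    """Area of intersection of two (x, y, w, h) rectangles."""
    ax0, ay0, aw, ah = a
    bx0, by0, bw, bh = b
    w = min(ax0 + aw, bx0 + bw) - max(ax0, bx0)
    h = min(ay0 + ah, by0 + bh) - max(ay0, by0)
    return max(0, w) * max(0, h)

def place_inset(candidates, boxes):
    """Return (location, transparency) minimizing total overlap with boxes."""
    best = min(candidates, key=lambda c: sum(overlap_area(c, b) for b in boxes))
    x, y, w, h = best
    frac = sum(overlap_area(best, b) for b in boxes) / (w * h)
    # Example policy: fully opaque below 10% overlap, more transparent above.
    transparency = 0.0 if frac < 0.10 else (0.25 if frac >= 0.25 else 0.10)
    return best, transparency

boxes = [(1630, 544, 354, 236), (700, 500, 300, 300)]
candidates = [(1920, 760, 640, 680),   # lower right
              (0, 760, 640, 680),      # lower left
              (0, 0, 640, 680)]        # upper left
print(place_inset(candidates, boxes))  # ((0, 760, 640, 680), 0.0)
```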
Moreover, while FIGS. 1A and 1B depict positioning of the inset window 102 at different corner locations, in some examples, the processor may move the inset window 102 to a non-corner location of the video scene.

FIGS. 3A and 3B depict the alteration of an inset window 102 of a GUI 100, according to an example. As described above, the alterations made to the inset window 102 to prevent overlap with an object bounding box 104 may vary. In the example depicted in FIGS. 3A and 3B, altering the display characteristic of the inset window 102 includes resizing the inset window 102. That is, the processor may compare the opposite corners of the first user bounding box 104-1 and the inset window 102 as described above. If there is a determined overlap, as is depicted in FIG. 3A, the processor may incrementally decrease the size of the inset window 102 until a comparison of the coordinates of the object bounding box 104 and the coordinates of the inset window 102 indicates no overlap of the inset window 102 with the object. Note that while FIGS. 1A, 1B, 3A, and 3B depict different alterations, these alterations may be used independently, in combination with one another, and/or with other alterations to present an inset window 102 that does not obstruct the view of the objects of interest, such as users, within the video scene.

FIG. 4 is a flowchart of a method 400 for altering an inset window 102 of a GUI 100, according to an example. At step 401, the method 400 includes identifying a landmark feature on a face of a user in a video scene. That is, as described above, the present methods and systems may track any variety of objects, an example of which is a user. A user may be identified based on the object recognition of landmark features of the user, such as the user's eyes, nose, mouth, etc. At step 402, the method 400 includes generating a bounding box 104 around the head of the user. The bounding box 104 therefore envelops the landmark feature as well as a buffer area around the landmark feature, such that the entirety of the user's head is captured within the bounding box 104. At step 403, the method 400 includes identifying coordinates of the bounding box 104 around the head of the user, and at step 404, the method 400 includes identifying coordinates of an inset window 102 over the video scene. At step 405, the method 400 includes comparing the coordinates of the bounding box 104 with the coordinates of the inset window 102. These operations may be performed as described above in connection with FIG. 2. At step 406, the method 400 includes altering a display characteristic of the inset window 102. This may be performed as described above in connection with FIG. 2 and may include any variety and combination of alterations, including moving the inset window 102, re-sizing the inset window 102, or other alterations.

As described above, in some examples, the alterations may be based on movement of the object of interest. Accordingly, at step 407, the method 400 includes tracking a movement of the user in the video scene. That is, the machine-learning model may be used not only to identify static users, but also to identify movement of the users. As such, the present method 400 dynamically and in real time updates the inset window 102 to provide an unobscured view of the video scene. As described above, despite the alterations made to the inset window 102, there may still exist some overlap of the inset window 102 with users in the video scene. In these examples, the processor may prioritize which objects of interest are overlapped.
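Before turning to that prioritization, steps 401 and 402 can be sketched as follows; buffer_frac is an assumed parameter, since the description only says a buffer area is added around the landmark features.

```python
def head_bounding_box(landmarks, buffer_frac=0.5):
    """Steps 401-402: take a tight box around detected facial landmarks
    (e.g., eyes, nose, mouth as (x, y) points) and expand it by a buffer
    so the entire head is enclosed. buffer_frac is a hypothetical tuning
    value, not specified by the source."""
    xs = [p[0] for p in landmarks]
    ys = [p[1] for p in landmarks]
    left, right = min(xs), max(xs)
    top, bottom = min(ys), max(ys)
    dx = (right - left) * buffer_frac
    dy = (bottom - top) * buffer_frac
    return (left - dx, top - dy, right + dx, bottom + dy)
```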
To prioritize in this manner, the processor may identify a region within the video scene that is precluded from being a location to which the inset window 102 is moved. This region may be a location associated with a speaker in the video scene. Accordingly, at step 408, the method 400 includes preventing an active speaker from being blocked by the inset window. This may be performed in a variety of ways. For example, by analyzing an audio signature associated with the video scene or the video capture system, the processor may identify a source of the audio. When the source of the audio is determined to be a user, that user is designated as a speaker. As such, the processor may, while allowing a degree of overlap with other bounding boxes 104 when there is no option for non-overlap between the inset window 102 and the variety of objects of interest, prevent any overlap with the bounding box 104 associated with the speaker. Thus, when a degree of overlap is inevitable, the processor still ensures engagement of the remote participant by ensuring that the speaker, as the subject of attention, is unobscured. This speaker-priority placement is sketched below, following the discussion of FIG. 7.

FIG. 5 depicts a non-transitory machine-readable storage medium 506 for altering an inset window 102 of a GUI 100, according to an example. As used in the present specification, the term "non-transitory" does not encompass transitory propagating signals. To achieve its desired functionality, a computing device includes various hardware components. Specifically, a computing device includes a processor and a machine-readable storage medium 506. The machine-readable storage medium 506 is communicatively coupled to the processor. The machine-readable storage medium 506 includes a number of instructions 508, 510, 512, 514, 516 for performing a designated function. The machine-readable storage medium 506 causes the processor to execute the designated function of the instructions 508, 510, 512, 514, 516.

The machine-readable storage medium 506 can store data, programs, instructions, or any other machine-readable data that can be utilized to operate the computing device. The machine-readable storage medium 506 can store computer-readable instructions that the processor of the computing device can process or execute. The machine-readable storage medium 506 can be an electronic, magnetic, optical, or other physical storage device that contains or stores executable instructions. The machine-readable storage medium 506 may be, for example, Random Access Memory (RAM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), a storage device, an optical disc, etc. The machine-readable storage medium 506 may be a non-transitory machine-readable storage medium 506.

Object identification instructions 508, when executed by the processor, cause the processor to identify an object in a video scene, wherein the video scene is displayed in a GUI 100. Object coordinates instructions 510, when executed by the processor, cause the processor to identify coordinates of the object depicted in the video scene, wherein the coordinates are relative to the GUI 100. Inset window coordinates instructions 512, when executed by the processor, cause the processor to identify coordinates of an inset window 102 which is smaller than the GUI 100 and overlaps the video scene. Coordinate comparison instructions 514, when executed by the processor, cause the processor to compare the coordinates of the object with the coordinates of the inset window 102 to determine an overlap of the inset window 102 with the object.
Display alteration instructions 516, when executed by the processor, cause the processor to alter a display characteristic of the inset window 102 to avoid the overlap of the inset window 102 with the object, responsive to an identified overlap of the inset window and the object.

FIG. 6 depicts a non-transitory machine-readable storage medium 506 for altering an inset window of a GUI, according to an example. The machine-readable storage medium 506 includes a number of instructions 618, 620, 622, 512, 514, 516 for performing a designated function. The machine-readable storage medium 506 causes the processor to execute the designated function of the instructions 618, 620, 622, 512, 514, 516. The machine-readable storage medium 506 can store data, programs, instructions, or any other machine-readable data that can be utilized to operate the computing device. The machine-readable storage medium 506 can store computer-readable instructions that the processor of the computing device can process or execute. The machine-readable storage medium 506 can be an electronic, magnetic, optical, or other physical storage device that contains or stores executable instructions. The machine-readable storage medium 506 may be, for example, Random Access Memory (RAM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), a storage device, an optical disc, etc. The machine-readable storage medium 506 may be a non-transitory machine-readable storage medium 506.

User identification instructions 618, when executed by the processor, cause the processor to identify a user depicted in a video scene, wherein the video scene is displayed in a GUI 100. Bounding box generation instructions 620, when executed by the processor, cause the processor to generate a bounding box 104 around a head of the user. Bounding box coordinates instructions 622, when executed by the processor, cause the processor to identify coordinates of the bounding box 104 of the head of the user, wherein the coordinates are relative to the GUI 100. Inset window coordinates instructions 512, when executed by the processor, cause the processor to identify coordinates of an inset window 102 which is smaller than the GUI 100 and overlaps the video scene. Coordinate comparison instructions 514, when executed by the processor, cause the processor to compare the coordinates of the bounding box with the coordinates of the inset window 102 to determine an overlap of the inset window 102 with the object. Display alteration instructions 516, when executed by the processor, cause the processor to alter a display characteristic of the inset window 102 to avoid the overlap of the inset window 102 with the object, responsive to an identified overlap of the inset window and the object.

FIG. 7 depicts a computing device 724 for altering an inset window of a GUI, according to an example. The computing device 724 may be a desktop computer, laptop computer, all-in-one device, tablet, or gaming system, to name a few. To execute its intended functionality, the computing device 724 includes various hardware components, which may include a processor 726 and a non-transitory machine-readable storage medium 506. The processor 726 may include the hardware architecture to retrieve executable code from the non-transitory machine-readable storage medium 506 and execute the executable code.
As specific examples, the computing device 724 as described herein may include a computer-readable storage medium, a computer-readable storage medium and a processor, an application-specific integrated circuit (ASIC), a semiconductor-based microprocessor, a central processing unit (CPU), a field-programmable gate array (FPGA), and/or another hardware device. The non-transitory machine-readable storage medium 506 stores computer-usable program code for use by or in connection with an instruction execution system, apparatus, or device. The non-transitory machine-readable storage medium 506 may include many types of memory, including volatile and non-volatile memory. For example, the memory may include Random Access Memory (RAM), Read Only Memory (ROM), optical memory disks, and magnetic disks, among others. The executable code may, when executed by the processor 726, cause the processor 726 to implement the functionality described herein.

As described above, the processor 726 executes the object identification instructions 508 to identify an object in a video scene, wherein the video scene is displayed in a GUI 100. The processor 726 executes the object coordinates instructions 510 to identify coordinates of the object depicted in the video scene, wherein the coordinates are relative to the GUI 100. The processor 726 executes the inset window coordinates instructions 512 to identify coordinates of an inset window 102 which is smaller than the GUI 100 and overlaps the video scene. The processor 726 executes the coordinate comparison instructions 514 to compare the coordinates of the object with the coordinates of the inset window 102 to determine an overlap of the inset window 102 with the object. The processor 726 executes the display alteration instructions 516 to alter a display characteristic of the inset window 102 to avoid the overlap of the inset window 102 with the object, responsive to an identified overlap of the inset window and the object.
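Returning to step 408 of FIG. 4, the speaker-priority placement might look like the following sketch, reusing overlaps() and least_overlap_region() from the earlier sketches; the fallback when no candidate avoids the speaker is an assumption, since the source does not cover that case.

```python
def place_inset_with_speaker(bounding_boxes, speaker_box, candidates):
    """Step 408 sketch: prefer a fully non-overlapping candidate; failing
    that, allow overlap with non-speaker boxes but avoid the active
    speaker's bounding box."""
    clear = [w for w in candidates
             if not any(overlaps(w, b) for b in bounding_boxes)]
    if clear:
        return clear[0]
    avoiding_speaker = [w for w in candidates
                        if not overlaps(w, speaker_box)]
    others = [b for b in bounding_boxes if b != speaker_box]
    if avoiding_speaker:
        return least_overlap_region(others, avoiding_speaker)
    # Assumed fallback: no candidate avoids the speaker, so minimize
    # total overlap across all boxes.
    return least_overlap_region(bounding_boxes, candidates)
```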
11862127 | DETAILED DESCRIPTION

Hereinafter, implementations of a refrigerator according to the present disclosure will be described in detail with reference to the accompanying drawings.

Referring to FIGS. 1 and 2, a refrigerator 10 according to one implementation includes a cabinet 11 having a storage compartment formed therein and a door 12 connected to a front surface of the cabinet 11 to selectively open or close the storage compartment. In more detail, the storage compartment may include one or both of a freezing compartment and a refrigerating compartment, and the door 12 may include a transparency-adjustable door glass 122 and a door frame 121 covering an outer rim of the door glass 122. The transparency of the door glass 122 may be adjusted between, for example, a completely opaque state and a completely transparent state, so that food stored in the storage compartment may be visually checked even when the door 12 is closed. Further, information on food of interest may be displayed on the door glass 122. The door glass 122 may be made from glass or a glass-like material. In some cases, the door glass 122 may include regions that are fixed to be transparent and/or fixed to be opaque. The door glass 122 may be provided with a transparent display panel for displaying texts, images, or videos.

Furthermore, a plurality of display areas A may be arranged on a front surface of the door glass 122. In detail, each display area A may correspond to the area of a front surface part of a storage area divided by a shelf in the storage compartment. In other words, a vertical height of any one of the display areas A may correspond to a distance between shelves that are vertically adjacent to each other in the storage compartment, and a horizontal width of the one of the display areas A may correspond to a width of the door glass 122 excluding the door frame 121. However, without being limited to this manner of division, the one of the display areas A may be further divided into a plurality of smaller display areas. In some cases, the display areas A may be enlarged to correspond to multiple storage sections.

In some cases, the display area A may be managed by a coordinate system. That is, four vertices of the display area A may be defined in terms of X and Y coordinates, and, when any point within the display area A is touched by a user, coordinates of the touched point may be recognized by a control unit. Furthermore, the control unit may render the entirety of the display area A including the touched point transparent, or may allow a display screen to be displayed on the display area A. As described above, a display area may be formed on the door glass 122 for each storage section in the storage compartment, and each display area may be individually controlled. Therefore, information on food stored in a specific section of the storage compartment may be displayed on the display area positioned at the front of the specific section. As a result, a user may recognize various information on food stored in the refrigerator while viewing the food, without having to move his/her eyes too much vertically or horizontally. When the user touches a certain point on the door glass 122 with a finger, as illustrated in FIG. 1, a food information display image 20 presenting information on food stored in the storage space corresponding to the touch point may be displayed on the display area A corresponding to the touch point, as illustrated in FIG. 2.
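Because the display areas A are managed by a coordinate system, the control unit's hit test can be sketched as below; the area names and pixel rectangles are hypothetical, and an actual control unit would read them from its stored configuration.

```python
# Hypothetical display areas: (left, top, right, bottom) in door-glass
# pixel coordinates, one rectangle per storage section behind the glass.
DISPLAY_AREAS = {
    "top_shelf":    (0, 0, 600, 250),
    "middle_shelf": (0, 250, 600, 500),
    "bottom_shelf": (0, 500, 600, 750),
}

def display_area_for_touch(x, y):
    """Return the display area whose four vertices bound the touched
    point, i.e., the area the control unit should act on."""
    for area, (x0, y0, x1, y1) in DISPLAY_AREAS.items():
        if x0 <= x < x1 and y0 <= y < y1:
            return area
    return None
```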
As one example, if the user views a section of the storage compartment in which a fruit is stored through the transparent door glass 122, and then touches the part of the door glass 122 corresponding to the area in which the fruit is stored, information on the fruit is displayed on the display area A. The information on the fruit may include, as one example, the number of days left until an expiration date. In the case where food items of different types, or stored at different time points, are stored in a specific section, if the display area A corresponding to the specific section is touched, the food information display images 20 for all of the food items stored in the specific section may be displayed simultaneously.

Referring now to FIG. 3, the food information display image may be displayed so as to notify a name of a specific food and the number of days left until an expiration date of the food. Referring also to FIG. 4, the food information display image may be displayed so as to notify the name of the specific food and the number of days that have elapsed from a storage date of the food. Furthermore, in the case of a food that has passed an expiration date, the food information display image may be displayed so as to notify the number of days that have elapsed from the expiration date.

FIG. 5 illustrates an example method for controlling the refrigerator to input and check food information. Referring to FIG. 5, the user touches the display area A on the door glass 122 to check food information or to input storage information on a stored food. Here, it is assumed that the door glass 122 is in a transparent state and the display area A is in an activated state. In the activated state of the display area A, a touch motion of the user may be recognized and the food information may be displayed. The control unit of the refrigerator 10 can determine whether the touch of the user is for inputting the food information or for checking the food information according to the type of the touch (S12, S13). It is preferable that a touch for inputting the food information be different from a touch for checking the food information. For example, if the user touches a certain area of a front surface part of the door glass 122 for at least two seconds, i.e., maintains a touch on the certain area of the front surface part of the door glass 122, the control unit may recognize the touch as an instruction for inputting the food information. If the user performs a touch-on and a touch-off within one second, the control unit may recognize the touch as an instruction for checking the food information. There may be various other methods for setting a touch for inputting the food information and a touch for checking the food information differently, so as to differentiate therebetween.

In some cases, if it is determined that an instruction for checking the food information is input because the user touches a front surface of the door glass 122 for less than one second, a storage space of the refrigerator corresponding to the display area A including the point touched by the user, i.e., the storage compartment, is matched (S14). Here, the matching of the storage space may be construed as performing, by the control unit, an algorithm for detecting a storage space corresponding to a touch area. A memory of the control unit may store, in the form of a look-up table, the storage sections respectively corresponding to the plurality of display areas A on the door glass 122.
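A minimal sketch of the touch-type discrimination of steps S12 and S13, combined with the look-up-table matching of step S14, might look like the following; the two-second and one-second thresholds come from the example above, while the table contents and the handling of intermediate durations are assumptions.

```python
# Hypothetical look-up table (S14): display areas mapped to storage sections.
AREA_TO_SECTION = {"area_1": "top_shelf", "area_2": "middle_shelf"}

def classify_touch(touch_on_s, touch_off_s):
    """S12/S13: a touch held for at least two seconds is an input
    instruction; a touch-on and touch-off within one second is a check
    instruction. Durations between one and two seconds are ignored here
    (an assumption; the source gives only the two example thresholds)."""
    duration = touch_off_s - touch_on_s
    if duration >= 2.0:
        return "INPUT_FOOD_INFO"
    if duration < 1.0:
        return "CHECK_FOOD_INFO"
    return None

def match_storage_section(area_id):
    """S14: detect the storage section corresponding to the touched area."""
    return AREA_TO_SECTION.get(area_id)
```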
In this arrangement, a plurality of display areas respectively corresponding to storage sections of the refrigerating compartment and storage sections of the freezing compartment are defined on each of the door glass 122 of the refrigerating compartment and the door glass 122 of the freezing compartment, and the display areas are programmed to be operated individually. Therefore, the user can first check, visually through the door glass 122, what section of what storage compartment stores a food of interest. Then, the user can touch the display area of the door glass 122 located directly at the front of the storage section in which the food of interest is stored. Here, each display area A has a size substantially similar to that of a front part of the storage section. Therefore, the user may touch any one point considered to correspond to an inner area of the storage section in which the food of interest is stored. If coordinates of the touch point fall within the coordinates that define the size of the display area A, an information image for the food of interest is displayed on the display area A corresponding to the storage section in which the food of interest is stored (S15).

Furthermore, it is determined whether the front surface of the door glass 122 is touched again by the user (S16), or it is determined whether a set time expires after the food information image is displayed (S17). If it is determined that a certain point within the display area A is touched by the user after the food information is displayed on the display area A, or if it is determined that the set time expires after the food information is displayed on the display area A, the food information image 20 is displayed-off.

If the user touches on the door glass 122 for a set time or longer and the control unit recognizes the touch as an instruction for inputting the food information, a display screen for inputting the food information is displayed on the display area A. Here, in order to input information on a stored food, the user may touch the display area A located at the front of the storage section in which the food is stored. In some cases, when the control unit receives the instruction for inputting the food information, the screen for inputting the food information is displayed on a selected display area A, i.e., the display area A including the touch point (S19). The screen for inputting the food information will be described in detail with reference to the drawings. When the screen for inputting the food information is displayed, the user inputs the information on the stored food through the screen (S20). Here, the information on the stored food may include at least one of a name, expiration date, price, weight, and number/quantity of the stored food. Furthermore, in addition to the foregoing pieces of information, other various types of information may be input. When the input of the food information is completed, the user can touch a storage completion button to complete the storage process (S21). Here, the storage completion button may be provided at one side of the food information input screen or the display area A in the form of an icon, so that a completion instruction is input when the user touches the button. When the input of the food information is completed, the screen for inputting the food information is displayed-off (S22), so that the process of inputting the food information is completed.

As described above, the door glass 122 may be a transparency-adjustable glass.
For example, the door glass 122 may be a smart glass whose transparency is adjusted by controlling an intensity of current that flows thereto, more specifically, a smart glass made of an electrochromic material. Therefore, the door glass 122 may switch from an opaque state to a semi-transparent state or a completely transparent state. When the door glass 122 is maintained in the completely transparent state, not only the user but also other persons may see all the foods stored in the refrigerator. In this state, there may be situations in which the foods should not be visible from the outside, for example, because the user has visitors. Furthermore, when the door glass 122 is maintained in the completely transparent state, radiant heat due to the sun or other external light may increase the temperature in the refrigerator. Therefore, it may be necessary to allow the user to selectively render the door glass 122 transparent.

FIG. 6 illustrates an example method for controlling the refrigerator to input or check the food information through a transparency-adjustable door glass. Referring to the implementation illustrated in FIG. 6, it is assumed that the door 12 is closed and the door glass 122 is maintained in the opaque state (S40). In other words, it is assumed that the door glass 122 is maintained in the opaque state while no event occurs on the door glass 122. In more detail, when the door glass 122 is opaque, an operation of switching the door glass 122 into the transparent state may first be performed. To this end, the control unit can determine whether the front surface of the door glass 122 is touched by the user (S41). In some cases, where the control unit detects that the door 12 is closed after being opened, the door glass 122 may be switched into the transparent state even though the door glass 122 is not touched by the user (S47). That is, since the door must be opened and closed to store a new food, the control unit may switch the entirety of the door glass 122 into the transparent state when detecting the opening and closing of the door, so that the food information can be input (S48). Furthermore, since the control unit may be unable to recognize the storage section in which a food has been stored by the user, the entirety of the door glass 122 may be switched into the transparent state. In this case, after closing the door 12, the user may input information on the food stored in the storage section by touching the display area A corresponding to the storage section while viewing the storage section through the transparent door glass.

When the control unit recognizes that a specific area of the door glass 122 is touched, the door glass 122 may be switched into the transparent state. Here, according to the touch type, a part or the entirety of the door glass 122 may be switched into the transparent state. For example, according to the number of times the door glass 122 is touched, the area to be switched into the transparent state, or the size of the area to be switched into the transparent state, may be determined differently. In detail, if the user touches the specific area of the door glass 122 one time, only the display area A including the touched point may be switched into the transparent state, and, if the user consecutively touches the specific area of the door glass 122 two times, the entirety of the door glass 122 may be switched into the transparent state (S42).
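The tap-count logic of step S42 might be sketched as follows; this covers only the region-selection decision, and driving the electrochromic layer's current is left abstract, since the source does not specify that interface.

```python
def regions_to_make_transparent(tap_count, touched_area, all_areas):
    """S42: one touch switches only the touched display area into the
    transparent state; two consecutive touches switch the entire door
    glass. With no touch, the glass stays opaque."""
    if tap_count >= 2:
        return list(all_areas)   # entire door glass
    if tap_count == 1:
        return [touched_area]    # only the touched display area
    return []
```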
The motion of consecutively touching the door glass 122 two times may include a motion of consecutively pushing or knocking on the door glass 122 two times. When a specific display area of the door glass 122 or the entirety of the door glass 122 is switched into the transparent state, the user can touch the specific display area A again to input or check information on a food of interest while viewing the food (S43). Thereafter, the food information may be input or checked according to the control method described above with reference to FIG. 5 (S44). When the input or checking of the food information is completed (S45), the door glass 122 may be switched into the opaque state. The door glass 122 may be switched into the opaque state immediately after the food information input screen is displayed-off (S22) or the food information image is displayed-off (S18), as described above with reference to FIG. 5.

FIG. 7 illustrates an example food information input screen. Referring to FIG. 7, when the door glass 122 is touched to input the food information, a food information input screen 100 may be presented in the form of a transparent display. In more detail, a food name input window 101, a storage time display window 102, an expiration date input window 103, and a keyboard window 104 may be arranged on the food information input screen 100. Furthermore, windows for inputting various information, such as the number, weight, and price of the food, may be provided. Accordingly, the user may input a food name through a keyboard displayed on the keyboard window 104. Furthermore, the user may set an expiration date, and may touch one of an elapsed-days display icon or a remaining-days display icon to select an expiration date display method. The storage time display window 102 may be programmed so that the user inputs a date and time, or so that the time maintained by the control unit and displayed on the door glass 122 is automatically input and stored. When the input of information is completed, an input completion window 105 may be touched so that the food information input screen 100 is displayed-off. Furthermore, the door glass 122 may be switched into the opaque state at the same time as the food information input screen 100 is displayed-off, or after a set time expires.

As described above, according to the refrigerator of one implementation, the user may immediately check information on a stored food while viewing the stored food through a transparency-adjustable door glass on which a transparent display is presented. Furthermore, since it is not necessary to open the door to check the food information, the power consumption of the refrigerator may be reduced.

Described below is a method for controlling the refrigerator that includes the transparent door glass to allow the user to check information on foods required for a specific recipe and to purchase required foods through online shopping.

FIG. 8 illustrates an example method for controlling the refrigerator to check retained food materials for a recipe, according to an implementation. Referring to FIG. 8, it may initially be assumed that the door 12 is closed and the door glass 122 is maintained in the opaque state (S51). In this state, if the user touches the door glass 122 (S52), selectable menus are displayed-on while the door glass 122 is in the opaque state (S53). Alternatively, the menus may be displayed-on after the door glass 122 is switched into the transparent state.
In further detail, the selectable menus may be provided as touchable icons, and the user may touch and select a recipe menu among the selectable menus (S54). When the recipe menu is selected, a recipe list stored in the memory of the control unit is displayed-on (S55). In this state, the user may touch and select a desired recipe item (S56), and information on the selected recipe is displayed-on (S57). In detail, the information on the recipe may be output to the front surface of the door glass 122 in the form of a text, an image, or a video. Furthermore, the information on the recipe may include a method of cooking using the recipe and required food material information.

An item for selecting an option for viewing retained materials is displayed on one side of a display area in the form of an icon, and the user may touch the retained material viewing icon to select the option for viewing retained materials (S58). When the option for viewing retained materials is selected, the foods required for the recipe, among the foods stored in the refrigerator, are retrieved. Furthermore, the display area of the door glass 122 which corresponds to the storage section in which a currently-stored retained material is located is switched into the transparent state, and, at the same time, an information image for the retained material food is displayed-on (S59). In the case where the door glass 122 is already in the transparent state, the information image for the retained material food may be displayed on a point of the door glass 122 which corresponds to the location where the food is stored. When the information on the currently retained foods is displayed on the display area of the door glass 122, the user may check which foods need to be purchased.

Furthermore, a material shopping selection menu may be displayed on the display area together with the retained food information, so that the user may immediately purchase a required food through online shopping (S60). If it is determined that the user selects an option for shopping for a material by touching the material shopping menu displayed on the screen (S61), the control unit may immediately establish a connection to the Internet, via a wired or a wireless connection, and a shopping mall screen may be displayed-on (S62). Here, a homepage of a specific shopping mall may be displayed immediately after the connection to the Internet is established, or, in some cases, a list of homepages of accessible shopping malls may be displayed. In further detail, when shopping is completed after logging on to an Internet shopping mall (S63), the door glass 122 may be automatically switched into the opaque state (S64). However, in some cases, the door glass 122 may be maintained in the transparent state and then switched into the opaque state if the user makes a touch or a set time expires. Furthermore, the user may touch a shopping completion icon displayed on the display area so that a shopping completion signal is input to the control unit.

FIGS. 9 to 12 illustrate example screens displayed on the display area when the method described above with reference to FIG. 8 is performed. Referring to FIG. 9, when the user touches the door glass 122 while the door glass 122 is in the opaque state, selectable menus may be displayed on the door glass 122. In detail, a storage compartment temperature display part 123 may be disposed on one side of the door glass 122, more specifically, an upper corner of the door glass 122, to display a current temperature of the storage compartment.
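Returning to the retained-material check of steps S58 through S60, one way to partition a recipe's required materials into retained and to-be-purchased sets is sketched below; the inventory mapping of food names to storage sections is an assumed data structure, not one the source defines.

```python
def split_recipe_materials(recipe_materials, inventory):
    """S58-S60 sketch: divide a recipe's required materials into those
    already stored in the refrigerator (to be highlighted on the door
    glass by storage section) and those that must be purchased.
    'inventory' maps food names to storage sections."""
    retained = {m: inventory[m] for m in recipe_materials if m in inventory}
    to_buy = [m for m in recipe_materials if m not in inventory]
    return retained, to_buy

# Hypothetical usage:
# retained, to_buy = split_recipe_materials(
#     ["egg", "milk", "flour"], {"egg": "middle_shelf", "milk": "door_bin"})
# -> retained = {"egg": "middle_shelf", "milk": "door_bin"}, to_buy = ["flour"]
```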
The storage compartment temperature information displayed on the temperature display part 123 may not be shown when a screen is switched, or may be maintained in a display-on state. Furthermore, a menu display part 124 may be disposed on another side of the door glass 122, and a plurality of recipe selection images 124a may be displayed on the menu display part 124 in the form of touch icons. The menu display part 124 may be switched into a transparent state or a semi-transparent state to improve the visibility of the recipe selection images 124a. In this state, the user may touch a recipe selection icon so that a screen of a next step may be displayed.

Referring to FIG. 10, when the user touches a recipe selection image among the selection images 124a displayed on the menu display part 124, a plurality of recipes stored in the memory of the control unit may be displayed on the front surface of the door glass 122 in the form of touch icons. The size or location of the display area on which the recipes are displayed is not particularly limited. That is, recipe lists may be displayed over the door glass 122, as illustrated in FIG. 10, or may be displayed on a specific area. Each recipe selection menu may be displayed in the form of an image or a video showing a recipe name and a dish or a result obtained from the recipe. In this state, the user may search for and touch a desired recipe, and information on the recipe may be displayed.

Referring to FIG. 11, when the user touches and selects a desired recipe, information on the recipe is displayed in the form of a text, an image, or a video. Furthermore, a retained material viewing selection image 124b may be displayed on one side of the door glass 122 in the form of a touch icon. In detail, information such as a taste, a nutritional effect, required food materials, and/or a level of cooking difficulty of the recipe may be displayed on one side of the display area of the door glass 122 in the form of a text, an image, or a video.

Referring to FIG. 12, when the user selects the retained material viewing selection image 124b, information on the food materials stored in the refrigerator, among the food materials required for the selected recipe, is displayed on the door glass 122. In detail, the information on a food material may be displayed, in the form of a food information display image, on the display area of the door glass 122 which corresponds to the location of the storage section in which the corresponding food is stored, as described above with reference to FIG. 2. The displayed food information may include a name of the food material. Furthermore, an expiration date and a remaining amount of the food material may be displayed together with the name of the food material. In this state, a food material shopping selection menu may be displayed on one side of the front surface of the door glass 122 so as to help the user immediately access an Internet shopping mall or the like.

Although implementations have been described with reference to a number of illustrative implementations thereof, it should be understood that numerous other modifications and implementations can be devised by those skilled in the art that will fall within the spirit and scope of the principles of this disclosure. More particularly, various variations and modifications are possible in the component parts and/or arrangements of the subject combination arrangement within the scope of the disclosure, the drawings, and the appended claims.
In addition to variations and modifications in the component parts and/or arrangements, alternative uses will also be apparent to those skilled in the art.
11862128 | DESCRIPTION OF EXAMPLE EMBODIMENTS

An AR/VR system may have limited available power (e.g., when powered by a battery) and limited computational resources (e.g., computational units, memory, data transmission bandwidth, etc.). However, graphic rendering processes for full-resolution display content can be demanding on both power consumption and computational resources, and therefore can negatively impact the performance of the AR/VR system. Particular embodiments may use a foveated rendering process to reduce the power consumption and computational resource usage related to the display content rendering processes. For example, the system may render display content with full resolution (for all color channels of red, green, and blue) in a foveal region corresponding to the user's gazing point, and render display content with reduced resolutions (for one or more color channels) in the display regions beyond the user's foveal region. By using the foveated rendering process, the system may cast fewer rays for determining tile/surface pairs for the display content with reduced resolutions, and therefore use fewer computational resources for the rendering processes. The system may process a larger image area (e.g., a larger number of pixels or pixel tiles) in a given clock cycle using the same amount of computational resources as for processing a full-resolution image, because of the reduced memory reading and data processing, and therefore improve the efficiency of the system performance. Furthermore, the system may need less transmission bandwidth for sending the pixel values to the display because of the reduced resolution in at least a portion of the foveated image.

FIG. 1A illustrates an example artificial reality system 100A. In particular embodiments, the artificial reality system 100 may comprise a headset 104, a controller 106, and a computing system 108, etc. A user 102 may wear the headset 104, which could display visual artificial reality content to the user 102. The headset 104 may include an audio device that could provide audio artificial reality content to the user 102. The headset 104 may include one or more cameras which can capture images and videos of environments. The headset 104 may include an eye tracking system to determine the vergence distance of the user 102. The headset 104 may be referred to as a head-mounted display (HMD). The controller 106 may comprise a trackpad and one or more buttons. The controller 106 may receive inputs from the user 102 and relay the inputs to the computing system 108. The controller 106 may also provide haptic feedback to the user 102. The computing system 108 may be connected to the headset 104 and the controller 106 through cables or wireless connections. The computing system 108 may control the headset 104 and the controller 106 to provide the artificial reality content to, and receive inputs from, the user 102. The computing system 108 may be a standalone host computer system, an on-board computer system integrated with the headset 104, a mobile device, or any other hardware platform capable of providing artificial reality content to and receiving inputs from the user 102.

FIG. 1B illustrates an example augmented reality system 100B. The augmented reality system 100B may include a head-mounted display (HMD) 110 (e.g., glasses) comprising a frame 112, one or more displays 114, and a computing system 120.
The displays 114 may be transparent or translucent, allowing a user wearing the HMD 110 to look through the displays 114 to see the real world while displaying visual artificial reality content to the user at the same time. The HMD 110 may include an audio device that may provide audio artificial reality content to users. The HMD 110 may include one or more cameras which can capture images and videos of environments. The HMD 110 may include an eye tracking system to track the vergence movement of the user wearing the HMD 110. The augmented reality system 100B may further include a controller comprising a trackpad and one or more buttons. The controller may receive inputs from users and relay the inputs to the computing system 120. The controller may also provide haptic feedback to users. The computing system 120 may be connected to the HMD 110 and the controller through cables or wireless connections. The computing system 120 may control the HMD 110 and the controller to provide the augmented reality content to, and receive inputs from, users. The computing system 120 may be a standalone host computer system, an on-board computer system integrated with the HMD 110, a mobile device, or any other hardware platform capable of providing artificial reality content to and receiving inputs from users.

FIG. 2A illustrates an example architecture 200 of a display engine 210. In particular embodiments, the processes and methods as described in this disclosure may be embodied or implemented within a display engine 210. The display engine 210 may include, for example, but is not limited to, a texture memory 212, a transform block 213, a pixel block 214, a display block 215, an input data bus 211, an output data bus 216, etc. In particular embodiments, the display engine 210 may include one or more graphic pipelines for generating images to be rendered on the display. For example, the display engine 210 may include two graphic pipelines for the user's left and right eyes. One of the graphic pipelines may include, or may be implemented on, the texture memory 212, the transform block 213, the pixel block 214, the display block 215, etc. The display engine 210 may include another set of transform block, pixel block, and display block for the other graphic pipeline. The graphic pipeline(s) may be controlled by a controller or control block (not shown) of the display engine 210. In particular embodiments, the texture memory 212 may be included within the control block or may be a memory unit external to the control block but local to the display engine 210. One or more of the components of the display engine 210 may be configured to communicate via a high-speed bus, shared memory, or any other suitable methods. This communication may include transmission of data as well as control signals, interrupts, and/or other instructions. For example, the texture memory 212 may be configured to receive image data through the input data bus 211, and the display block 215 may send the pixel values to the display system through the output data bus 216.

In particular embodiments, the display engine 210 may include a controller block (not shown). The control block may receive data and control packages, such as position data and surface information, from controllers external to the display engine 210 through one or more data buses. For example, the control block may receive input stream data from a body-wearable computing system. The input data stream may include a series of mainframe images generated at a mainframe rate of 30-90 Hz.
The input stream data including the mainframe images may be converted to the required format and stored into the texture memory 212. In particular embodiments, the control block may receive input from the body-wearable computing system and initialize the graphic pipelines in the display engine to prepare and finalize the image data for rendering on the display. The data and control packets may include information related to, for example, one or more surfaces including texel data, position data, and additional rendering instructions. The control block may distribute data as needed to one or more other blocks of the display engine 210. The control block may initiate the graphic pipelines for processing one or more frames to be displayed. In particular embodiments, the graphic pipelines for the two eye display systems may each include a control block or share the same control block.

In particular embodiments, the transform block 213 may determine initial visibility information for surfaces to be displayed in the artificial reality scene. In general, the transform block 213 may cast rays from pixel locations on the screen and produce filter commands (e.g., filtering based on bilinear or other types of interpolation techniques) to send to the pixel block 214. The transform block 213 may perform ray casting from the current viewpoint of the user (e.g., determined using the headset's inertial measurement units, eye tracking sensors, and/or any suitable tracking/localization algorithms, such as simultaneous localization and mapping (SLAM)) into the artificial scene where surfaces are positioned, and may produce tile/surface pairs 217 to send to the pixel block 214.

In particular embodiments, the transform block 213 may include a four-stage pipeline, as follows. A ray caster may issue ray bundles corresponding to arrays of one or more aligned pixels, referred to as tiles (e.g., each tile may include 16×16 aligned pixels). The ray bundles may be warped, before entering the artificial reality scene, according to one or more distortion meshes. The distortion meshes may be configured to correct geometric distortion effects stemming from, at least, the eye display systems of the headset system. The transform block 213 may determine whether each ray bundle intersects with surfaces in the scene by comparing a bounding box of each tile to bounding boxes for the surfaces. If a ray bundle does not intersect with an object, it may be discarded. After the tile-surface intersections are detected, the corresponding tile/surface pairs may be passed to the pixel block 214.

In particular embodiments, the pixel block 214 may determine color values or grayscale values for the pixels based on the tile-surface pairs. The color values for each pixel may be sampled from the texel data of surfaces received and stored in the texture memory 212. The pixel block 214 may receive tile-surface pairs from the transform block 213 and may schedule bilinear filtering using one or more filter blocks. For each tile-surface pair, the pixel block 214 may sample color information for the pixels within the tile using color values corresponding to where the projected tile intersects the surface. The pixel block 214 may determine pixel values based on the retrieved texels (e.g., using bilinear interpolation). In particular embodiments, the pixel block 214 may process the red, green, and blue color components separately for each pixel. In particular embodiments, the display may include two pixel blocks for the two eye display systems.
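A minimal sketch of the transform block's tile/surface visibility test follows, assuming tiles and surfaces carry precomputed 2D bounding boxes as (min_x, min_y, max_x, max_y) tuples in a shared coordinate space; this representation is an assumption, as the source describes the comparison only at a block level.

```python
def tile_surface_pairs(tiles, surfaces):
    """For each 16x16-pixel tile, keep the surfaces whose bounding box
    intersects the tile's bounding box; ray bundles (tiles) that intersect
    nothing produce no pairs and are effectively discarded, mirroring the
    transform block's visibility test."""
    pairs = []
    for tile in tiles:
        for surface in surfaces:
            t, s = tile["bbox"], surface["bbox"]
            if t[0] < s[2] and s[0] < t[2] and t[1] < s[3] and s[1] < t[3]:
                pairs.append((tile, surface))
    return pairs
```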
These two pixel blocks of the two eye display systems may work independently and in parallel with each other. The pixel block 214 may then output its color determinations to the display block 215. In particular embodiments, the pixel block 214 may composite two or more surfaces into one surface when the two or more surfaces have overlapping areas. A composited surface may need fewer computational resources (e.g., computational units, memory, power, etc.) for the resampling process.

In particular embodiments, the display block 215 may receive pixel color values from the pixel block 214, convert the format of the data to be more suitable for the scanline output of the display, apply one or more brightness corrections to the pixel color values, and prepare the pixel color values for output to the display. In particular embodiments, the display block 215 may include a row buffer and may process and store the pixel data received from the pixel block 214. The pixel data may be organized in quads (e.g., 2×2 pixels per quad) and tiles (e.g., 16×16 pixels per tile). The display block 215 may convert the tile-order pixel color values generated by the pixel block 214 into scanline- or row-order data, which may be required by the physical displays. The brightness corrections may include any required brightness correction, gamma mapping, and dithering. The display block 215 may output the corrected pixel color values directly to the driver of the physical display (e.g., pupil display) or may output the pixel values to a block external to the display engine 210 in a variety of formats. For example, the eye display systems of the headset system may include additional hardware or software to further customize backend color processing, to support a wider interface to the display, or to optimize display speed or fidelity.

In particular embodiments, graphics applications (e.g., games, maps, content-providing apps, etc.) may build a scene graph, which is used together with a given view position and point in time to generate primitives to render on a GPU or display engine. The scene graph may define the logical and/or spatial relationship between objects in the scene. In particular embodiments, the display engine 210 may also generate and store a scene graph that is a simplified form of the full application scene graph. The simplified scene graph may be used to specify the logical and/or spatial relationships between surfaces (e.g., the primitives rendered by the display engine 210, such as quadrilaterals or contours, defined in 3D space, that have corresponding textures generated based on the mainframe rendered by the application). Storing a scene graph allows the display engine 210 to render the scene to multiple display frames and to adjust each element in the scene graph for the current viewpoint (e.g., head position), the current object positions (e.g., they could be moving relative to each other), and other factors that change per display frame. In addition, based on the scene graph, the display engine 210 may also adjust for the geometric and color distortion introduced by the display subsystem and then composite the objects together to generate a frame. Storing a scene graph allows the display engine 210 to approximate the result of doing a full render at the desired high frame rate, while actually running the GPU or display engine 210 at a significantly lower rate.

FIG. 2B illustrates an example graphic pipeline 200B of the display engine 210 for generating display image data.
In particular embodiments, the graphic pipeline 200B may include a visibility step 272, where the display engine 210 may determine the visibility of one or more surfaces received from the body-wearable computing system. The visibility step 272 may be performed by the transform block (e.g., 213 in FIG. 2A) of the display engine 210. The display engine 210 may receive (e.g., by a control block or a controller) input data 261 from the body-wearable computing system. The input data 261 may include one or more surfaces, texel data, position data, RGB data, and rendering instructions from the body-wearable computing system. The input data 261 may include mainframe images at 30-90 frames per second (FPS). The mainframe images may have a color depth of, for example, 24 bits per pixel. The display engine 210 may process and save the received input data 261 in the texel memory 212. The received data may be passed to the transform block 213, which may determine the visibility information for surfaces to be displayed. The transform block 213 may cast rays for pixel locations on the screen and produce filter commands (e.g., filtering based on bilinear or other types of interpolation techniques) to send to the pixel block 214. The transform block 213 may perform ray casting from the current viewpoint of the user (e.g., determined using the headset's inertial measurement units, eye trackers, and/or any suitable tracking/localization algorithms, such as simultaneous localization and mapping (SLAM)) into the artificial scene where surfaces are positioned, and produce surface-tile pairs to send to the pixel block 214.

In particular embodiments, the graphic pipeline 200B may include a resampling step 273, where the display engine 210 may determine the color values from the tile-surface pairs to produce pixel color values. The resampling step 273 may be performed by the pixel block (e.g., 214 in FIG. 2A) of the display engine 210. The pixel block 214 may receive tile-surface pairs from the transform block 213 and may schedule bilinear filtering. For each tile-surface pair, the pixel block 214 may sample color information for the pixels within the tile using color values corresponding to where the projected tile intersects the surface. The pixel block 214 may determine pixel values based on the retrieved texels (e.g., using bilinear interpolation) and output the determined pixel values to the respective display block 215.

In particular embodiments, the graphic pipeline 200B may include a blend step 274, a correction step 275, a serialization step 276, etc. In particular embodiments, the blend, correction, and serialization steps 274, 275, and 276 may be performed by the display block (e.g., 215 in FIG. 2A) of the display engine 210. The display engine 210 may blend the display content for display content rendering, apply one or more brightness corrections to the pixel color values, serialize the pixel values for the scanline output of the physical display, and generate the display data 279 suitable for the μLED displays of the projectors. The display engine 210 may send the display data 279 to the μLED displays of the projectors. In particular embodiments, the system may include three μLED backplane units 280A, 280B, and 280C. Each μLED backplane unit of 280A, 280B, and 280C may include a de-serialization module 282, a PWM control and data loading module 284, and a μLED matrix 286. The display data 279 received from the display engine 210 may be de-serialized by the de-serialization module 282, loaded by the PWM control and data loading module 284, and displayed by the μLED matrix 286.
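The bilinear interpolation used in the resampling step 273 could be sketched as follows for a single channel; per the text, red, green, and blue would each be resampled separately. The row-major 2D list representation of the texel grid is an assumption made for illustration.

```python
def bilinear_sample(texels, u, v):
    """Bilinear interpolation of a 2D texel grid at continuous
    coordinates (u, v), where (u, v) is the point at which the projected
    tile intersects the surface. 'texels' is a row-major 2D list of
    single-channel values; coordinates are assumed non-negative and
    clamped at the grid's far edges."""
    x0, y0 = int(u), int(v)
    x1 = min(x0 + 1, len(texels[0]) - 1)
    y1 = min(y0 + 1, len(texels) - 1)
    fx, fy = u - x0, v - y0
    top = texels[y0][x0] * (1 - fx) + texels[y0][x1] * fx
    bottom = texels[y1][x0] * (1 - fx) + texels[y1][x1] * fx
    return top * (1 - fy) + bottom * fy
```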
In particular embodiments, the μLED display may run at 1-2 k subframes per second with 5 bits per pixel and may generate a data flow of 47 Gbps per color. The subframe images may be dithered (e.g., with spatial and/or temporal dithering) to represent a color depth or grayscale of 8 bits.

FIG. 2C illustrates an example scheme 200C for rendering display content using a master-subframe mechanism. In particular embodiments, the system may adopt a master-subframe rendering mechanism for rendering display content. The display engine of the system may load mainframe image data, including a series of mainframe images, from a controller external to the display engine (e.g., a central controller coordinating multiple display engines of the AR/VR system, or a body-wearable computing system, etc.). The mainframe images may be generated and loaded into the display engine at a master frame rate (e.g., 30-90 Hz). The display engine may use the graphic pipeline or localized transformative operations (e.g., 2D shifting, interpolation, compositing multiple surfaces into a single surface) to generate a series of subframe images at a subframe rate (e.g., 1-2 kHz) which could be higher than the master frame rate (e.g., 30-90 Hz). The display engine may render the subframe images to the physical display at the subframe rate. This master-subframe rendering mechanism may allow the display engine to render display content at a high subframe rate (e.g., 1-2 kHz), and therefore to be more responsive (e.g., have a shorter response time) to the user's head movement or eye movement.

As an example and not by way of limitation, the display engine may load the image data from the central control units (which are external to the display engine) of the wearable computing system into the texel memory and render display content to the physical display based on a master frame clock signal 220 and a subframe clock signal 230, as illustrated in FIG. 2C. The master frame clock signal 220 may include periodical time periods including the active time period 222 and the inactive time period 224. In particular embodiments, the active time period 222 of the master frame clock signal 220 may have a length in a range of 6 ms to 28 ms, and the inactive time period 224 may have a length of about 5 ms. Mainframe image data may be updated or loaded into the texture memory of the display engine during the inactive time periods 224 of the periodical master frame clock signal 220. After being loaded or updated into the display engine, the mainframe image data may be stored within the texture memory of the display engine. The display engine may use the graphic pipeline (or one or more localized transformative operations) to generate display data for the physical display based on the mainframe image data. The display data for the physical display may include a number of subframe images which may be generated and rendered at the subframe rate of 1-2 kHz based on the subframe clock signal 230. The subframe clock signal 230 may include periodical time periods including the active time periods 232, which correspond to the active time period 222 of the master frame clock signal 220, and the inactive time periods 234, which correspond to the inactive time period 224 of the master frame clock signal 220. The display content including the subframes 240 may be rendered to the physical display during the active time periods 232 at a subframe rate of 1-2 kHz (e.g., 185-270 ns per row update).
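A skeleton of the master-subframe loop, under stated assumptions: the two callables are hypothetical stand-ins for the pose-driven transform and the display output path, which the source describes only at a block level, and the default no-op values exist solely to keep the sketch self-contained.

```python
def render_master_frame(mainframe, active_ms, subframe_hz=2000,
                        transform_for_current_pose=lambda f: f,  # hypothetical
                        output_to_display=lambda f: None):       # hypothetical
    """During one master frame's active period, generate and output
    subframes at the subframe rate by applying localized transforms
    (2D shift, interpolation, compositing) to the stored mainframe."""
    n_subframes = int(active_ms / 1000 * subframe_hz)  # e.g., 28 ms -> 56 at 2 kHz
    for _ in range(n_subframes):
        output_to_display(transform_for_current_pose(mainframe))
```

At the example rates, each 30-90 Hz mainframe is thus replayed as on the order of tens of transformed 1-2 kHz subframes before the next mainframe is loaded during the inactive period.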
During the inactive time periods 234, the display engine may not render any subframes to the physical display but may instead perform other operations, for example, mechanically adjusting the varifocal lens or/and performing one or more localized transformative operations. For the master-subframe rendering mechanism, the display engine may use the master frame rate for interfacing with up-stream modules (e.g., central control units of a wearable computing system) to receive mainframe images and render the subframes at a higher subframe rate to the physical display. The display engine can replay multiple frames and perform transformations or operations (e.g., color correction) on the subframes to generate display rendering results with a higher brightness, longer persistence, or/and improved bit depth. In particular embodiments, the system may generate and render subframe images at a high frame rate (e.g., 1-2 kHz) to allow the display content (e.g., the scene at a particular view angle) to be very responsive to the user's head movement or eye movements. The system may use one or more eye tracking sensors or/and head movement tracking sensors to determine the eye position (e.g., gazing point) or/and head position of the user. Then, the system may generate and render the new subframes of the scene according to the up-to-date eye position or/and head position (e.g., based on a viewpoint, a view angle, or/and a gazing point of the user). The system may use the graphic pipeline including one or more processes (e.g., tile/surface determining process by the transform block, resampling process by the pixel block, blending, filtering, correction, and serialization processes by the display block, etc.) to generate the subframe images. Because of the high rendering frame rate (and therefore the short rendering period) of the subframes, the system may have accurate and up-to-date (e.g., real-time or semi-real-time) eye position information (e.g., gazing point) or/and head position information before generating the next subframe of the scene. In particular embodiments, the system may take advantage of this accurate and up-to-date eye position information or/and head position information to generate foveated subframe images for foveated rendering. The system may determine a number of display regions based on their relative positions and distances to the foveal region or gazing point of the user and generate foveated subframe images with variable resolutions in different image regions corresponding to different display regions. The foveated subframe images may have high resolution (e.g., full resolution) in one or more image regions corresponding to the user's foveal region or gazing point and may have gradually lower resolutions in image regions that are farther from the user's gazing point. FIG. 3A illustrates an example scheme 300A for determining display regions with different rendering resolutions for foveated rendering. In particular embodiments, the system may divide the full display area 310 into different display regions or areas based on the gazing point or eye position of the user. The system may generate subframe images with different resolutions in different image regions corresponding to the display regions and render the display content with different rendering resolutions in different display regions. As an example and not by way of limitation, the system may determine a first display region 312 based on the user's gazing point 311.
The first display region 312 may be a rectangular region centered at the gazing point 311 covering a portion (e.g., 10%, 20%, 25%, 30%, 50%, 60%, or any suitable percentage) of the full display area. The user's gazing point may be determined based on the eye position of the user as measured by one or more eye tracking sensors. The system may determine a second display region 313 excluding the first display region 312. In other words, the second display region 313 may cover a subset of pixels which may not have shared pixels with the subset of pixels covered by the first display region 312. The system may determine a third display region 314 excluding the first display region 312 and the second display region 313 (e.g., covering a subset of pixels which may not have shared pixels with the subsets of pixels covered by the first display region 312 and the second display region 313). The third display region 314 may cover the remaining areas of the display that are not covered by the first display region 312 and the second display region 313. It is notable that the shapes and sizes of the first, second, and third display regions as described here are for example purposes and the display regions are not limited thereto. For example, the display regions could be any suitable shapes (e.g., rectangular shapes, square shapes, round shapes, polygon shapes, customized shapes, irregular shapes, arbitrary shapes, etc.) with any suitable sizes (e.g., any percentage of the full display area). As an example and not by way of limitation, the first display region 312 may have a ¼ width and a ¼ height of the full display area 310. The second display region 313 may have a ½ width and a ½ height of the full display area 310. The third display region 314 may cover the remaining area of the full display area 310 beyond the second display region 313. As another example and not by way of limitation, the first display region 312 may have a ⅛ width and a ⅛ height of the full display area 310. The second display region 313 may have a ¼ width and a ¼ height of the full display area 310. The third display region 314 may cover the remaining area of the full display area 310 beyond the second display region 313. It is notable that the relative positions and sizes of the first, second, and third display regions are for example purposes and the display regions are not limited thereto. For example, in particular embodiments, the first display region 312 may be centered at the gazing point 311. However, in some other embodiments, the first display region 312 may not be centered at the gazing point 311. The gazing point 311 may be located at any suitable position (e.g., a center position, a non-center position, a position left of the center, a position right of the center, a position above the center, a position below the center, an arbitrary position, etc.) in the first display region 312. As another example, in particular embodiments, the second display region 313 may be centered at the first display region 312 or/and centered at the gazing point 311. However, in some other embodiments, the second display region 313 may not need to be centered at the first display region 312 or centered at the gazing point 311. The first display region 312 may be located at any suitable position in the second display region 313. The second display region 313 may be located at any suitable position in the third display region 314.
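Under the ¼-size and ½-size rectangle example above, classifying a pixel into one of the three display regions might look like the following sketch; the display dimensions and gaze coordinates are hypothetical.

```python
def region_for_pixel(x, y, gaze_x, gaze_y, disp_w, disp_h):
    def inside(frac):
        # Is (x, y) inside a gaze-centered rectangle of frac * display size?
        half_w, half_h = disp_w * frac / 2, disp_h * frac / 2
        return abs(x - gaze_x) <= half_w and abs(y - gaze_y) <= half_h
    if inside(0.25):      # first region: 1/4 width x 1/4 height at the gaze
        return 1
    if inside(0.5):       # second region: 1/2 width x 1/2 height, minus region 1
        return 2
    return 3              # third region: remainder of the display

# e.g., with a 2560x1792 display and gaze near the center:
print(region_for_pixel(1300, 900, 1280, 896, 2560, 1792))  # -> 1
print(region_for_pixel(100, 100, 1280, 896, 2560, 1792))   # -> 3
```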
In particular embodiments, the first display region 312 corresponding to the foveal region of the user may be determined based on the degree of uncertainty of the gazing point or gazing direction of the user. For example, the system may determine the gazing direction of the user with ±10 degrees of uncertainty and the foveal region of the user with ±5 degrees of uncertainty. The system may determine the first display region 312 to have a size corresponding to ±15 degrees of the user's view angles for the full resolution and use lower resolution in other display regions beyond the first display region. It is notable that the three display regions as described here are for example purposes and the display region division is not limited thereto. The system may divide the display into any number of regions in any suitable manner (e.g., regions divided by a grid pattern, co-centered regions, exclusive regions defined by overlapping shapes, etc.). For example, the system may divide the full display area into a number of display regions using a grid pattern and determine a rendering resolution for each of these display regions based on their relative positions or/and distances to the gazing point of the user (see the illustrative sketch following this passage). Each display region may cover a matrix of tiles (e.g., 16 tiles by 16 tiles) with each tile containing a matrix of pixels (e.g., each tile having 16 pixels by 16 pixels). The edge positions of the display regions may be constrained to some alignment (e.g., 16 pixels) to simplify the implementation. The system may generate a subframe image having different resolutions in different image regions corresponding to the display regions and render different portions of the image in different display regions of the display using corresponding rendering resolutions. In particular embodiments, the system may render display content with different rendering resolutions in different display regions. For the display region corresponding to the foveal region (where the user's gaze is focused), the system may compute and display pixel values with full resolution for the Red, Green, and Blue color channels. For the display regions outside the foveal region, the system may use lower resolutions for one or more color channels because the user's eyes have less acuity in those regions. The system may first determine two or more display regions of the display based on the gazing point of the user and determine a rendering resolution for each of these display regions. The rendering resolution of each display region may depend on the distance of that display region to the gazing point of the user. The system may use gradually lower rendering resolutions for display regions that are farther from the gazing point. For example, the system may use a full rendering resolution in a display region containing the gazing point of the user and use lower rendering resolutions in display regions that are farther from the user's gazing point. In particular embodiments, the system may use a graphic pipeline or one or more localized transformative operations to generate a series of foveated subframe images (e.g., at a 1-2 kHz subframe rate). Each of the foveated subframe images may have variable image resolutions across different image regions corresponding to the display regions of the display. The system may render the foveated subframe images using different rendering resolutions in different display regions of the display.
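The grid-pattern alternative mentioned above might be sketched as follows; the distance thresholds and the circularly symmetric fall-off are illustrative assumptions, not values from this disclosure.

```python
import math

TILE_PX = 16                                    # region edges aligned to 16 px

def resolution_for_tile(tile_x, tile_y, gaze_x, gaze_y):
    # Pick a rendering resolution from the tile center's distance to the gaze.
    cx = tile_x * TILE_PX + TILE_PX / 2
    cy = tile_y * TILE_PX + TILE_PX / 2
    d = math.hypot(cx - gaze_x, cy - gaze_y)
    if d < 200:
        return 1.0                              # full sampling resolution
    if d < 600:
        return 0.5                              # half sampling resolution
    return 0.25                                 # quarter sampling resolution

grid = [[resolution_for_tile(tx, ty, gaze_x=1280, gaze_y=896)
         for tx in range(2560 // TILE_PX)]
        for ty in range(1792 // TILE_PX)]
```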
As an example and not by way of limitation, the system may render display content of the foveated subframe image with a first resolution, a second resolution, and a third resolution in the first display region 312, the second display region 313, and the third display region 314, respectively. The first resolution may be the highest resolution (e.g., full resolution of the display) of the three rendering resolutions and the second and third resolutions may be reduced resolutions lower than the first resolution. In particular, the third resolution may be lower than the second resolution and the second resolution may be lower than the first resolution. By using the reduced rendering resolutions in one or more display regions, the system may reduce the amount of computation and power consumption related to the processes for generating and rendering the display content. FIG. 3B illustrates three example pixel arrays (e.g., 320, 330, 340) for the three color channels of Red, Green, and Blue. In particular embodiments, the system may have a display with three color channels (e.g., Red, Green, and Blue) and the system may sub-sample the pixels of one or more of the respective color channels. In other words, the system may allow the pixels of different color channels to have sampling resolutions that differ from those of other color channels. The system may use the resampling process of the graphic pipeline to determine the grayscale values of the Red, Green, and Blue color channels, respectively. The grayscale values of the Red, Green, and Blue color channels of a same pixel may be independently determined with respect to each other. In particular embodiments, the system may generate and render foveated images which have different sampling resolutions for the pixels of different color channels (e.g., Red, Green, Blue). Within a display region or image region, the system may allow the pixels of the Red, Green, and Blue color channels to have different sampling resolutions (e.g., at powers of two to each other) when processing different parts of the screen based on distance from the foveal region of the user. As an example and not by way of limitation, within an image region corresponding to a display region, the system may allow the pixels of the Green color channel to have a full sampling resolution and allow the pixels of the Red and Blue color channels to have a half sampling resolution. As another example, within an image region corresponding to a display region, the system may allow the pixels of the Green color channel to have a half sampling resolution and allow the pixels of the Red and Blue color channels to have a quarter sampling resolution. It is notable that, in this disclosure, the resolution of the images or display may be described by the number of pixels of the image or display or the number of pixels per unit area. The sampling resolution of a number of pixels (e.g., pixels in a 2D array of a color channel) may be described by a percentage of pixels whose grayscale values are independently computed, or by a ratio of the number of independently computed grayscale values to the total number of corresponding pixels whose grayscale values are determined based on the independently computed grayscale values. The rendering resolution may refer to the resolution that is used for rendering a portion of an image in a display region, and the rendering resolution may correspond to the sampling resolution of the corresponding image portion that is rendered.
In particular embodiments, the system may use a full sampling resolution or a reduced sampling resolution for the pixels of one or more color channels within one or more display regions. In particular embodiments, the full sampling resolution may correspond to the full pixel resolution of the display and the reduced sampling resolution may be a sub-sampling resolution reduced from the full sampling resolution by powers of two (i.e., 1/2ⁿ of the full sampling resolution, where n can be any suitable integer). For example, the reduced sampling resolutions could be ½, ¼, ⅛, 1/16, etc., of the full sampling resolution. For a sampling resolution which is 1/n of the full sampling resolution (where n is a power of two), the system may independently determine a grayscale value for each pixel group containing n×n pixels. It is notable that the sampling resolutions at powers of two to each other are for example purposes and the sampling resolutions are not limited thereto. The sampling resolutions of different color channels or/and different display regions may have a relationship at powers of any suitable integer number to each other. In particular embodiments, for a 2D pixel array with a full sampling resolution, the system may independently determine (e.g., using the resampling process of the graphic pipeline) a color value or grayscale value for each pixel of the 2D pixel array. As an example and not by way of limitation, when the green pixel array 320 has a full sampling resolution, the system may independently determine a grayscale value for each of the 16 pixels within the green pixel array 320. In particular embodiments, for a 2D pixel array having a half sampling resolution, the system may independently determine (e.g., using the resampling process of the graphic pipeline) a color value or grayscale value for each 2×2 pixel sub-array (e.g., two pixels along the vertical and two pixels along the horizontal direction) of the 2D pixel array. The system may use a replication or interpolation process (e.g., by the display block of the display engine or by one or more controllers within the display system) to determine the respective grayscale values for the pixels within each pixel sub-array. The replication or interpolation processes may be performed by the display block prior to the brightness correction and dithering processes or by one or more controllers of the display system. The replication and interpolation processes may require fewer computational resources and consume less power than the pipeline processes (e.g., tile/surface pair determining process, resampling process, filtering process, etc.). As a result, the system may reduce the computational resource usage and power consumption by independently computing grayscale values for only a quarter of the pixels in the 2D array. As an example and not by way of limitation, when the red pixel array 330 has a half sampling resolution, the system may independently determine a grayscale value for each 2×2 pixel sub-array, including a first pixel sub-array including pixels [0, 0], [0, 1], [1, 0], and [1, 1], a second pixel sub-array including pixels [0, 2], [0, 3], [1, 2], and [1, 3], a third pixel sub-array including pixels [2, 0], [2, 1], [3, 0], and [3, 1], and a fourth pixel sub-array including pixels [2, 2], [2, 3], [3, 2], and [3, 3]. As a result, the system may only need to compute four independent grayscale values for the 16 pixels in the 2D pixel array.
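A short NumPy sketch of the half-sampling computation described above follows; using the block mean as the one independently computed value per 2×2 sub-array is an illustrative stand-in for the resampling process.

```python
import numpy as np

def subsample_half(full_res):
    # Independently compute one value per 2x2 sub-array (here: the block mean,
    # standing in for one resampled grayscale value per sub-array).
    h, w = full_res.shape
    return full_res.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def replicate_2x2(values):
    # Fill each 2x2 sub-array with its independently computed value.
    return np.repeat(np.repeat(values, 2, axis=0), 2, axis=1)

red_full = np.random.rand(4, 4)        # 16 pixels, like pixel array 330
red_vals = subsample_half(red_full)    # only 4 independent values computed
red_out = replicate_2x2(red_vals)      # 16 display pixels reconstructed
```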
In particular embodiments, for a 2D pixel array having a quarter sampling resolution, the system may independently determine (e.g., using the resampling process of the graphic pipeline) a color value or grayscale value for each 4×4 pixel sub-array (e.g., four pixels along the vertical direction and four pixels along the horizontal direction) of the 2D pixel array. The system may use a replication or interpolation process (e.g., by the display block of the display engine or by one or more controllers within the display system) to determine the respective grayscale values for the pixels within each pixel sub-array. As a result, the system may need to independently compute grayscale values for only 1/16 of the pixels in a 2D array. As an example and not by way of limitation, when the blue pixel array 340 has a quarter sampling resolution, the system may independently determine a grayscale value for the 4×4 pixel group including all 16 pixels in the blue pixel array 340. The grayscale values of the 16 pixels may be determined by replicating the same grayscale value or interpolating multiple grayscale values that are independently determined. As a result, the system may only need to independently compute one grayscale value for the 16 pixels in the 2D array. It is notable that the half and quarter sampling resolutions are for example purposes and the reduced sampling resolutions are not limited thereto. The reduced sampling resolutions of different color channels or/and different display regions may have a relationship to the full sampling resolution and to each other at powers of any suitable integer number (i.e., 1/mⁿ of the full sampling resolution, where m and n can be any suitable integers). In particular embodiments, the system may use a display (e.g., a μLED display) which has different pixel sizes for different color channels. For example, the system may use a μLED display which has red pixels and blue pixels being twice as large (four times the area) as the green pixels. The number of red, green, and blue pixels in a display region may have a ratio of 1:4:1 with each red pixel and each blue pixel corresponding to four green pixels. In particular embodiments, the system may use a display having the same size of pixels for the three color channels and the same number of pixels for each color channel (e.g., the ratio of red, green, and blue pixels being 1:1:1 in a display region). In particular embodiments, the system may use a display having the same size of pixels for the three color channels but different numbers of pixels for different color channels (e.g., the number ratio of red, green, and blue pixels being 1:2:1 in a display region). It is notable that the systems and methods as described in this disclosure are not dependent on the type of display and are applicable to any type of display including, for example, but not limited to, μLED displays, OLED displays, AMOLED displays, LCD displays, LED displays, or any suitable displays with any pixel sizes or/and any ratio of the number of pixels of different color channels. It is notable that the systems and methods as described in this disclosure are not limited to displays with RGB color channels. For example, in particular embodiments, the system may have a display with black and white luminance channels and two other channels for color information, taking advantage of the fact that humans perceive color less precisely than luminance. One or more color channels or luminance channels may be sub-sampled in the horizontal or/and vertical direction.
The systems and methods as described in this disclosure are also applicable to this type of display. FIG. 3C illustrates an example scheme 300C for determining sampling resolutions of different color channels and different display regions. In particular embodiments, the system may determine a number of display regions and a number of gradually lower rendering resolutions for these regions based on their relative positions or/and distances to the gazing point of the user. As discussed in earlier sections of this disclosure, the system may allow different color channels to have different sampling resolutions. The sampling resolutions of different color channels could be independent of or related (e.g., at powers of two) to each other. In particular embodiments, the system may have a display with red and blue pixels having a larger size than green pixels (e.g., twice in size and four times in area). The number of red, green, and blue pixels in a display region may have a ratio of 1:4:1 with each red pixel and each blue pixel corresponding to four green pixels. In particular embodiments, the system may have a display which may have red, green, and blue pixels of the same size but have more green pixels than red or blue pixels (e.g., the number ratio of the RGB pixels could be 1:2:1). The green pixels of the display may contribute a larger percentage of the light intensity (e.g., 70-80%) than the red or blue pixels. In particular embodiments, the system may have a display which may have red, green, and blue pixels of the same size and the same number of green, red, or blue pixels (e.g., the number ratio of the RGB pixels could be 1:1:1). In particular embodiments, the system may use a relatively higher sampling resolution for the Green color channel than for the Red and Blue color channels. However, it is notable that the systems and methods as described in this disclosure are applicable to any type of display with any pixel size and any number of pixels of different color channels. As an example and not by way of limitation, the system may determine a first display region corresponding to (e.g., centered at or containing) the gazing point of the user and determine the second, third, and fourth display regions with each having a successively longer distance to the gazing point. For the first display region corresponding to the foveal region where the gaze of the user is focused, the system may use a full sampling resolution and compute an independent grayscale value for each pixel of each color channel. The full sampling resolution in the first display region may allow the displayed content in this region to have sharp edges and enable a clear view of the display content in this region. For other display regions, the system may use reduced sampling resolutions since the user may not have as much acuity in these regions. As another example, for the second display region, the system may use a full sampling resolution for the Green color channel and use a half sampling resolution for both the Red and Blue color channels. As a result, for all the pixels of the three color channels of this display region, the system may only need to independently compute grayscale values for half of the total number of the pixels in this region (with a corresponding computation reduction rate of ½).
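One hypothetical way to encode the four-region scheme described above, together with the per-region computation reduction it implies (assuming equal pixel counts per channel, as in the 16-pixel arrays of FIG. 3B), is the following sketch.

```python
from fractions import Fraction as F

# Sampling resolution per channel: 1 = full, 1/2 = half, 1/4 = quarter.
SCHEME = {
    "region1": {"R": F(1),    "G": F(1),    "B": F(1)},
    "region2": {"R": F(1, 2), "G": F(1),    "B": F(1, 2)},
    "region3": {"R": F(1, 2), "G": F(1, 2), "B": F(1, 2)},
    "region4": {"R": F(1, 4), "G": F(1, 2), "B": F(1, 4)},
}

def reduction_rate(channels):
    # A sampling resolution of 1/n means one independent value per n x n
    # pixels, i.e. (1/n)**2 of that channel's values; average the channels.
    return sum(r ** 2 for r in channels.values()) / 3

for name, channels in SCHEME.items():
    print(name, reduction_rate(channels))   # -> 1, 1/2, 1/4, 1/8
```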
For example, referring to the pixel arrays 320, 330, and 340 in FIG. 3B, when the Green color channel has a full sampling resolution and the Red and Blue color channels have a half sampling resolution, the number of grayscale values that need to be independently computed may be 16, 4, and 4 for the color channels of Green, Red, and Blue, respectively, with a total number of 24 independent grayscale values, which is ½ of all 48 pixels of the three pixel arrays 320, 330, and 340 in total. As yet another example, for the third display region, the system may use a half sampling resolution for all three color channels of Green, Red, and Blue. As a result, for all the pixels in all three color channels in this region, the system may only need to independently compute grayscale values for ¼ of the total number of pixels of this region (with a corresponding computation reduction rate of ¼). For example, referring to the pixel arrays 320, 330, and 340 in FIG. 3B, when all three color channels have a half sampling resolution, the number of grayscale values that need to be independently computed could be 4, 4, and 4 for the color channels of Green, Red, and Blue, respectively. The total number of independent grayscale values could be 12, which is ¼ of all 48 pixels in the three pixel arrays 320, 330, and 340. As yet another example, for the fourth display region, the system may use a half sampling resolution for the Green color channel and may use a quarter sampling resolution for the color channels of Red and Blue. As a result, for all the pixels in all three color channels of this region, the system may only need to independently compute grayscale values for ⅛ of the total number of pixels in this region (with a corresponding computation reduction rate of ⅛). For example, referring to the pixel arrays 320, 330, and 340 in FIG. 3B, when the Green color channel has a half sampling resolution and the Blue and Red color channels have a quarter sampling resolution, the number of grayscale values that need to be independently computed could be 4, 1, and 1 for the color channels of Green, Red, and Blue, respectively. The total number of independent grayscale values could be 6, which is ⅛ of all 48 pixels in the three pixel arrays 320, 330, and 340. As a result, the system may reduce the amount of computation for generating the image regions with reduced sampling resolutions. It is notable that the sampling resolution scheme as described here is for example purposes and the sampling resolution determination is not limited thereto. For example, the system may use any suitable combination of any sampling resolutions for the three color channels. It is notable that the relationship of different sampling resolutions of different color channels is not limited to powers of two to each other. It could be powers of any suitable integer number. In particular embodiments, the system may determine a number of display regions of particular sizes for rendering display content with different rendering resolutions. As an example and not by way of limitation, referring to FIG. 3A, the system may determine a first display region 312 which is a quarter of the full display size 310 (¼ width and ¼ height of the full display) and covers 1/16 of the full display area. The system may determine a second display region 313 excluding the first display region 312.
The second display region may correspond to a rectangular shape (excluding the first display region 312) which is half of the full display size (½ width and ½ height of the full display, covering ¼ of the full display area). The system may determine a third display region 314 including the remaining area of the full display area and covering ¾ of the full display area. In particular embodiments, the full display area may have a resolution of 2560 pixels by 1792 pixels. The first display region 312 may be approximately centered at the gazing point 311 of the user (e.g., the center of the first display region being within a threshold distance to the gazing point 311 of the user). The edge positions of the display regions may be constrained to some alignment (e.g., 16 pixels) to simplify the implementation. FIG. 3D illustrates an example scheme 300D using different sampling resolutions for different color channels and different display regions to reduce the amount of computation. In particular embodiments, the first display region may have all three color channels at the full sampling resolution. The system may independently compute a grayscale value for each pixel of each color channel in this display region. The second display region may use a full sampling resolution for the Green color channel and use a half sampling resolution for the Red and Blue color channels. As described in earlier sections of this disclosure, the system may only need to independently compute grayscale values for ½ of the total number of the pixels in this region (with a computation reduction rate of ½). Because the second display region covers 3/16 of the full display area, the system may only need to independently compute grayscale values for 3/32 of the total pixels of the display for the second display region (e.g., a computation reduction from 3/16 to 3/32 of the total pixels of the display as contributed by the second display region). The third display region may use a half sampling resolution for all three color channels. As described in earlier sections of this disclosure, the system may only need to independently compute grayscale values for ¼ of the total number of the pixels in this region (with a computation reduction rate of ¼). Because the third display region covers ¾ of the full display area, the system may only need to independently compute grayscale values for 3/16 of the total pixels of the display for the third display region (e.g., a computation reduction from ¾ to 3/16 of the total pixels of the display as contributed by the third display region). As a result, the system may have a computation reduction rate of 11/32 (i.e., 1/16 + 3/32 + 3/16) or an approximately 3:1 reduction in the computation for pixel processing by using this example scheme. Consequently, the system may only need approximately ⅓ of the bus bandwidth for transmitting the display data to the display backplane. In particular embodiments, the system may further reduce the computation for determining pixel values by using a smaller region for the first display region (which may have relatively higher sampling resolutions such as the full sampling resolution) or/and by using further lower sampling resolutions in other display regions. As an example and not by way of limitation, the system may use a first display region with ⅕ width and ⅕ height of the full display size (i.e., 1/25 of the full display area).
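The 11/32 figure above can be checked with a short fragment using exact fractions; the region areas and per-region reduction rates are those of the example scheme.

```python
from fractions import Fraction as F

regions = [
    # (fraction of full display area, computation reduction rate in region)
    (F(1, 16), F(1)),      # first region: full resolution, 1/16 of area
    (F(3, 16), F(1, 2)),   # second region: G full, R/B half
    (F(3, 4),  F(1, 4)),   # third region: all channels half
]

total = sum(area * rate for area, rate in regions)
print(total)               # -> 11/32, i.e. roughly a 3:1 reduction
```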
As another example, the system may use a first display region with ⅙ width and ⅙ height of the full display size (i.e., 1/36 of the full display area). As another example, the system may use a first display region with ⅛ width and ⅛ height of the full display size (i.e., 1/64 of the full display area). It is notable that the sizes of the first display region as described here are for example purposes; the size of the first display region is not limited thereto and can be any suitable size. In particular embodiments, the system may further reduce the amount of computation for determining pixel values by using further lower sampling resolutions in one or more display regions. As an example and not by way of limitation, in the third display region, the system may use a half resolution for the Green color channel and use a quarter resolution for the Red and Blue color channels. For the third display region, the system may then need to independently compute grayscale values only for 3/32 (i.e., ⅛ × ¾) of the total pixels of the display. With the other regions having the same sizes and sampling resolutions as shown in FIG. 3D, the system may have a total computation reduction rate of ¼ (i.e., 1/16 + 3/32 + 3/32 = 8/32 = ¼) for all the pixels of the display. In particular embodiments, the system may reduce the worst-case scanline bandwidth (or peak bandwidth per row) on the data bus by using the foveated rendering with reduced rendering resolutions in one or more display regions. For example, for the display regions as illustrated in FIG. 3A using the scheme shown in FIG. 3D, the first display region may have all three color channels with full sampling resolutions and therefore have full pixel density. The second display region may have a full sampling resolution for the Green color channel and a half sampling resolution for the Red and Blue color channels, and therefore have ½ pixel density. The third display region may have a half sampling resolution for all three color channels, and therefore have ¼ pixel density. Any row that intersects the first display region may have ¼ of its pixels falling in the first display region with full pixel density, ¼ of its pixels falling in the second display region with ½ pixel density, and ½ of its pixels falling in the third display region with ¼ pixel density. As a result, the worst-case scanline bandwidth for the row intersecting all three display regions may have a reduction ratio of ½ (or a 2:1 compression rate), as determined by ½ = ¼×1 + ¼×½ + ½×¼ (i.e., the sum over the regions of the pixel density times the percentage of the row's pixels in that region), compared with full-resolution rendering for the whole display. Consequently, the peak bandwidth per row on the data bus may have the same reduction ratio of 2:1 for this example. In particular embodiments, the system may generate a series of foveated subframe images with variable resolutions in different image regions for foveated rendering. Compared to generating full-resolution subframe images, generating foveated subframe images may use fewer computational resources and consume less power. For example, for the image regions with lower resolutions, the system may cast fewer rays during the tile/surface pair determining process by the transform block, and therefore reduce the power consumption and computational resource usage.
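Similarly, the worst-case scanline reduction of ½ can be verified as follows for a row crossing all three regions.

```python
from fractions import Fraction as F

row_segments = [
    # (fraction of the row's pixels, pixel density in that segment)
    (F(1, 4), F(1)),       # through the first region: full pixel density
    (F(1, 4), F(1, 2)),    # through the second region: 1/2 pixel density
    (F(1, 2), F(1, 4)),    # through the third region: 1/4 pixel density
]

print(sum(frac * density for frac, density in row_segments))  # -> 1/2 (2:1)
```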
As another example, for the image regions with lower resolutions, the graphic pipeline may compute for a smaller number of pixels during the resampling process by the pixel block, and therefore reduce the power consumption and computational resource usage. As another example, for the foveated subframe images, the system may process a larger display area (e.g., a larger number of pixels or pixel tiles) using the same resources during the blending, correction, and serialization processes by the display block, and therefore reduce the power consumption and computational resource usage. As another example, for generating image regions with reduced sampling resolutions, the system may access source data with lower resolutions (e.g., MIP map texture data with lower resolutions or larger sizes) and therefore perform less memory reading for accessing texture data. Furthermore, by using the foveated subframe images, the system may need less data transmission bandwidth for transmitting the pixel values to the display through the data bus (which may have limited bandwidth). FIGS. 4A-4D illustrate an example framework allowing the system to process a larger number of pixel tiles using the same computational resources by reducing sampling resolutions in one or more image regions. As an example and not by way of limitation, the system may need to process the pixels in a tile array 400A including 4×4 pixel tiles (e.g., tile 411, tile 412, tile 413, tile 414, tile 421, tile 422, tile 423, tile 424, tile 431, tile 432, tile 433, tile 434, tile 441, tile 442, tile 443, tile 444). Each tile may include an array of pixels of any suitable size (i.e., an n×n pixel array where n could be any integer number), for example, a 4×4 pixel array, a 16×16 pixel array, a 64×64 pixel array, etc. In particular embodiments, each pixel tile may correspond to one single pixel including three sub-chroma pixels of the Red, Green, and Blue color channels. For this example, the sub-chroma pixels of the Red, Green, and Blue color channels may have a number ratio of 1:1:1 or/and the same pixel size, and each tile (e.g., tile 411, tile 412, tile 421, tile 422) may include a 16×16 pixel array (e.g., a combination of the three pixel arrays 320, 330, 340 of the three color channels as illustrated in FIG. 3B). In particular embodiments, for generating full-resolution image regions (e.g., for the region corresponding to the user's foveal region or gazing point), the system may use a group of computational resources (e.g., memories, buffers, caches, computational units, filters, data buses with certain bandwidths, or any image processing resources) as represented by the blocks 410, 420, and 430 of the scheme 400B of FIG. 4B to process the pixel tiles in the pixel tile array 400A. In particular embodiments, each block may be dedicated to the pixel tiles of a color channel and process four pixel tiles (of that color channel) simultaneously. For example, the block 410 may be dedicated to the Green color channel and simultaneously process four green tiles 411G, 412G, 421G, and 422G. Similarly, the block 420 may be dedicated to the Red color channel and simultaneously process four red tiles 411R, 412R, 421R, and 422R. The block 430 may be dedicated to the Blue color channel and simultaneously process four blue tiles 411B, 412B, 421B, and 422B. As a result, the group of computation resources as represented by the three blocks 410, 420, and 430 may simultaneously process four pixel tiles covering a portion of the image (e.g., the image area 401 in FIG. 4A).
Correspondingly, the system may access memory through the respective data buses (e.g., data buses 491, 492, and 493) to retrieve the amount of data, with particular bandwidth, as determined by the simultaneously processed pixel tiles on each block. In particular embodiments, for generating image regions with reduced sampling resolutions (e.g., for the regions beyond the user's foveal region), the system may use the same group of computational resources (e.g., memories, buffers, caches, computational units, filters, data buses with certain bandwidths, or any image processing resources) as represented by the block 410, block 420, and block 430 to simultaneously process a larger image area. For example, for an image region with a full sampling resolution for the Green color channel and a half sampling resolution for the Red and Blue color channels, the system may need to independently compute a grayscale value for each pixel of the Red or Blue color channel for every four pixels of the Green color channel. In other words, the sub-chroma pixels of the Red, Green, and Blue color channels may have a number ratio of 1:4:1. The system may optimize the usage of the resources for processing the pixel tiles as illustrated in FIG. 4D. For example, the blocks 410 and 420 may be dedicated to the Green color channel and simultaneously process eight green tiles 411G, 412G, 413G, 414G, 421G, 422G, 423G, and 424G. The block 430 may be used to process the Red and Blue color channels and simultaneously process two red tiles 411R and 413R and two blue tiles 411B and 413B. As a result, the group of computation resources as represented by the three blocks 410, 420, and 430 of the scheme 400D in FIG. 4D may simultaneously process an image area 402 (in the tile array 400C in FIG. 4C) which is twice as large as the image area 401 on the tile array 400A as shown in FIG. 4A. Correspondingly, for generating or processing an image area of the same size, the system may need less bandwidth from the respective data buses (e.g., data buses 491, 492, and 493) and retrieve a smaller amount of data during the image generating or processing processes. Therefore, the system may process a larger image area (e.g., a larger number of pixels or pixel tiles) in a given clock cycle using the same amount of computational resources as for processing a full-resolution image because of the reduced memory reading and data processing, and therefore improve the efficiency of the system performance. It is notable that the framework as illustrated in FIGS. 4A-4D is not limited to any specific computational resources. For example, the blocks 410, 420, and 430 may represent computation units, memories, filters, buffers, caches, data bus bandwidth, or any suitable resources. As another example, the arrangement of the pixel tiles and the corresponding processing blocks may be an arbitrary arrangement allowing the same block size to host or/and process more pixel tiles (e.g., twice as many pixels before sub-sampling). By using the foveated rendering with reduced resolutions in one or more image regions, the system may process a larger image area using the same amount of computational resources, and therefore consume less power and time for the image rendering processes. Furthermore, when generating or processing image regions with reduced sampling resolutions, the system may use lower-resolution texture data and have fewer texture memory reading operations, and therefore further reduce the power consumption.
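The doubling of processed image area can be expressed with a small sketch; the 12 tile-slots per cycle follow from the example's three blocks of four tiles each, while the accounting function itself is an illustrative assumption.

```python
def tiles_covered_per_cycle(g_ratio, r_ratio, b_ratio, slots=12):
    # Tile-slots consumed per tile of image area: one green tile per area tile,
    # plus the (possibly fractional) red and blue tiles it implies.
    slots_per_area_tile = g_ratio + r_ratio + b_ratio
    return slots / slots_per_area_tile

print(tiles_covered_per_cycle(1, 1, 1))        # full resolution -> 4.0 (area 401)
print(tiles_covered_per_cycle(1, 0.25, 0.25))  # G full, R/B half -> 8.0 (area 402)
```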
For example, considering a region with a full resolution for the Green color channel and a half resolution for Red and Blue, the system may access full-resolution source data for green and compute a separate value for each pixel, which would be sent to the display backplane. However, for red and blue, the system may access one-quarter-resolution source data and compute one value for each 2×2 block of pixels. These values would each be sent to the display backplane, which may replicate them into 2×2 display pixels. As a result, this method reduces not only the memory storage, memory access, and processing time required for the lower-resolution sub-chroma pixels, but also the bandwidth on the data bus from the display block to the backplane that drives the display. FIGS. 5A-5C illustrate an example replication process and interpolation process for determining grayscale values for the pixels within a pixel sub-array. FIG. 5A illustrates an example pixel array having a half sampling resolution. The system may independently compute a grayscale value for each 2×2 pixel sub-array in the pixel array 510, as shown in FIG. 5A, where the solid dots represent pixels that have independently computed grayscale values and the empty dots represent the pixels whose grayscale values are not independently computed (and will be determined by the replication or interpolation process). For example, the system may independently compute the grayscale values for pixels [0, 0], [0, 2], [2, 0], and [2, 2] for the respective 2×2 pixel sub-arrays. Then, the system may use a replication process or an interpolation process to determine the pixel value for each pixel within the respective 2×2 pixel sub-arrays. FIG. 5B illustrates an example replication process for determining grayscale values for all pixels within the sub-array 511. The system may replicate the grayscale value of the pixel [0, 0] and use the same value for the pixels [0, 1], [1, 0], and [1, 1]. FIG. 5C illustrates an example interpolation process for determining grayscale values for all pixels within the sub-array 515. For example, the system may determine the grayscale values of the pixels [1, 0], [0, 1], [1, 2], and [2, 1] by interpolating the grayscale values of the pixel pairs [0, 0] and [2, 0], [0, 0] and [0, 2], [0, 2] and [2, 2], and [2, 0] and [2, 2], respectively. As another example, the system may determine the grayscale value for the pixel [1, 1] by interpolating any suitable combination of the pixels [0, 0], [0, 2], [2, 0], and [2, 2]. It is notable that the pixels in the pixel array 510 could be pixels of any color channel. It is notable that the replication process and interpolation process as illustrated here are for example purposes and the replication process and interpolation process are not limited thereto. For example, the replication process may be used to determine grayscale values for the pixels of any sub-arrays of any sizes. As another example, the interpolation process may be based on any number of related pixels (not limited to two pixels). As another example, the interpolation process may be based on a weighted averaging computation to preserve the average brightness of one or more portions of the images. As a result, the system may selectively determine and render the duplicated regions of the image based on the independently computed grayscale values from the foveation perspective and reduce the amount of computation related to the rendering process.
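The two fill strategies of FIGS. 5B and 5C might be sketched as follows in NumPy; the 2×2 array of independently computed values is hypothetical.

```python
import numpy as np

vals = np.array([[0.2, 0.8],
                 [0.4, 0.6]])       # independently computed values (solid dots)

# Replication (as in FIG. 5B): copy each value into its 2x2 sub-array.
replicated = np.repeat(np.repeat(vals, 2, axis=0), 2, axis=1)

# Interpolation (as in FIG. 5C): average neighboring independent values to
# fill the pixels between them.
def interp_pair(a, b):
    return (a + b) / 2

mid_down = interp_pair(vals[0, 0], vals[1, 0])   # between pixels [0,0] and [2,0]
mid_right = interp_pair(vals[0, 0], vals[0, 1])  # between pixels [0,0] and [0,2]
center = vals.mean()                             # e.g., pixel [1,1] from all four
```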
In particular embodiments, the replication process or/and interpolation process may be performed by the display block of the display engine before the brightness correction and dithering processes. In particular embodiments, the replication process or/and interpolation process may be performed by the display system (e.g., by one or more controllers of the display) after the pixel data is transmitted to the display. For example, the system may send the pixel values with reduced sampling resolutions to the display system (which requires less data bus bandwidth for transmitting) and send the location information (e.g., the sub-arrays or pixel locations that the respective physical pixels are associated with) for mapping the corresponding pixel values to the display (or mapping the pixel density per tile being transmitted to physical pixel arrays) in the same data stream as the pixel value data or in a separate data stream (e.g., a side channel). The system may include a display system (e.g., an LED display or μLED display) which may include driver logic or controllers for performing the replication and interpolation operations. By using the replication process or/and interpolation process, the system may extend the computational resource saving and power saving due to the foveated rendering to the downstream rendering pipeline. In particular embodiments, the system may preserve the contrast or/and average brightness in one or more portions of the foveated image. In particular embodiments, the system may pre-process the source data (e.g., MIP map texture data) and access the source data at successively lower resolutions for generating the foveated image to preserve the contrast or/and average brightness. In particular embodiments, the system may apply a sharpness filter to the computed pixel values to preserve the contrast or/and average brightness before sending the pixel values to the display backplane. In this case, the pixels with a full sampling resolution may not need to be filtered and only the pixels with reduced sampling resolutions may need to be filtered. For example, for the display regions as shown in FIG. 3A and the sampling resolution scheme as shown in FIG. 3D, the pixels in the first display region and the pixels of the Green color channel of the second display region may not need to be filtered since they have a full sampling resolution. The pixels of the Red and Blue color channels of the second display region and all pixels of the third display region may need to be filtered to preserve the contrast and the average brightness of the corresponding pixels.
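A sharpness filter of the kind described above could be sketched as an unsharp mask applied only to the reduced-resolution pixels; the 3×3 box blur and the gain are illustrative assumptions, not the disclosure's specific filter.

```python
import numpy as np

def unsharp_mask(img, amount=0.5):
    # Blur with a 3x3 box filter, then add back the high-frequency detail to
    # restore edge contrast lost to sub-sampling and replication.
    padded = np.pad(img, 1, mode="edge")
    h, w = img.shape
    blurred = sum(padded[dy:dy + h, dx:dx + w]
                  for dy in range(3) for dx in range(3)) / 9.0
    return np.clip(img + amount * (img - blurred), 0.0, 1.0)

frame = np.random.rand(8, 8)
frame[:, 4:] = unsharp_mask(frame[:, 4:])  # filter only the reduced-resolution part
```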
FIG. 6 illustrates an example method 600 for foveated rendering. The method 600 may begin at step 610, where the computing system may access a first rendered frame (e.g., a mainframe image) generated at a first frame rate (e.g., a mainframe rate of 30-90 Hz). At step 620, the system may generate, based on the first rendered frame, subframes at a second frame rate higher than the first frame rate. A first subframe of the subframes may be generated by: determining a viewing direction of the user based on sensor data; determining, based on the viewing direction, at least a first viewing region encompassing a foveal focus point (e.g., the gazing point) of the user and a second viewing region excluding the first viewing region; and determining, for the first subframe, color values corresponding to the first viewing region using a first sampling resolution and color values corresponding to the second viewing region using a second sampling resolution lower than the first sampling resolution. At step 630, the system may output the subframes for display at the second frame rate. In particular embodiments, the system may generate the first subframe using a graphic pipeline including at least a transform block and a pixel block. The system may use the transform block to determine a number of tile-surface pairs by casting a number of rays to a number of surfaces for determining intersections between the tiles and the surfaces. The system may use the pixel block to determine the color values corresponding to the first and second viewing regions based on the plurality of tile-surface pairs. In particular embodiments, the transform block may cast fewer rays for determining the color values corresponding to the second viewing region (associated with the second sampling resolution) than the color values corresponding to the first viewing region (associated with the first sampling resolution). In particular embodiments, the system may determine, by the pixel block, the color values corresponding to the first viewing region by sampling a first set of surfaces using the first sampling resolution. The system may determine, by the pixel block, the color values corresponding to the second viewing region by sampling a second set of surfaces using the second sampling resolution. The pixel block may perform a smaller amount of computation for determining the color values corresponding to the second viewing region than the color values corresponding to the first viewing region. In particular embodiments, a first color channel of a group of pixels corresponding to the second viewing region may be associated with the second sampling resolution. A second color channel of the group of pixels corresponding to the second viewing region may be associated with a third sampling resolution different from the second sampling resolution. In particular embodiments, the system may independently determine a grayscale value for each n×n pixel array of the first color channel of the group of pixels corresponding to the second viewing region, where the value of n may be determined based on the second sampling resolution. In particular embodiments, the system may independently determine a grayscale value for each m×m pixel array of a second color channel of the group of pixels corresponding to the second viewing region, where the value of m may be determined based on the third sampling resolution associated with the second color channel. In particular embodiments, the second sampling resolution of the first color channel and the third sampling resolution of the second color channel may have a relationship of powers of two. In particular embodiments, the system may determine a grayscale value for each pixel within the n×n pixel array based on a replication process which may be performed by a display system.
In particular embodiments, the system may determine a grayscale value for each pixel within the n×n pixel array based on an interpolation process performed by a display block of the graphic pipeline prior to a brightness correction process and a dithering process. In particular embodiments, the system may determine a third viewing region excluding the first viewing region and the second viewing region. The respective color values of the first viewing region, the second viewing region, and the third viewing region may be determined based on gradually lower sampling resolutions. In particular embodiments, the system may generate the first subframe based on source data. The system may pre-process the source data at a successively lower resolution for generating the first subframe and access the source data at the successively lower resolution while generating the first subframe. In particular embodiments, the system may apply a sharpness filter to a number of pixels corresponding to the second viewing region to preserve a contrast level on one or more edges associated with one or more objects in the second viewing region. In particular embodiments, the system may apply a sharpness filter to a number of pixels of the first subframe in the second viewing region to preserve an average brightness in the second viewing region. In particular embodiments, the first frame rate may be within a first range of 30-90 Hz and the second frame rate may be within a second range of 1-2 kHz. Particular embodiments may repeat one or more steps of the method of FIG. 6, where appropriate. Although this disclosure describes and illustrates particular steps of the method of FIG. 6 as occurring in a particular order, this disclosure contemplates any suitable steps of the method of FIG. 6 occurring in any suitable order. Moreover, although this disclosure describes and illustrates an example method for foveated rendering including the particular steps of the method of FIG. 6, this disclosure contemplates any suitable method for foveated rendering including any suitable steps, which may include all, some, or none of the steps of the method of FIG. 6, where appropriate. Furthermore, although this disclosure describes and illustrates particular components, devices, or systems carrying out particular steps of the method of FIG. 6, this disclosure contemplates any suitable combination of any suitable components, devices, or systems carrying out any suitable steps of the method of FIG. 6. FIG. 7 illustrates an example computer system 700. In particular embodiments, one or more computer systems 700 perform one or more steps of one or more methods described or illustrated herein. In particular embodiments, one or more computer systems 700 provide functionality described or illustrated herein. In particular embodiments, software running on one or more computer systems 700 performs one or more steps of one or more methods described or illustrated herein or provides functionality described or illustrated herein. Particular embodiments include one or more portions of one or more computer systems 700. Herein, reference to a computer system may encompass a computing device, and vice versa, where appropriate. Moreover, reference to a computer system may encompass one or more computer systems, where appropriate. This disclosure contemplates any suitable number of computer systems 700. This disclosure contemplates computer system 700 taking any suitable physical form.
As an example and not by way of limitation, computer system 700 may be an embedded computer system, a system-on-chip (SOC), a single-board computer system (SBC) (such as, for example, a computer-on-module (COM) or system-on-module (SOM)), a desktop computer system, a laptop or notebook computer system, an interactive kiosk, a mainframe, a mesh of computer systems, a mobile telephone, a personal digital assistant (PDA), a server, a tablet computer system, an augmented/virtual reality device, or a combination of two or more of these. Where appropriate, computer system 700 may include one or more computer systems 700; be unitary or distributed; span multiple locations; span multiple machines; span multiple data centers; or reside in a cloud, which may include one or more cloud components in one or more networks. Where appropriate, one or more computer systems 700 may perform without substantial spatial or temporal limitation one or more steps of one or more methods described or illustrated herein. As an example and not by way of limitation, one or more computer systems 700 may perform in real time or in batch mode one or more steps of one or more methods described or illustrated herein. One or more computer systems 700 may perform at different times or at different locations one or more steps of one or more methods described or illustrated herein, where appropriate. In particular embodiments, computer system 700 includes a processor 702, memory 704, storage 706, an input/output (I/O) interface 708, a communication interface 710, and a bus 712. Although this disclosure describes and illustrates a particular computer system having a particular number of particular components in a particular arrangement, this disclosure contemplates any suitable computer system having any suitable number of any suitable components in any suitable arrangement. In particular embodiments, processor 702 includes hardware for executing instructions, such as those making up a computer program. As an example and not by way of limitation, to execute instructions, processor 702 may retrieve (or fetch) the instructions from an internal register, an internal cache, memory 704, or storage 706; decode and execute them; and then write one or more results to an internal register, an internal cache, memory 704, or storage 706. In particular embodiments, processor 702 may include one or more internal caches for data, instructions, or addresses. This disclosure contemplates processor 702 including any suitable number of any suitable internal caches, where appropriate. As an example and not by way of limitation, processor 702 may include one or more instruction caches, one or more data caches, and one or more translation lookaside buffers (TLBs). Instructions in the instruction caches may be copies of instructions in memory 704 or storage 706, and the instruction caches may speed up retrieval of those instructions by processor 702. Data in the data caches may be copies of data in memory 704 or storage 706 for instructions executing at processor 702 to operate on; the results of previous instructions executed at processor 702 for access by subsequent instructions executing at processor 702 or for writing to memory 704 or storage 706; or other suitable data. The data caches may speed up read or write operations by processor 702. The TLBs may speed up virtual-address translation for processor 702. In particular embodiments, processor 702 may include one or more internal registers for data, instructions, or addresses.
In particular embodiments, processor 702 may include one or more internal registers for data, instructions, or addresses. This disclosure contemplates processor 702 including any suitable number of any suitable internal registers, where appropriate. Where appropriate, processor 702 may include one or more arithmetic logic units (ALUs); be a multi-core processor; or include one or more processors 702. Although this disclosure describes and illustrates a particular processor, this disclosure contemplates any suitable processor.

In particular embodiments, memory 704 includes main memory for storing instructions for processor 702 to execute or data for processor 702 to operate on. As an example and not by way of limitation, computer system 700 may load instructions from storage 706 or another source (such as, for example, another computer system 700) to memory 704. Processor 702 may then load the instructions from memory 704 to an internal register or internal cache. To execute the instructions, processor 702 may retrieve the instructions from the internal register or internal cache and decode them. During or after execution of the instructions, processor 702 may write one or more results (which may be intermediate or final results) to the internal register or internal cache. Processor 702 may then write one or more of those results to memory 704. In particular embodiments, processor 702 executes only instructions in one or more internal registers or internal caches or in memory 704 (as opposed to storage 706 or elsewhere) and operates only on data in one or more internal registers or internal caches or in memory 704 (as opposed to storage 706 or elsewhere). One or more memory buses (which may each include an address bus and a data bus) may couple processor 702 to memory 704. Bus 712 may include one or more memory buses, as described below. In particular embodiments, one or more memory management units (MMUs) reside between processor 702 and memory 704 and facilitate accesses to memory 704 requested by processor 702. In particular embodiments, memory 704 includes random access memory (RAM). This RAM may be volatile memory, where appropriate. Where appropriate, this RAM may be dynamic RAM (DRAM) or static RAM (SRAM). Moreover, where appropriate, this RAM may be single-ported or multi-ported RAM. This disclosure contemplates any suitable RAM. Memory 704 may include one or more memories 704, where appropriate. Although this disclosure describes and illustrates particular memory, this disclosure contemplates any suitable memory.

In particular embodiments, storage 706 includes mass storage for data or instructions. As an example and not by way of limitation, storage 706 may include a hard disk drive (HDD), a floppy disk drive, flash memory, an optical disc, a magneto-optical disc, magnetic tape, or a Universal Serial Bus (USB) drive or a combination of two or more of these. Storage 706 may include removable or non-removable (or fixed) media, where appropriate. Storage 706 may be internal or external to computer system 700, where appropriate. In particular embodiments, storage 706 is non-volatile, solid-state memory. In particular embodiments, storage 706 includes read-only memory (ROM). Where appropriate, this ROM may be mask-programmed ROM, programmable ROM (PROM), erasable PROM (EPROM), electrically erasable PROM (EEPROM), electrically alterable ROM (EAROM), or flash memory or a combination of two or more of these. This disclosure contemplates mass storage 706 taking any suitable physical form. Storage 706 may include one or more storage control units facilitating communication between processor 702 and storage 706, where appropriate.
Where appropriate, storage 706 may include one or more storages 706. Although this disclosure describes and illustrates particular storage, this disclosure contemplates any suitable storage.

In particular embodiments, I/O interface 708 includes hardware, software, or both, providing one or more interfaces for communication between computer system 700 and one or more I/O devices. Computer system 700 may include one or more of these I/O devices, where appropriate. One or more of these I/O devices may enable communication between a person and computer system 700. As an example and not by way of limitation, an I/O device may include a keyboard, keypad, microphone, monitor, mouse, printer, scanner, speaker, still camera, stylus, tablet, touch screen, trackball, video camera, another suitable I/O device, or a combination of two or more of these. An I/O device may include one or more sensors. This disclosure contemplates any suitable I/O devices and any suitable I/O interfaces 708 for them. Where appropriate, I/O interface 708 may include one or more device or software drivers enabling processor 702 to drive one or more of these I/O devices. I/O interface 708 may include one or more I/O interfaces 708, where appropriate. Although this disclosure describes and illustrates a particular I/O interface, this disclosure contemplates any suitable I/O interface.

In particular embodiments, communication interface 710 includes hardware, software, or both providing one or more interfaces for communication (such as, for example, packet-based communication) between computer system 700 and one or more other computer systems 700 or one or more networks. As an example and not by way of limitation, communication interface 710 may include a network interface controller (NIC) or network adapter for communicating with an Ethernet or other wire-based network or a wireless NIC (WNIC) or wireless adapter for communicating with a wireless network, such as a WI-FI network. This disclosure contemplates any suitable network and any suitable communication interface 710 for it. As an example and not by way of limitation, computer system 700 may communicate with an ad hoc network, a personal area network (PAN), a local area network (LAN), a wide area network (WAN), a metropolitan area network (MAN), or one or more portions of the Internet or a combination of two or more of these. One or more portions of one or more of these networks may be wired or wireless. As an example, computer system 700 may communicate with a wireless PAN (WPAN) (such as, for example, a BLUETOOTH WPAN), a WI-FI network, a WI-MAX network, a cellular telephone network (such as, for example, a Global System for Mobile Communications (GSM) network), or other suitable wireless network or a combination of two or more of these. Computer system 700 may include any suitable communication interface 710 for any of these networks, where appropriate. Communication interface 710 may include one or more communication interfaces 710, where appropriate. Although this disclosure describes and illustrates a particular communication interface, this disclosure contemplates any suitable communication interface.

In particular embodiments, bus 712 includes hardware, software, or both coupling components of computer system 700 to each other.
As an example and not by way of limitation, bus 712 may include an Accelerated Graphics Port (AGP) or other graphics bus, an Enhanced Industry Standard Architecture (EISA) bus, a front-side bus (FSB), a HYPERTRANSPORT (HT) interconnect, an Industry Standard Architecture (ISA) bus, an INFINIBAND interconnect, a low-pin-count (LPC) bus, a memory bus, a Micro Channel Architecture (MCA) bus, a Peripheral Component Interconnect (PCI) bus, a PCI-Express (PCIe) bus, a serial advanced technology attachment (SATA) bus, a Video Electronics Standards Association local (VLB) bus, or another suitable bus or a combination of two or more of these. Bus 712 may include one or more buses 712, where appropriate. Although this disclosure describes and illustrates a particular bus, this disclosure contemplates any suitable bus or interconnect.

Herein, a computer-readable non-transitory storage medium or media may include one or more semiconductor-based or other integrated circuits (ICs) (such as, for example, field-programmable gate arrays (FPGAs) or application-specific ICs (ASICs)), hard disk drives (HDDs), hybrid hard drives (HHDs), optical discs, optical disc drives (ODDs), magneto-optical discs, magneto-optical drives, floppy diskettes, floppy disk drives (FDDs), magnetic tapes, solid-state drives (SSDs), RAM-drives, SECURE DIGITAL cards or drives, any other suitable computer-readable non-transitory storage media, or any suitable combination of two or more of these, where appropriate. A computer-readable non-transitory storage medium may be volatile, non-volatile, or a combination of volatile and non-volatile, where appropriate.

Herein, “or” is inclusive and not exclusive, unless expressly indicated otherwise or indicated otherwise by context. Therefore, herein, “A or B” means “A, B, or both,” unless expressly indicated otherwise or indicated otherwise by context. Moreover, “and” is both joint and several, unless expressly indicated otherwise or indicated otherwise by context. Therefore, herein, “A and B” means “A and B, jointly or severally,” unless expressly indicated otherwise or indicated otherwise by context.

The scope of this disclosure encompasses all changes, substitutions, variations, alterations, and modifications to the example embodiments described or illustrated herein that a person having ordinary skill in the art would comprehend. The scope of this disclosure is not limited to the example embodiments described or illustrated herein. Moreover, although this disclosure describes and illustrates respective embodiments herein as including particular components, elements, features, functions, operations, or steps, any of these embodiments may include any combination or permutation of any of the components, elements, features, functions, operations, or steps described or illustrated anywhere herein that a person having ordinary skill in the art would comprehend. Furthermore, reference in the appended claims to an apparatus or system or a component of an apparatus or system being adapted to, arranged to, capable of, configured to, enabled to, operable to, or operative to perform a particular function encompasses that apparatus, system, or component, whether or not it or that particular function is activated, turned on, or unlocked, as long as that apparatus, system, or component is so adapted, arranged, capable, configured, enabled, operable, or operative.
Additionally, although this disclosure describes or illustrates particular embodiments as providing particular advantages, particular embodiments may provide none, some, or all of these advantages.